Step 1: Link the HBase JARs

1. Add the HBase JARs to hadoop-env.sh

Edit hadoop-env.sh under /opt/modules/hadoop-2.5.0-cdh5.3.6/etc/hadoop and append the following line:

  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/modules/hbase-0.98.6-cdh5.3.6/lib/*

2. Set the environment variables (temporary or permanent)

Append the following variables to /opt/modules/hbase-0.98.6-cdh5.3.6/conf/hbase-env.sh to link the JARs permanently. If you only need them temporarily, run the same exports in the current terminal session instead:

  export HBASE_HOME=/opt/modules/hbase-0.98.6-cdh5.3.6
  export HADOOP_CLASSPATH=$HBASE_HOME/lib/*:$CLASSPATH
  export HBASE_CLASSPATH=$HBASE_CLASSPATH:`$HBASE_HOME/bin/hbase classpath`

To verify the configuration, run the following command from the HBase root directory. If the variables are set correctly, it prints the list of available example programs:

/opt/modules/hadoop-2.5.0-cdh5.3.6/bin/yarn jar lib/hbase-server-0.98.6-cdh5.3.6.jar

  [liupeng@www hbase-0.98.6-cdh5.3.6]$ /opt/modules/hadoop-2.5.0-cdh5.3.6/bin/yarn jar lib/hbase-server-0.98.6-cdh5.3.6.jar
  An example program must be given as the first argument.
  Valid program names are:
  CellCounter: Count cells in HBase table
  completebulkload: Complete a bulk data load.
  copytable: Export a table from local cluster to peer cluster
  export: Write table data to HDFS.
  import: Import data written by Export.
  importtsv: Import data in TSV format.
  rowcounter: Count rows in HBase table
  verifyrep: Compare the data from tables in two different clusters. WARNING: It doesn't work for incrementColumnValues'd cells since the timestamp is changed after being appended to the log.
  [liupeng@www hbase-0.98.6-cdh5.3.6]$

Step 2: Prepare the code in MyEclipse

(1) Create a Maven project (not covered in detail here)

(2) Add the dependencies to pom.xml (the entries below also pull in the Hadoop-related JARs transitively)

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>0.98.24-hadoop2</version>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>0.98.24-hadoop2</version>
  </dependency>
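If you prefer to pin the Hadoop classes used by the driver (Configuration, Job, Tool, ToolRunner) explicitly rather than taking them transitively from hbase-server, a hadoop-client entry can be added as well. This is a minimal sketch; the 2.5.0 version is an assumption chosen to roughly match the cluster in this walkthrough:

  <!-- assumed, optional: pin the Hadoop client explicitly
       (hbase-server already pulls Hadoop in transitively) -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.5.0</version>
  </dependency>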

(3) Prepare the Java code

Package: com.HBaseMapperReduce.Import

The HBaseDriver class:

  package com.HBaseMapperReduce.Import;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  // Extend Configured and implement Tool, providing run() and main()
  public class HBaseDriver extends Configured implements Tool {

      public int run(String[] arg0) throws Exception {
          Configuration conf = this.getConf();
          Job job = Job.getInstance(conf, "mr-test"); // "mr-test" is the job name; any value works
          job.setJarByClass(HBaseDriver.class);       // the entry-point class
          Scan scan = new Scan();
          TableMapReduceUtil.initTableMapperJob(
              "liupeng:Student",               // input table
              scan,                            // Scan instance to control CF and attribute selection
              HBaseMapperReduceGetInfo.class,  // mapper class
              ImmutableBytesWritable.class,    // mapper output key (type declared on TableMapper)
              Put.class,                       // mapper output value (type declared on TableMapper)
              job);

          TableMapReduceUtil.initTableReducerJob(
              "liupeng:DemoTest",  // output table; must be created in HBase beforehand
              null,                // reducer class; pass null when no reducer is needed
              job);
          job.setNumReduceTasks(1);  // at least one, adjust as required
          return job.waitForCompletion(true) ? 0 : 1;  // ternary: return 0 on success, 1 on failure
      }

      public static void main(String[] args) {
          Configuration conf = HBaseConfiguration.create();
          try {
              int status = ToolRunner.run(conf, new HBaseDriver(), args);
              System.exit(status);
          } catch (Exception e) {
              e.printStackTrace();
          }
      }
  }
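The driver submits the job with an unrestricted Scan, so every column of liupeng:Student is shipped to the mapper. As a hedged variant (not part of the original code), the Scan can be narrowed to the two families the mapper actually inspects and tuned for MapReduce-style scanning; this sketch assumes the "info" and "contect" families from this walkthrough and additionally requires an import of org.apache.hadoop.hbase.util.Bytes:

  // Hedged sketch: a narrower Scan for the same job; run() could call
  // buildScan() instead of new Scan().
  private static Scan buildScan() {
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("info"));     // only ship the families the mapper filters on
      scan.addFamily(Bytes.toBytes("contect"));
      scan.setCaching(500);                      // fetch more rows per RPC during the scan
      scan.setCacheBlocks(false);                // skip the block cache for one-off MR scans
      return scan;
  }

With this in place, cells outside the two families never leave the region servers, instead of being filtered client-side in the mapper.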

The HBaseMapperReduceGetInfo class:

  package com.HBaseMapperReduce.Import;

  import java.io.IOException;

  import org.apache.hadoop.hbase.Cell;
  import org.apache.hadoop.hbase.CellUtil;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapper;
  import org.apache.hadoop.hbase.util.Bytes;

  public class HBaseMapperReduceGetInfo extends TableMapper<ImmutableBytesWritable, Put> {
      @Override
      protected void map(ImmutableBytesWritable key, Result value, Context context)
              throws IOException, InterruptedException {
          // key is the rowkey; key.get() returns its raw bytes
          Put put = new Put(key.get());

          /**
           * Goal: from the input table, keep only the cells matching the
           * conditions below. There are two column families, "info" and
           * "contect"; we keep info:name, info:age, and contect:mail.
           */
          for (Cell cell : value.rawCells()) {
              if ("info".equals(Bytes.toString(CellUtil.cloneFamily(cell)))) {
                  if ("name".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                      put.add(cell);
                  } else if ("age".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                      put.add(cell);
                  }
              } else if ("contect".equals(Bytes.toString(CellUtil.cloneFamily(cell)))) {
                  if ("mail".equals(Bytes.toString(CellUtil.cloneQualifier(cell)))) {
                      put.add(cell);
                  }
              }
          }
          context.write(key, put);
      }
  }
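The chain of family/qualifier string comparisons can also be expressed with a column whitelist, which makes the selection easy to extend. This is a hedged rewrite of the same filtering logic, not the original author's code; it assumes the same info:name, info:age, and contect:mail columns and additionally requires java.util imports (Arrays, Collections, HashMap, HashSet, Map, Set):

  // Hedged alternative map() body: whitelist-driven cell filtering.
  // WANTED maps each family to the qualifiers we keep (same columns as above).
  private static final Map<String, Set<String>> WANTED = new HashMap<String, Set<String>>();
  static {
      WANTED.put("info", new HashSet<String>(Arrays.asList("name", "age")));
      WANTED.put("contect", Collections.singleton("mail"));
  }

  @Override
  protected void map(ImmutableBytesWritable key, Result value, Context context)
          throws IOException, InterruptedException {
      Put put = new Put(key.get());
      for (Cell cell : value.rawCells()) {
          String family = Bytes.toString(CellUtil.cloneFamily(cell));
          String qualifier = Bytes.toString(CellUtil.cloneQualifier(cell));
          Set<String> wantedQualifiers = WANTED.get(family);
          if (wantedQualifiers != null && wantedQualifiers.contains(qualifier)) {
              put.add(cell);  // keep only whitelisted family:qualifier cells
          }
      }
      if (!put.isEmpty()) {  // skip rows where nothing matched; empty Puts are rejected on write
          context.write(key, put);
      }
  }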

Step 3: Build the JAR, run it, and check the data in the target HBase table

1. Create the target table in HBase:

  hbase(main):250:0> create "liupeng:DemoTest",'info','contect'
  0 row(s) in 0.8020 seconds

  => Hbase::Table - liupeng:DemoTest
  hbase(main):251:0> desc "liupeng:DemoTest"
  DESCRIPTION ENABLED
  'liupeng:DemoTest', {NAME => 'contect', BLOOMFILTER => 'ROW', VERSIONS => '1' true
  , IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'false', DATA_BLOCK_ENCODING =>
  'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKC
  ACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'in
  fo', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETE
  D_CELLS => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESS
  ION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536
  ', REPLICATION_SCOPE => '0'}
  1 row(s) in 0.0970 seconds
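The same table (and its namespace, if missing) can also be created programmatically. Below is a hedged sketch against the HBase 0.98 admin API, under the assumption that the liupeng namespace may not exist yet; it is an alternative to the shell commands above, not part of the original job:

  // Hedged sketch: create the liupeng namespace and liupeng:DemoTest table
  // through the HBase 0.98 client API instead of the shell.
  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.NamespaceDescriptor;
  import org.apache.hadoop.hbase.NamespaceNotFoundException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.HBaseAdmin;

  public class CreateDemoTestTable {
      public static void main(String[] args) throws IOException {
          Configuration conf = HBaseConfiguration.create();
          HBaseAdmin admin = new HBaseAdmin(conf);
          try {
              try {
                  admin.getNamespaceDescriptor("liupeng");
              } catch (NamespaceNotFoundException e) {
                  admin.createNamespace(NamespaceDescriptor.create("liupeng").build());
              }
              HTableDescriptor table = new HTableDescriptor(TableName.valueOf("liupeng:DemoTest"));
              table.addFamily(new HColumnDescriptor("info"));
              table.addFamily(new HColumnDescriptor("contect"));
              if (!admin.tableExists(table.getTableName())) {
                  admin.createTable(table);
              }
          } finally {
              admin.close();
          }
      }
  }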

2. Build the JAR

(Figures 1-4: the JAR export steps in MyEclipse.)

3. Run the job

[liupeng@www hbase-0.98.6-cdh5.3.6]$ /opt/modules/hadoop-2.5.0-cdh5.3.6/bin/yarn jar /home/liupeng/Desktop/test.jar
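If the exported JAR's manifest does not record a Main-Class, the driver class can be passed explicitly after the JAR path (the class name below simply matches the package used in this walkthrough):

  /opt/modules/hadoop-2.5.0-cdh5.3.6/bin/yarn jar /home/liupeng/Desktop/test.jar com.HBaseMapperReduce.Import.HBaseDriver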

The command launches the MapReduce job; once it finishes, output like the following appears:

  [liupeng@www hbase-0.98.6-cdh5.3.6]$ /opt/modules/hadoop-2.5.0-cdh5.3.6/bin/yarn jar /home/liupeng/Desktop/test.jar
  SLF4J: Class path contains multiple SLF4J bindings.
  SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.5.0-cdh5.3.6/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in [jar:file:/opt/modules/hbase-0.98.6-cdh5.3.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
  18/06/19 14:48:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  18/06/19 14:48:35 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
  18/06/19 14:48:35 INFO client.RMProxy: Connecting to ResourceManager at www.hadoopresourcemanager.com/192.168.122.169:8032
  18/06/19 14:48:35 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
  18/06/19 14:48:36 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7b205dbd connecting to ZooKeeper ensemble=localhost:2181
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.3.6--1, built on 07/28/2015 18:35 GMT
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:host.name=www.hadoopsecondarynamenode.com
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_151
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/modules/jdk1.8.0_151/jre
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/modules/hadoop-2.5.0-cdh5.3.6/etc/hadoop:... (full Hadoop and HBase classpath omitted for brevity)
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/modules/hadoop-2.5.0-cdh5.3.6/lib/native
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-696.el6.x86_64
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:user.name=liupeng
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/liupeng
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/modules/hbase-0.98.6-cdh5.3.6
  18/06/19 14:48:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x7b205dbd, quorum=localhost:2181, baseZNode=/hbase
  18/06/19 14:48:36 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost6.localdomain6/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
  18/06/19 14:48:36 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:35986, server: localhost6.localdomain6/0:0:0:0:0:0:0:1:2181
  18/06/19 14:48:36 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost6.localdomain6/0:0:0:0:0:0:0:1:2181, sessionid = 0x36415671f800003, negotiated timeout = 40000
  18/06/19 14:48:36 INFO mapreduce.TableOutputFormat: Created table instance for liupeng:DemoTest
  18/06/19 14:48:39 INFO util.RegionSizeCalculator: Calculating region sizes for table "liupeng:Student".
  18/06/19 14:48:40 INFO mapreduce.JobSubmitter: number of splits:1
  18/06/19 14:48:40 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
  18/06/19 14:48:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528678110864_0026
  18/06/19 14:48:40 INFO impl.YarnClientImpl: Submitted application application_1528678110864_0026
  18/06/19 14:48:40 INFO mapreduce.Job: The url to track the job: http://www.hadoopresourcemanager.com:8088/proxy/application_1528678110864_0026/
  18/06/19 14:48:40 INFO mapreduce.Job: Running job: job_1528678110864_0026
  18/06/19 14:48:50 INFO mapreduce.Job: Job job_1528678110864_0026 running in uber mode : false
  18/06/19 14:48:50 INFO mapreduce.Job: map 0% reduce 0%
  18/06/19 14:49:00 INFO mapreduce.Job: map 100% reduce 0%
  18/06/19 14:49:09 INFO mapreduce.Job: map 100% reduce 100%
  18/06/19 14:49:09 INFO mapreduce.Job: Job job_1528678110864_0026 completed successfully
  18/06/19 14:49:09 INFO mapreduce.Job: Counters: 59
      File System Counters
          FILE: Number of bytes read=882
          FILE: Number of bytes written=269857
          FILE: Number of read operations=0
          FILE: Number of large read operations=0
          FILE: Number of write operations=0
          HDFS: Number of bytes read=98
          HDFS: Number of bytes written=0
          HDFS: Number of read operations=1
          HDFS: Number of large read operations=0
          HDFS: Number of write operations=0
      Job Counters
          Launched map tasks=1
          Launched reduce tasks=1
          Data-local map tasks=1
          Total time spent by all maps in occupied slots (ms)=7496
          Total time spent by all reduces in occupied slots (ms)=5648
          Total time spent by all map tasks (ms)=7496
          Total time spent by all reduce tasks (ms)=5648
          Total vcore-seconds taken by all map tasks=7496
          Total vcore-seconds taken by all reduce tasks=5648
          Total megabyte-seconds taken by all map tasks=7675904
          Total megabyte-seconds taken by all reduce tasks=5783552
      Map-Reduce Framework
          Map input records=7
          Map output records=7
          Map output bytes=862
          Map output materialized bytes=882
          Input split bytes=98
          Combine input records=7
          Combine output records=7
          Reduce input groups=7
          Reduce shuffle bytes=882
          Reduce input records=7
          Reduce output records=7
          Spilled Records=14
          Shuffled Maps =1
          Failed Shuffles=0
          Merged Map outputs=1
          GC time elapsed (ms)=171
          CPU time spent (ms)=2130
          Physical memory (bytes) snapshot=312143872
          Virtual memory (bytes) snapshot=4137783296
          Total committed heap usage (bytes)=170004480
      HBase Counters
          BYTES_IN_REMOTE_RESULTS=0
          BYTES_IN_RESULTS=1183
          MILLIS_BETWEEN_NEXTS=701
          NOT_SERVING_REGION_EXCEPTION=0
          NUM_SCANNER_RESTARTS=0
          REGIONS_SCANNED=1
          REMOTE_RPC_CALLS=0
          REMOTE_RPC_RETRIES=0
          RPC_CALLS=3
          RPC_RETRIES=0
      Shuffle Errors
          BAD_ID=0
          CONNECTION=0
          IO_ERROR=0
          WRONG_LENGTH=0
          WRONG_MAP=0
          WRONG_REDUCE=0
      File Input Format Counters
          Bytes Read=0
      File Output Format Counters
          Bytes Written=0
  [liupeng@www hbase-0.98.6-cdh5.3.6]$


4. Check whether the target table contains the data

For reference, first scan the source table liupeng:Student:

  hbase(main):253:0> scan "liupeng:Student"
  ROW COLUMN+CELL
  10001 column=contect:mail, timestamp=1528938151610, value=tony@cn.ibm.com
  10001 column=info:age, timestamp=1528938151610, value=23
  10001 column=info:name, timestamp=1528938151610, value=Tony
  10001 column=info:phone, timestamp=1528938151610, value=15995719964
  10002 column=contect:mail, timestamp=1528938151610, value=www.ivy@beifeng.com
  10002 column=info:age, timestamp=1528938151610, value=30
  10002 column=info:name, timestamp=1528938151610, value=Ivy
  10002 column=info:phone, timestamp=1528938151610, value=18665851937
  10003 column=contect:mail, timestamp=1528938151610, value=www.tom@beifeng.com
  10003 column=info:age, timestamp=1528938151610, value=28
  10003 column=info:name, timestamp=1528938151610, value=Tom
  10003 column=info:phone, timestamp=1528938151610, value=17933569972
  10004 column=contect:mail, timestamp=1528938151610, value=jack@alibaba.com
  10004 column=info:age, timestamp=1528938151610, value=24
  10004 column=info:name, timestamp=1528938151610, value=jack
  10004 column=info:phone, timestamp=1528938151610, value=13677543321
  10005 column=contect:mail, timestamp=1528938151610, value=kevin@cn.ibm.com
  10005 column=info:age, timestamp=1528938151610, value=27
  10005 column=info:name, timestamp=1528938151610, value=kevin
  10005 column=info:phone, timestamp=1528938151610, value=15999876653
  10006 column=contect:mail, timestamp=1528938151610, value=www.mdy@193.com
  10006 column=info:age, timestamp=1528938151610, value=20
  10006 column=info:name, timestamp=1528938151610, value=mevendy
  10006 column=info:phone, timestamp=1528938151610, value=1892287467
  10007 column=contect:mail, timestamp=1528938151610, value=www.sendy@163.com
  10007 column=info:age, timestamp=1528938151610, value=30
  10007 column=info:name, timestamp=1528938151610, value=Sendy
  10007 column=info:phone, timestamp=1528938151610, value=15973679981

The listing above is the original content of the source table liupeng:Student.

When scan "liupeng:DemoTest" shows the following, the import succeeded. Looking closely, the rows contain exactly the cells our code selected: info:name, info:age, and contect:mail (the info:phone column from the source table has been filtered out):

  hbase(main):252:0> scan "liupeng:DemoTest"
  ROW COLUMN+CELL
  10001 column=contect:mail, timestamp=1528938151610, value=tony@cn.ibm.com
  10001 column=info:age, timestamp=1528938151610, value=23
  10001 column=info:name, timestamp=1528938151610, value=Tony
  10002 column=contect:mail, timestamp=1528938151610, value=www.ivy@beifeng.com
  10002 column=info:age, timestamp=1528938151610, value=30
  10002 column=info:name, timestamp=1528938151610, value=Ivy
  10003 column=contect:mail, timestamp=1528938151610, value=www.tom@beifeng.com
  10003 column=info:age, timestamp=1528938151610, value=28
  10003 column=info:name, timestamp=1528938151610, value=Tom
  10004 column=contect:mail, timestamp=1528938151610, value=jack@alibaba.com
  10004 column=info:age, timestamp=1528938151610, value=24
  10004 column=info:name, timestamp=1528938151610, value=jack
  10005 column=contect:mail, timestamp=1528938151610, value=kevin@cn.ibm.com
  10005 column=info:age, timestamp=1528938151610, value=27
  10005 column=info:name, timestamp=1528938151610, value=kevin
  10006 column=contect:mail, timestamp=1528938151610, value=www.mdy@193.com
  10006 column=info:age, timestamp=1528938151610, value=20
  10006 column=info:name, timestamp=1528938151610, value=mevendy
  10007 column=contect:mail, timestamp=1528938151610, value=www.sendy@163.com
  10007 column=info:age, timestamp=1528938151610, value=30
  10007 column=info:name, timestamp=1528938151610, value=Sendy
  7 row(s) in 0.5030 seconds
