Original table structure and data

hbase(main):021:0* describe 'test'
DESCRIPTION                                                                      ENABLED
 {NAME => 'test', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE',            true
 REPLICATION_SCOPE => '', VERSIONS => '', COMPRESSION => 'NONE', MIN_VERSIONS => '',
 TTL => '', BLOCKSIZE => '', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
 {NAME => 'cf2', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '', COMPRESSION => 'NONE',
 VERSIONS => '', TTL => '', MIN_VERSIONS => '', BLOCKSIZE => '', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}]}
1 row(s) in 0.0670 seconds

hbase(main):022:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0270 seconds
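
For reference, the table and data above can be reproduced in the HBase shell roughly as follows (a sketch inferred from the describe/scan output above, not the original setup commands; prompt numbers are illustrative):

hbase(main):001:0> create 'test', 'cf1', 'cf2'
hbase(main):002:0> put 'test', 'row1', 'cf1:name', 'zhangsan'
hbase(main):003:0> put 'test', 'row1', 'cf1:age', '21'
hbase(main):004:0> put 'test', 'row2', 'cf2:name', 'wangba'
hbase(main):005:0> put 'test', 'row2', 'cf2:age', '20'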

I. Export and Import

# hbase org.apache.hadoop.hbase.mapreduce.Export
ERROR: Wrong number of arguments:
Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]
Note: -D properties will be applied to the conf used.
For example:
-D mapred.output.compress=true
-D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
-D mapred.output.compression.type=BLOCK
Additionally, the following SCAN properties can be specified
to control/limit what is exported..
-D hbase.mapreduce.scan.column.family=<familyName>
# hbase org.apache.hadoop.hbase.mapreduce.Import
ERROR: Wrong number of arguments:
Usage: Import <tablename> <inputdir>
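
The usage text above maps directly onto command lines. Two hedged examples (the output paths and timestamps here are hypothetical, not taken from the run below): a compressed export using the documented -D properties, and an export that limits versions and time range via the optional positional arguments:

# hbase org.apache.hadoop.hbase.mapreduce.Export \
    -D mapred.output.compress=true \
    -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
    -D mapred.output.compression.type=BLOCK \
    test /backup/test_gz

# hbase org.apache.hadoop.hbase.mapreduce.Export test /backup/test_window 3 1555771000000 1555838000000

The second form exports at most 3 versions of each cell whose timestamps fall in [starttime, endtime), since Export passes the two values to Scan.setTimeRange.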

1. Export to HDFS

# hbase org.apache.hadoop.hbase.mapreduce.Export test /backup/test

Or, using an explicit HDFS URI:

# hbase org.apache.hadoop.hbase.mapreduce.Export test hdfs://sht-sgmhadoopnn-01:9011/backup/test

Output log:

[root@sht-sgmhadoopdn-02 exp]# hbase org.apache.hadoop.hbase.mapreduce.Export test hdfs://sht-sgmhadoopnn-01:9011/backup/test
19/04/21 17:45:39 INFO mapreduce.Export: verisons=1, starttime=0, endtime=9223372036854775807
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-02
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.1.jar:/opt/hbase/bin/../hbase-0.92.1-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.3.jar:/opt/hadoop/conf:/opt/hadoop-1.0.3/libexec/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hadoop-1.0.3/libexec/..:/opt/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/opt
/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 17:45:45 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24347@sht-sgmhadoopdn-02.telenav.cn
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.60:2182
19/04/21 17:45:45 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 17:45:45 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2182, initiating session
19/04/21 17:45:45 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2182, sessionid = 0x36a3a9e24d50034, negotiated timeout = 40000
19/04/21 17:45:45 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 17:45:45 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 17:45:46 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016
19/04/21 17:45:46 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for test,,1555838328985.681b358885eb10357f9f811b77275b25. is sht-sgmhadoopdn-01:60021
19/04/21 17:45:46 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016
19/04/21 17:45:46 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
19/04/21 17:45:46 INFO mapred.JobClient: Running job: job_201904201958_0026
19/04/21 17:45:47 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 17:46:03 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 17:46:08 INFO mapred.JobClient: Job complete: job_201904201958_0026
19/04/21 17:46:08 INFO mapred.JobClient: Counters: 19
19/04/21 17:46:08 INFO mapred.JobClient: Job Counters
19/04/21 17:46:08 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14713
19/04/21 17:46:08 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 17:46:08 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 17:46:08 INFO mapred.JobClient: Rack-local map tasks=1
19/04/21 17:46:08 INFO mapred.JobClient: Launched map tasks=1
19/04/21 17:46:08 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 17:46:08 INFO mapred.JobClient: File Output Format Counters
19/04/21 17:46:08 INFO mapred.JobClient: Bytes Written=310
19/04/21 17:46:08 INFO mapred.JobClient: FileSystemCounters
19/04/21 17:46:08 INFO mapred.JobClient: HDFS_BYTES_READ=71
19/04/21 17:46:08 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31358
19/04/21 17:46:08 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=310
19/04/21 17:46:08 INFO mapred.JobClient: File Input Format Counters
19/04/21 17:46:08 INFO mapred.JobClient: Bytes Read=0
19/04/21 17:46:08 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 17:46:08 INFO mapred.JobClient: Map input records=2
19/04/21 17:46:08 INFO mapred.JobClient: Physical memory (bytes) snapshot=81055744
19/04/21 17:46:08 INFO mapred.JobClient: Spilled Records=0
19/04/21 17:46:08 INFO mapred.JobClient: CPU time spent (ms)=1390
19/04/21 17:46:08 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 17:46:08 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1540837376
19/04/21 17:46:08 INFO mapred.JobClient: Map output records=2
19/04/21 17:46:08 INFO mapred.JobClient: SPLIT_RAW_BYTES=71

2. Check the backup files

# hadoop fs -ls /backup/test
Found 3 items
-rw-r--r--   root supergroup   /backup/test/_SUCCESS
drwxr-xr-x - root supergroup   /backup/test/_logs
-rw-r--r--   root supergroup   /backup/test/part-m-
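
The part-m-* files are Hadoop SequenceFiles of serialized Result objects, so they are not directly human-readable. With the HBase jars on the Hadoop classpath, hadoop fs -text can decode a SequenceFile for a rough inspection (the full part file name below is hypothetical, since it is truncated in the listing above):

# hadoop fs -text /backup/test/part-m-00000 | head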

3. Create the new table (Import does not create it, so the destination table must already exist with matching column families)

hbase(main):032:0> create 'emp', 'cf1', 'cf2'
0 row(s) in 1.0590 seconds

4. Import the backup into the new table

# hbase org.apache.hadoop.hbase.mapreduce.Import emp hdfs://sht-sgmhadoopnn-01:9011/backup/test

Or, using a relative HDFS path:

# hbase org.apache.hadoop.hbase.mapreduce.Import emp /backup/test

Output log:

[root@sht-sgmhadoopdn-02 exp]# hbase org.apache.hadoop.hbase.mapreduce.Import emp hdfs://sht-sgmhadoopnn-01:9011/backup/test
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-02
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.1.jar:/opt/hbase/bin/../hbase-0.92.1-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.3.jar:/opt/hadoop/conf:/opt/hadoop-1.0.3/libexec/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hadoop-1.0.3/libexec/..:/opt/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/opt
/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.59:2182
19/04/21 17:49:56 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 17:49:56 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-02/172.16.101.59:2182, initiating session
19/04/21 17:49:56 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24873@sht-sgmhadoopdn-02.telenav.cn
19/04/21 17:49:56 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-02/172.16.101.59:2182, sessionid = 0x26a3a9dc0150032, negotiated timeout = 40000
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@66922804; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 17:49:56 DEBUG client.MetaScanner: Scanning .META. starting at row=emp,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@66922804
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for emp,,1555840094033.a8346463e975084ba0398d3bf9c32649. is sht-sgmhadoopdn-03:60021
19/04/21 17:49:56 INFO mapreduce.TableOutputFormat: Created table instance for emp
19/04/21 17:49:57 INFO input.FileInputFormat: Total input paths to process : 1
19/04/21 17:49:57 INFO mapred.JobClient: Running job: job_201904201958_0028
19/04/21 17:49:58 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 17:50:14 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 17:50:19 INFO mapred.JobClient: Job complete: job_201904201958_0028
19/04/21 17:50:19 INFO mapred.JobClient: Counters: 18
19/04/21 17:50:19 INFO mapred.JobClient: Job Counters
19/04/21 17:50:19 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=13335
19/04/21 17:50:19 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 17:50:19 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 17:50:19 INFO mapred.JobClient: Launched map tasks=1
19/04/21 17:50:19 INFO mapred.JobClient: Data-local map tasks=1
19/04/21 17:50:19 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 17:50:19 INFO mapred.JobClient: File Output Format Counters
19/04/21 17:50:19 INFO mapred.JobClient: Bytes Written=0
19/04/21 17:50:19 INFO mapred.JobClient: FileSystemCounters
19/04/21 17:50:19 INFO mapred.JobClient: HDFS_BYTES_READ=430
19/04/21 17:50:19 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31298
19/04/21 17:50:19 INFO mapred.JobClient: File Input Format Counters
19/04/21 17:50:19 INFO mapred.JobClient: Bytes Read=310
19/04/21 17:50:19 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 17:50:19 INFO mapred.JobClient: Map input records=2
19/04/21 17:50:19 INFO mapred.JobClient: Physical memory (bytes) snapshot=91877376
19/04/21 17:50:19 INFO mapred.JobClient: Spilled Records=0
19/04/21 17:50:19 INFO mapred.JobClient: CPU time spent (ms)=90
19/04/21 17:50:19 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 17:50:19 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1535459328
19/04/21 17:50:19 INFO mapred.JobClient: Map output records=2
19/04/21 17:50:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=120

5. Check the data in the new table

hbase(main):034:0> scan 'emp'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0450 seconds
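
The same Export/Import pair also works across clusters: copy the export directory to the target cluster's HDFS first (hadoop distcp is the usual tool), then run Import there. A sketch, with a hypothetical target NameNode address:

# hadoop distcp hdfs://sht-sgmhadoopnn-01:9011/backup/test hdfs://target-nn:9011/backup/test

Then, on the target cluster (where the destination table already exists):

# hbase org.apache.hadoop.hbase.mapreduce.Import test /backup/test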

II. Copy (CopyTable)

# hbase org.apache.hadoop.hbase.mapreduce.CopyTable
Usage: CopyTable [--rs.class=CLASS] [--rs.impl=IMPL] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>
Options:
 rs.class     hbase.regionserver.class of the peer cluster
              specify if different from current cluster
 rs.impl      hbase.regionserver.impl of the peer cluster
 starttime    beginning of the time range
              without endtime means from starttime to forever
 endtime      end of the time range
 new.name     new table's name
 peer.adr     Address of the peer cluster given in the format
              hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
 families     comma-separated list of families to copy
              To copy from cf1 to cf2, give sourceCfName:destCfName.
              To keep the same name, just give "cfName"
Args:
 tablename    Name of the table to copy
Examples:
 To copy 'TestTable' to a cluster that uses replication for a 1 hour window:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer --starttime= --endtime= --peer.adr=server1,server2,server3::/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable
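
Two more hedged examples based on the usage text above (the quorum hosts, client port, znode parent, and timestamps are hypothetical). CopyTable can push rows directly to a peer cluster via --peer.adr, and can do an incremental copy by restricting the time range:

# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zk1,zk2,zk3:2181:/hbase test

# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1555771000000 --endtime=1555838000000 --new.name=emp1 test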

1. Create the new table

hbase(main):035:0> create 'emp1', 'cf1', 'cf2'
0 row(s) in 1.0610 seconds

2. Copy the old table's data into the new table

# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp1 test

Output log:

[root@sht-sgmhadoopdn-01 exp]# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp1 test
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-01
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.1.jar:/opt/hbase/bin/../hbase-0.92.1-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.3.jar:/opt/hadoop/conf:/opt/hadoop-1.0.3/libexec/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hadoop-1.0.3/libexec/..:/opt/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/opt
/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.58:2182
19/04/21 18:01:19 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 18:01:19 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2182, initiating session
19/04/21 18:01:19 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12345@sht-sgmhadoopdn-01
19/04/21 18:01:19 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2182, sessionid = 0x16a3a9dc00f0035, negotiated timeout = 40000
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=emp1,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for emp1,,1555840809230.6fda341441637758b7ea64c63a769f79. is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 INFO mapreduce.TableOutputFormat: Created table instance for emp1
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for test,,1555838328985.681b358885eb10357f9f811b77275b25. is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
19/04/21 18:01:19 INFO mapred.JobClient: Running job: job_201904201958_0029
19/04/21 18:01:20 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 18:01:36 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 18:01:41 INFO mapred.JobClient: Job complete: job_201904201958_0029
19/04/21 18:01:42 INFO mapred.JobClient: Counters: 18
19/04/21 18:01:42 INFO mapred.JobClient: Job Counters
19/04/21 18:01:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14788
19/04/21 18:01:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 18:01:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 18:01:42 INFO mapred.JobClient: Rack-local map tasks=1
19/04/21 18:01:42 INFO mapred.JobClient: Launched map tasks=1
19/04/21 18:01:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 18:01:42 INFO mapred.JobClient: File Output Format Counters
19/04/21 18:01:42 INFO mapred.JobClient: Bytes Written=0
19/04/21 18:01:42 INFO mapred.JobClient: FileSystemCounters
19/04/21 18:01:42 INFO mapred.JobClient: HDFS_BYTES_READ=71
19/04/21 18:01:42 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31301
19/04/21 18:01:42 INFO mapred.JobClient: File Input Format Counters
19/04/21 18:01:42 INFO mapred.JobClient: Bytes Read=0
19/04/21 18:01:42 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 18:01:42 INFO mapred.JobClient: Map input records=2
19/04/21 18:01:42 INFO mapred.JobClient: Physical memory (bytes) snapshot=77787136
19/04/21 18:01:42 INFO mapred.JobClient: Spilled Records=0
19/04/21 18:01:42 INFO mapred.JobClient: CPU time spent (ms)=150
19/04/21 18:01:42 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 18:01:42 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1539833856
19/04/21 18:01:42 INFO mapred.JobClient: Map output records=2
19/04/21 18:01:42 INFO mapred.JobClient: SPLIT_RAW_BYTES=71

3. Check the data in the new table

hbase(main):036:0> scan 'emp1'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0240 seconds
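
As a quick sanity check beyond scanning, the row counts of the source and destination tables can be compared in the shell; both should report 2 row(s) here (prompt numbers illustrative):

hbase(main):037:0> count 'test'
hbase(main):038:0> count 'emp1'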
