Backing Up and Restoring HBase Data

There are two strategies for backing up HBase:
1> Backing it up with a full cluster shutdown
2> Backing it up on a live cluster

A full shutdown backup requires stopping HBase (or disabling all tables) first, then using Hadoop's distcp command to copy the contents of the HBase directory either to another directory on the same HDFS, or to a different HDFS. To restore from a full shutdown backup, simply copy the backed-up files back to the HBase directory using distcp.
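For example, a full shutdown backup could look like the following sketch; the backup path /backup/hbase-full is an assumption, and /hbase is taken to be the HBase root directory (hbase.rootdir).

# Full shutdown backup (sketch): the target path /backup/hbase-full is an assumed example
$HBASE_HOME/bin/stop-hbase.sh                              # stop HBase (or disable all tables) first
$HADOOP_HOME/bin/hadoop distcp /hbase /backup/hbase-full   # copy the HBase root directory
$HBASE_HOME/bin/start-hbase.sh                             # restart HBase once the copy completes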

There are several approaches for a live cluster backup:
1> Using the CopyTable utility to copy data from one table to another
2> Exporting an HBase table to HDFS files, and importing the files back to HBase
3> HBase cluster replication

The CopyTable utility can be used to copy data from one table to another, either on the same cluster or on a different cluster. The Export utility dumps the data of a table to HDFS on the same cluster. As the counterpart of Export, the Import utility is used to restore the data from the dump files.

Method 1: Export

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export

Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]

Note: -D properties will be applied to the conf used.
For example:
-D mapred.output.compress=true
-D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
-D mapred.output.compression.type=BLOCK
Additionally, the following SCAN properties can be specified
to control/limit what is exported..
-D hbase.mapreduce.scan.column.family=<familyName>
-D hbase.mapreduce.include.deleted.rows=true
For performance consider the following properties:
-Dhbase.client.scanner.caching=100
-Dmapred.map.tasks.speculative.execution=false
-Dmapred.reduce.tasks.speculative.execution=false

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export -D mapred.output.compress=true -D mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec -D mapred.output.compression.type=BLOCK -D hbase.mapreduce.scan.column.family=IPAddress (multiple column families can be listed, separated by commas) HiddenIPInfo (the HBase table to export) /backup/HBaseExport (this output directory is created automatically during the export)
13/12/10 20:12:15 INFO mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.LongWritable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Text, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.LongWritable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Text, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 20:12:15 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
..........................
13/12/10 20:12:29 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> slave1:,
13/12/10 20:12:32 INFO mapred.JobClient: Running job: job_201312042044_0033
13/12/10 20:12:33 INFO mapred.JobClient:  map 0% reduce 0%
13/12/10 20:12:53 INFO mapred.JobClient:  map 100% reduce 0%
13/12/10 20:12:58 INFO mapred.JobClient: Job complete: job_201312042044_0033
13/12/10 20:12:59 INFO mapred.JobClient: Counters: 29
13/12/10 20:12:59 INFO mapred.JobClient:   Job Counters
13/12/10 20:12:59 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=11992
13/12/10 20:12:59 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/10 20:12:59 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/12/10 20:12:59 INFO mapred.JobClient:     Rack-local map tasks=1
13/12/10 20:12:59 INFO mapred.JobClient:     Launched map tasks=1
13/12/10 20:12:59 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/12/10 20:12:59 INFO mapred.JobClient:   HBase Counters
13/12/10 20:12:59 INFO mapred.JobClient:     REMOTE_RPC_CALLS=0
13/12/10 20:12:59 INFO mapred.JobClient:     RPC_CALLS=6
13/12/10 20:12:59 INFO mapred.JobClient:     RPC_RETRIES=0
13/12/10 20:12:59 INFO mapred.JobClient:     NOT_SERVING_REGION_EXCEPTION=0
13/12/10 20:12:59 INFO mapred.JobClient:     NUM_SCANNER_RESTARTS=0
13/12/10 20:12:59 INFO mapred.JobClient:     MILLIS_BETWEEN_NEXTS=6
13/12/10 20:12:59 INFO mapred.JobClient:     BYTES_IN_RESULTS=1493
13/12/10 20:12:59 INFO mapred.JobClient:     BYTES_IN_REMOTE_RESULTS=0
13/12/10 20:12:59 INFO mapred.JobClient:     REGIONS_SCANNED=1
13/12/10 20:12:59 INFO mapred.JobClient:     REMOTE_RPC_RETRIES=0
13/12/10 20:12:59 INFO mapred.JobClient:   File Output Format Counters
13/12/10 20:12:59 INFO mapred.JobClient:     Bytes Written=775
13/12/10 20:12:59 INFO mapred.JobClient:   FileSystemCounters
13/12/10 20:12:59 INFO mapred.JobClient:     HDFS_BYTES_READ=69
13/12/10 20:12:59 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=35024
13/12/10 20:12:59 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=775
13/12/10 20:12:59 INFO mapred.JobClient:   File Input Format Counters
13/12/10 20:12:59 INFO mapred.JobClient:     Bytes Read=0
13/12/10 20:12:59 INFO mapred.JobClient:   Map-Reduce Framework
13/12/10 20:12:59 INFO mapred.JobClient:     Map input records=3
13/12/10 20:12:59 INFO mapred.JobClient:     Physical memory (bytes) snapshot=94224384
13/12/10 20:12:59 INFO mapred.JobClient:     Spilled Records=0
13/12/10 20:12:59 INFO mapred.JobClient:     CPU time spent (ms)=1110
13/12/10 20:12:59 INFO mapred.JobClient:     Total committed heap usage (bytes)=82116608
13/12/10 20:12:59 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=395390976
13/12/10 20:12:59 INFO mapred.JobClient:     Map output records=3
13/12/10 20:12:59 INFO mapred.JobClient:     SPLIT_RAW_BYTES=69
landen@Master:~/UntarFile/hadoop-1.0.4$ bin/hadoop fs -ls /backup/HBaseExport/
Warning: $HADOOP_HOME is deprecated.

Found 3 items
-rw-r--r--   1 landen supergroup          0 2013-12-10 20:12 /backup/HBaseExport/_SUCCESS
drwxr-xr-x   - landen supergroup          0 2013-12-10 20:12 /backup/HBaseExport/_logs
-rw-r--r--   1 landen supergroup        775 2013-12-10 20:12 /backup/HBaseExport/part-m-00000
landen@Master:~/UntarFile/hadoop-1.0.4$
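To restore from this dump, the Import utility loads the files written by Export back into a table. A minimal sketch, assuming the target table (here HiddenIPInfo, with the exported column family IPAddress) already exists:

# Restore the Export dump into an existing table (sketch)
bin/hbase org.apache.hadoop.hbase.mapreduce.Import HiddenIPInfo /backup/HBaseExport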

Method 2: CopyTable

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable
Usage: CopyTable [general options] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>

Options:
 rs.class     hbase.regionserver.class of the peer cluster
              specify if different from current cluster
 rs.impl      hbase.regionserver.impl of the peer cluster
 startrow     the start row
 stoprow      the stop row
 starttime    beginning of the time range (unixtime in millis)
              without endtime means from starttime to forever
 endtime      end of the time range. Ignored if no starttime specified.
 versions     number of cell versions to copy
 new.name     new table's name
 peer.adr     Address of the peer cluster given in the format
              hbase.zookeeer.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
 families     comma-separated list of families to copy
              To copy from cf1 to cf2, give sourceCfName:destCfName.
              To keep the same name, just give "cfName"
 all.cells    also copy delete markers and deleted cells

Args:
 tablename    Name of the table to copy

Examples:
 To copy 'TestTable' to a cluster that uses replication for a 1 hour window:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 --peer.adr=server1,server2,server3:2181:/hbase (the address of the peer cluster) --families=myOldCf:myNewCf,cf2,cf3 TestTable
For performance consider the following general options:
-Dhbase.client.scanner.caching=100
-Dmapred.map.tasks.speculative.execution=false

CopyTable is a utility that copies the data of one table to another table, either on the same cluster or on a different HBase cluster. You can copy to a table on the same cluster; however, if you have another cluster that you want to treat as a backup, you might want to use CopyTable as a live backup option to copy a table's data to the backup cluster. CopyTable can be configured with a start and an end timestamp. If specified, only the data with a timestamp inside that time frame is copied. This makes incremental backup of an HBase table possible in some situations.

"Incremental backup" is a method to only back up the data that has been changed during the last backup.

Note: Since the cluster keeps running, there is a risk that edits could be missed during the copy process.

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --families=IPAddress --new.name=BackUpHiddenIPInfo (the target table that receives the copied data as a backup; ideally the copy would go to a different cluster) HiddenIPInfo (the source table whose data is copied)
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.LongWritable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Text, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.LongWritable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Text, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/10 16:15:59 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar

.................................................

13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
13/12/10 16:16:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
13/12/10 16:16:04 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 16010@Master
13/12/10 16:16:04 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
13/12/10 16:16:04 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
13/12/10 16:16:04 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42db7cbd1f0005, negotiated timeout = 180000
13/12/10 16:16:04 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@167a465; serverName=Slave1,60020,1386661855439
13/12/10 16:16:04 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
13/12/10 16:16:05 DEBUG client.MetaScanner: Scanning .META. starting at row=BackUpHiddenIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@167a465
13/12/10 16:16:05 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for BackUpHiddenIPInfo,,1386662946878.48312c3f9b8715670432c413ca44f2f6. is Slave1:60020
13/12/10 16:16:05 INFO mapreduce.TableOutputFormat: Created table instance for BackUpHiddenIPInfo
13/12/10 16:16:05 DEBUG client.MetaScanner: Scanning .META. starting at row=HiddenIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@167a465
13/12/10 16:16:05 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for HiddenIPInfo,,1386509509553.9e1062d691dd4c25cdc030f8c3fc9860. is Slave1:60020
13/12/10 16:16:05 DEBUG client.MetaScanner: Scanning .META. starting at row=HiddenIPInfo,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@167a465
13/12/10 16:16:05 ERROR mapreduce.TableInputFormatBase: Cannot resolve the host name for /10.21.244.124 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '124.244.21.10.in-addr.arpa'
13/12/10 16:16:05 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> slave1:,
13/12/10 16:16:07 INFO mapred.JobClient: Running job: job_201312042044_0030
13/12/10 16:16:08 INFO mapred.JobClient:  map 0% reduce 0%
13/12/10 16:16:27 INFO mapred.JobClient:  map 100% reduce 0%
13/12/10 16:16:32 INFO mapred.JobClient: Job complete: job_201312042044_0030
13/12/10 16:16:32 INFO mapred.JobClient: Counters: 28
13/12/10 16:16:32 INFO mapred.JobClient:   Job Counters
13/12/10 16:16:32 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=12305
13/12/10 16:16:32 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/10 16:16:32 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/12/10 16:16:32 INFO mapred.JobClient:     Rack-local map tasks=1
13/12/10 16:16:32 INFO mapred.JobClient:     Launched map tasks=1
13/12/10 16:16:32 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/12/10 16:16:32 INFO mapred.JobClient:   HBase Counters
13/12/10 16:16:32 INFO mapred.JobClient:     REMOTE_RPC_CALLS=0
13/12/10 16:16:32 INFO mapred.JobClient:     RPC_CALLS=6
13/12/10 16:16:32 INFO mapred.JobClient:     RPC_RETRIES=0
13/12/10 16:16:32 INFO mapred.JobClient:     NOT_SERVING_REGION_EXCEPTION=0
13/12/10 16:16:32 INFO mapred.JobClient:     NUM_SCANNER_RESTARTS=0
13/12/10 16:16:32 INFO mapred.JobClient:     MILLIS_BETWEEN_NEXTS=162
13/12/10 16:16:32 INFO mapred.JobClient:     BYTES_IN_RESULTS=1493
13/12/10 16:16:32 INFO mapred.JobClient:     BYTES_IN_REMOTE_RESULTS=0
13/12/10 16:16:32 INFO mapred.JobClient:     REGIONS_SCANNED=1
13/12/10 16:16:32 INFO mapred.JobClient:     REMOTE_RPC_RETRIES=0
13/12/10 16:16:32 INFO mapred.JobClient:   File Output Format Counters
13/12/10 16:16:32 INFO mapred.JobClient:     Bytes Written=0
13/12/10 16:16:32 INFO mapred.JobClient:   FileSystemCounters
13/12/10 16:16:32 INFO mapred.JobClient:     HDFS_BYTES_READ=69
13/12/10 16:16:32 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=34919
13/12/10 16:16:32 INFO mapred.JobClient:   File Input Format Counters
13/12/10 16:16:32 INFO mapred.JobClient:     Bytes Read=0
13/12/10 16:16:32 INFO mapred.JobClient:   Map-Reduce Framework
13/12/10 16:16:32 INFO mapred.JobClient:     Map input records=3
13/12/10 16:16:32 INFO mapred.JobClient:     Physical memory (bytes) snapshot=83361792
13/12/10 16:16:32 INFO mapred.JobClient:     Spilled Records=0
13/12/10 16:16:32 INFO mapred.JobClient:     CPU time spent (ms)=170
13/12/10 16:16:32 INFO mapred.JobClient:     Total committed heap usage (bytes)=55443456
13/12/10 16:16:32 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=395317248
13/12/10 16:16:32 INFO mapred.JobClient:     Map output records=3
13/12/10 16:16:32 INFO mapred.JobClient:     SPLIT_RAW_BYTES=69
hbase(main):016:0> describe 'BackUpHiddenIPInfo'
DESCRIPTION                                                                   ENABLED                                  
 'BackUpHiddenIPInfo', {NAME => 'IPAddress', DATA_BLOCK_ENCODING => 'NONE', B true                                     
 LOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSI                                          
 ONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS =>                                           
 'false', BLOCKSIZE => '65536', ENCODE_ON_DISK => 'true', IN_MEMORY => 'false                                          
 ', BLOCKCACHE => 'true'}                                                                                              
1 row(s) in 0.0670 seconds

hbase(main):017:0> scan 'BackUpHiddenIPInfo'
ROW                            COLUMN+CELL                                                                             
 125.111.251.118               column=IPAddress:city, timestamp=1386597147615, value=Ningbo                            
 125.111.251.118               column=IPAddress:countrycode, timestamp=1386597147615, value=CN                         
 125.111.251.118               column=IPAddress:countryname, timestamp=1386597147615, value=China                      
 125.111.251.118               column=IPAddress:latitude, timestamp=1386597147615, value=29.878204                     
 125.111.251.118               column=IPAddress:longitude, timestamp=1386597147615, value=121.5495                     
 125.111.251.118               column=IPAddress:region, timestamp=1386597147615, value=02                              
 125.111.251.118               column=IPAddress:regionname, timestamp=1386597147615, value=Zhejiang                    
 125.111.251.118               column=IPAddress:timezone, timestamp=1386597147615, value=Asia/Shanghai                 
 221.12.10.218                 column=IPAddress:city, timestamp=1386597147615, value=Hangzhou                          
 221.12.10.218                 column=IPAddress:countrycode, timestamp=1386597147615, value=CN                         
 221.12.10.218                 column=IPAddress:countryname, timestamp=1386597147615, value=China                      
 221.12.10.218                 column=IPAddress:latitude, timestamp=1386597147615, value=30.293594                     
 221.12.10.218                 column=IPAddress:longitude, timestamp=1386597147615, value=120.16141                    
 221.12.10.218                 column=IPAddress:region, timestamp=1386597147615, value=02                              
 221.12.10.218                 column=IPAddress:regionname, timestamp=1386597147615, value=Zhejiang                    
 221.12.10.218                 column=IPAddress:timezone, timestamp=1386597147615, value=Asia/Shanghai                 
 60.180.248.201                column=IPAddress:city, timestamp=1386597147615, value=Wenzhou                           
 60.180.248.201                column=IPAddress:countrycode, timestamp=1386597147615, value=CN                         
 60.180.248.201                column=IPAddress:countryname, timestamp=1386597147615, value=China                      
 60.180.248.201                column=IPAddress:latitude, timestamp=1386597147615, value=27.999405                     
 60.180.248.201                column=IPAddress:longitude, timestamp=1386597147615, value=120.66681                    
 60.180.248.201                column=IPAddress:region, timestamp=1386597147615, value=02                              
 60.180.248.201                column=IPAddress:regionname, timestamp=1386597147615, value=Zhejiang                    
 60.180.248.201                column=IPAddress:timezone, timestamp=1386597147615, value=Asia/Shanghai                 
3 row(s) in 0.0600 seconds

Method 3: distcp

landen@Master:~/UntarFile/hadoop-1.0.4$ bin/hadoop distcp
Warning: $HADOOP_HOME is deprecated.

distcp [OPTIONS] <srcurl>* <desturl>

OPTIONS:
-p[rbugp]              Preserve status
                       r: replication number
                       b: block size
                       u: user
                       g: group
                       p: permission
                       -p alone is equivalent to -prbugp
-i                     Ignore failures
-log <logdir>          Write logs to <logdir>
-m <num_maps>          Maximum number of simultaneous copies
-overwrite             Overwrite destination
-update                Overwrite if src size different from dst size
-skipcrccheck          Do not use CRC check to determine if src is
                       different from dest. Relevant only if -update
                       is specified
-f <urilist_uri>       Use list at <urilist_uri> as src list
-filelimit <n>         Limit the total number of files to be <= n
-sizelimit <n>         Limit the total size to be <= n bytes
-delete                Delete the files existing in the dst but not in src
-mapredSslConf <f>     Filename of SSL configuration for mapper task

NOTE 1: if -overwrite or -update are set, each source URI is
        interpreted as an isomorphic update to an existing directory.
For example:
hadoop distcp -p -update "hdfs://A:8020/user/foo/bar" "hdfs://B:8020/user/foo/baz"

would update all descendants of 'baz' also in 'bar'; it would
*not* update /user/foo/baz/bar

NOTE 2: The parameter <n> in -filelimit and -sizelimit can be
        specified with symbolic representation. For examples,
        1230k = 1230 * 1024 = 1259520
        891g = 891 * 1024^3 = 956703965184

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

distcp (distributed copy) is a tool provided by Hadoop for copying a large dataset within the same HDFS cluster or between different HDFS clusters. It uses MapReduce to copy files in parallel, handle errors and recovery, and report the job status. As HBase stores all its files, including system files, on HDFS, we can simply use distcp to copy the HBase directory either to another directory on the same HDFS, or to a different HDFS, thereby backing up the source HBase cluster.

landen@Master:~/UntarFile/hadoop-1.0.4$ bin/hadoop distcp /hbase /backup/HBaseBackUp
Warning: $HADOOP_HOME is deprecated.

13/12/10 15:33:09 INFO tools.DistCp: srcPaths=[/hbase]
13/12/10 15:33:09 INFO tools.DistCp: destPath=/backup/HBaseBackUp
13/12/10 15:33:10 INFO tools.DistCp: sourcePathsCount=46
13/12/10 15:33:10 INFO tools.DistCp: filesToCopyCount=17
13/12/10 15:33:10 INFO tools.DistCp: bytesToCopyCount=11.7k
13/12/10 15:33:11 INFO mapred.JobClient: Running job: job_201312042044_0029
13/12/10 15:33:12 INFO mapred.JobClient:  map 0% reduce 0%
13/12/10 15:33:37 INFO mapred.JobClient:  map 100% reduce 0%
13/12/10 15:33:42 INFO mapred.JobClient: Job complete: job_201312042044_0029
13/12/10 15:33:42 INFO mapred.JobClient: Counters: 22
13/12/10 15:33:42 INFO mapred.JobClient:   Job Counters
13/12/10 15:33:42 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=20465
13/12/10 15:33:42 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/10 15:33:42 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/12/10 15:33:42 INFO mapred.JobClient:     Launched map tasks=1
13/12/10 15:33:42 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/12/10 15:33:42 INFO mapred.JobClient:   File Input Format Counters
13/12/10 15:33:42 INFO mapred.JobClient:     Bytes Read=7904
13/12/10 15:33:42 INFO mapred.JobClient:   File Output Format Counters
13/12/10 15:33:42 INFO mapred.JobClient:     Bytes Written=0
13/12/10 15:33:42 INFO mapred.JobClient:   FileSystemCounters
13/12/10 15:33:42 INFO mapred.JobClient:     HDFS_BYTES_READ=20070
13/12/10 15:33:42 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=22644
13/12/10 15:33:42 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=11988
13/12/10 15:33:42 INFO mapred.JobClient:   distcp
13/12/10 15:33:42 INFO mapred.JobClient:     Files copied=17
13/12/10 15:33:42 INFO mapred.JobClient:     Bytes copied=11988
13/12/10 15:33:42 INFO mapred.JobClient:     Bytes expected=11988
13/12/10 15:33:42 INFO mapred.JobClient:   Map-Reduce Framework
13/12/10 15:33:42 INFO mapred.JobClient:     Map input records=45
13/12/10 15:33:42 INFO mapred.JobClient:     Physical memory (bytes) snapshot=36737024
13/12/10 15:33:42 INFO mapred.JobClient:     Spilled Records=0
13/12/10 15:33:42 INFO mapred.JobClient:     CPU time spent (ms)=470
13/12/10 15:33:42 INFO mapred.JobClient:     Total committed heap usage (bytes)=15925248
13/12/10 15:33:42 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=346537984
13/12/10 15:33:42 INFO mapred.JobClient:     Map input bytes=7804
13/12/10 15:33:42 INFO mapred.JobClient:     Map output records=0
13/12/10 15:33:42 INFO mapred.JobClient:     SPLIT_RAW_BYTES=178
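To restore from this backup, the copy is reversed while HBase is stopped. A minimal sketch, assuming /backup/HBaseBackUp holds the contents of /hbase exactly as produced by the command above:

# Restore from the distcp backup (sketch): stop HBase, move the current HBase
# root directory aside, copy the backed-up files back, then restart.
$HBASE_HOME/bin/stop-hbase.sh
$HADOOP_HOME/bin/hadoop fs -mv /hbase /hbase.before-restore
$HADOOP_HOME/bin/hadoop distcp /backup/HBaseBackUp /hbase
$HBASE_HOME/bin/start-hbase.sh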
