HBase 0.92.1 Replication
Source cluster

| Hostname | Services |
| --- | --- |
| sht-sgmhadoopnn-01 | Master, NameNode, JobTracker |
| sht-sgmhadoopdn-01 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-02 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-03 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-04 | RegionServer, DataNode, TaskTracker, ZK |
Destination (new) cluster

| Hostname | Services |
| --- | --- |
| ec2d-newcntprocnn-01 | Master, NameNode, JobTracker |
| ec2d-newcntprocdn-01 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-02 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-03 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-04 | RegionServer, DataNode, TaskTracker, ZK |
Goal: replicate the source table dept to the destination cluster.
1. On every node of both the source and destination clusters, add the following to hbase-site.xml, then restart both clusters:
- <property>
-   <name>hbase.replication</name>
-   <value>true</value>
- </property>
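A quick sanity check before restarting is to confirm the property is present on each node (a minimal sketch; it assumes $HBASE_HOME points at the HBase installation, as in the commands further below):
- grep -A 1 'hbase.replication' $HBASE_HOME/conf/hbase-site.xml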
2. Add the hostname-to-IP mappings of every node in both clusters to /etc/hosts on all nodes:
- 172.16.101.55 sht-sgmhadoopnn-01
- 172.16.101.58 sht-sgmhadoopdn-01
- 172.16.101.59 sht-sgmhadoopdn-02
- 172.16.101.60 sht-sgmhadoopdn-03
- 172.16.101.66 sht-sgmhadoopdn-04
- 10.189.100.146 ec2d-newcntprocnn-01
- 10.189.102.101 ec2d-newcntprocdn-01
- 10.189.102.94 ec2d-newcntprocdn-02
- 10.189.102.236 ec2d-newcntprocdn-03
- 10.189.102.176 ec2d-newcntprocdn-04
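Cross-cluster name resolution can then be verified from any node, for example:
- getent hosts ec2d-newcntprocdn-01
- ping -c 1 ec2d-newcntprocdn-01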
3. Create the table dept on the source cluster, and create a table with the same structure on the destination cluster:
- create 'dept', { NAME => 'cf1', REPLICATION_SCOPE => 1}
If the table already exists, enable replication for it by setting the column-family attribute REPLICATION_SCOPE to 1. Note that replication is configured per column family, not per table:
- disable 'dept'
- alter 'dept', NAME => 'cf1', REPLICATION_SCOPE => '1'
- enable 'dept'
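To confirm the change took effect, describe the table on the source cluster; the cf1 family should now list REPLICATION_SCOPE => '1':
- describe 'dept'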
4. Enable replication: add the destination cluster as a peer and start shipping edits. The peer id ('1' below) is arbitrary but must match the id used later for verification; the cluster key is the destination ZooKeeper quorum, its client port, and the znode parent:
- add_peer '1', "ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase"
- start_replication
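To check that the peer was registered, list the configured peers from the source cluster's shell (available in 0.92.x shells that ship the replication commands):
- list_peers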
5. Insert test data into the source table:
- put 'dept', 'row1', 'cf1:name', 'adams'
- put 'dept', 'row1', 'cf1:depart', 'research'
- put 'dept', 'row1', 'cf1:job', 'clerk'
- put 'dept', 'row1', 'cf1:id', ''
- put 'dept', 'row1', 'cf1:locate', 'dallas'
Note: replication only ships edits written after it has been enabled; data that existed in the table before replication was enabled is not copied to the destination cluster.
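The new row should show up on the destination cluster shortly afterwards; a quick check from the destination's hbase shell:
- scan 'dept'
- count 'dept'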
6. Verify that the data replicated to the destination cluster matches the source:
- export HADOOP_CLASSPATH=$HBASE_HOME/lib/guava-r09.jar
- $ hadoop jar $HBASE_HOME/hbase-0.92.1.jar verifyrep
- Usage: verifyrep [--starttime=X] [--stoptime=Y] [--families=A] <peerid> <tablename>
- Options:
- starttime beginning of the time range
- without endtime means from starttime to forever
- stoptime end of the time range
- families comma-separated list of families to copy
- Args:
- peerid Id of the peer used for verification, must match the one given for replication
- tablename Name of the table to verify
- Examples:
- To verify the data replicated from TestTable for a 1 hour window with peer #5
- $ bin/hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --starttime=1265875194289 --stoptime=1265878794289 5 TestTable
- $ hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 dept
Output
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopnn-01
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/contentplatform/jdk1.6.0_45/jre
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/tnuser/hbase/bin/../conf:/home/tnuser/jdk/lib/tools.jar:/home/tnuser/hbase/bin/..:/home/tnuser/hbase/bin/../hbase-0.92.1.jar:/home/tnuser/hbase/bin/../hbase-0.92.1-tests.jar:/home/tnuser/hbase/bin/../lib/activation-1.1.jar:/home/tnuser/hbase/bin/../lib/asm-3.1.jar:/home/tnuser/hbase/bin/../lib/avro-1.5.3.jar:/home/tnuser/hbase/bin/../lib/avro-ipc-1.5.3.jar:/home/tnuser/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/home/tnuser/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/tnuser/hbase/bin/../lib/commons-cli-1.2.jar:/home/tnuser/hbase/bin/../lib/commons-codec-1.4.jar:/home/tnuser/hbase/bin/../lib/commons-collections-3.2.1.jar:/home/tnuser/hbase/bin/../lib/commons-configuration-1.6.jar:/home/tnuser/hbase/bin/../lib/commons-digester-1.8.jar:/home/tnuser/hbase/bin/../lib/commons-el-1.0.jar:/home/tnuser/hbase/bin/../lib/commons-httpclient-3.1.jar:/home/tnuser/hbase/bin/../lib/commons-lang-2.5.jar:/home/tnuser/hbase/bin/../lib/commons-logging-1.1.1.jar:/home/tnuser/hbase/bin/../lib/commons-math-2.1.jar:/home/tnuser/hbase/bin/../lib/commons-net-1.4.1.jar:/home/tnuser/hbase/bin/../lib/core-3.1.1.jar:/home/tnuser/hbase/bin/../lib/guava-r09.jar:/home/tnuser/hbase/bin/../lib/hadoop-core-1.0.0.jar:/home/tnuser/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/home/tnuser/hbase/bin/../lib/httpclient-4.0.1.jar:/home/tnuser/hbase/bin/../lib/httpcore-4.0.1.jar:/home/tnuser/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-xc-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/home/tnuser/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/home/tnuser/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/home/tnuser/hbase/bin/../lib/jaxb-api-2.1.jar:/home/tnuser/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/home/tnuser/hbase/bin/../lib/jersey-core-1.4.jar:/home/tnuser/hbase/bin/../lib/jersey-json-1.4.jar:/home/tnuser/hbase/bin/../lib/jersey-server-1.4.jar:/home/tnuser/hbase/bin/../lib/jettison-1.1.jar:/home/tnuser/hbase/bin/../lib/jetty-6.1.26.jar:/home/tnuser/hbase/bin/../lib/jetty-util-6.1.26.jar:/home/tnuser/hbase/bin/../lib/jruby-complete-1.6.5.jar:/home/tnuser/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/home/tnuser/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/tnuser/hbase/bin/../lib/libthrift-0.7.0.jar:/home/tnuser/hbase/bin/../lib/log4j-1.2.16.jar:/home/tnuser/hbase/bin/../lib/netty-3.2.4.Final.jar:/home/tnuser/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/home/tnuser/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/tnuser/hbase/bin/../lib/servlet-api-2.5.jar:/home/tnuser/hbase/bin/../lib/slf4j-api-1.5.8.jar:/home/tnuser/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/tnuser/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/home/tnuser/hbase/bin/../lib/stax-api-1.0.1.jar:/home/tnuser/hbase/bin/../lib/velocity-1.7.jar:/home/tnuser/hbase/bin/../lib/xmlenc-0.52.jar:/home/tnuser/hbase/bin/../lib/zookeeper-3.4.3.jar:/home/tnuser/hadoop/conf:/usr/local/contentplatform/hadoop-1.0.3/libexec/../conf:/home/tnuser/jdk/lib/tools.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/..:/usr/local/contentplatform/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/contentplatform/hadoop-1.0.3/l
ibexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/home/tnuser/hbase/lib/guava-r09.jar
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/home/tnuser/hbase/bin/../lib/native/Linux-amd64-64
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
- 19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.59:2181
- 19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
- 19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
- 19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
- 19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-02/172.16.101.59:2181, initiating session
- 19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-02/172.16.101.59:2181, sessionid = 0x16b5083320f0007, negotiated timeout = 60000
- 19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/peers already exists and this is not a retry
- 19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/rs already exists and this is not a retry
- 19/06/13 21:08:08 INFO replication.ReplicationZookeeper: Replication is now started
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ec2d-newcntprocdn-01:2181,ec2d-newcntprocnn-01:2181,ec2d-newcntprocdn-02:2181 sessionTimeout=60000 watcher=connection to cluster: ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase
- 19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
- 19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.189.102.101:2181
- 19/06/13 21:08:08 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x16b5083320f0007
- 19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
- 19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
- 19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Session: 0x16b5083320f0007 closed
- 19/06/13 21:08:08 INFO zookeeper.ClientCnxn: EventThread shut down
- 19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Socket connection established to ec2d-newcntprocdn-01/10.189.102.101:2181, initiating session
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
- 19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ec2d-newcntprocdn-01/10.189.102.101:2181, sessionid = 0x16b4fc6131e000f, negotiated timeout = 60000
- 19/06/13 21:08:09 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
- 19/06/13 21:08:09 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to null was closed by the finalize method.
- 19/06/13 21:08:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
- 19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.60:2181
- 19/06/13 21:08:10 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
- 19/06/13 21:08:10 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
- 19/06/13 21:08:10 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
- 19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
- 19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x26b5083323d0005, negotiated timeout = 60000
- 19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab; serverName=sht-sgmhadoopdn-02,60020,1560423906407
- 19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-02:60020
- 19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
- 19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for dept,,1560430116142.2ba8059eaf45d5048f418b8b2ef00600. is sht-sgmhadoopdn-01:60020
- 19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
- 19/06/13 21:08:10 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
- 19/06/13 21:08:11 INFO mapred.JobClient: Running job: job_201906081831_0002
- 19/06/13 21:08:12 INFO mapred.JobClient: map 0% reduce 0%
- 19/06/13 21:08:31 INFO mapred.JobClient: map 100% reduce 0%
- 19/06/13 21:08:36 INFO mapred.JobClient: Job complete: job_201906081831_0002
- 19/06/13 21:08:36 INFO mapred.JobClient: Counters: 19
- 19/06/13 21:08:36 INFO mapred.JobClient: Job Counters
- 19/06/13 21:08:36 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=17243
- 19/06/13 21:08:36 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
- 19/06/13 21:08:36 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
- 19/06/13 21:08:36 INFO mapred.JobClient: Launched map tasks=1
- 19/06/13 21:08:36 INFO mapred.JobClient: Data-local map tasks=1
- 19/06/13 21:08:36 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
- 19/06/13 21:08:36 INFO mapred.JobClient: File Output Format Counters
- 19/06/13 21:08:36 INFO mapred.JobClient: Bytes Written=0
- 19/06/13 21:08:36 INFO mapred.JobClient: org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication$Verifier$Counters
- 19/06/13 21:08:36 INFO mapred.JobClient: GOODROWS=1
- 19/06/13 21:08:36 INFO mapred.JobClient: FileSystemCounters
- 19/06/13 21:08:36 INFO mapred.JobClient: HDFS_BYTES_READ=71
- 19/06/13 21:08:36 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31428
- 19/06/13 21:08:36 INFO mapred.JobClient: File Input Format Counters
- 19/06/13 21:08:36 INFO mapred.JobClient: Bytes Read=0
- 19/06/13 21:08:36 INFO mapred.JobClient: Map-Reduce Framework
- 19/06/13 21:08:36 INFO mapred.JobClient: Map input records=1
- 19/06/13 21:08:36 INFO mapred.JobClient: Physical memory (bytes) snapshot=87109632
- 19/06/13 21:08:36 INFO mapred.JobClient: Spilled Records=0
- 19/06/13 21:08:36 INFO mapred.JobClient: CPU time spent (ms)=1700
- 19/06/13 21:08:36 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
- 19/06/13 21:08:36 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1540784128
- 19/06/13 21:08:36 INFO mapred.JobClient: Map output records=0
- 19/06/13 21:08:36 INFO mapred.JobClient: SPLIT_RAW_BYTES=71
The key counter to look for:
- 19/06/13 21:21:40 INFO mapred.JobClient: GOODROWS=1
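GOODROWS counts source rows whose contents match on the peer cluster; rows that are missing or differ are reported under BADROWS instead. A convenient way to pull just these counters out of the job output (a sketch, reusing peer id 1 and table dept from above):
- $ hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 dept 2>&1 | grep -E 'GOODROWS|BADROWS'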