Kafka: ZK+Kafka+Spark Streaming Cluster Setup (5) — On Hadoop 2.9.0, DataNode and NodeManager start normally on the slaves but the DataNode shuts down a few seconds later
After starting the cluster, the DataNode and NodeManager processes come up normally on each slave, but a few seconds later the DataNode shuts itself down.
Take slave1 as an example and inspect its error log:
more /opt/hadoop-2.9.0/logs/hadoop-spark-datanode-slave1.log
The relevant error is:
-- ::, WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/hadoop-2.9.0/dfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/hadoop-2.9.0/dfs/data: namenode clusterID = CID-f1195fc7-ca7c-4a2a-b32f-211131a5d699; datanode clusterID = CID-292293a6-9c34-4de7-aecd-d72657a26dd5
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:)
at java.lang.Thread.run(Thread.java:)
-- ::, ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid f4badff3-7a0b-4db0-bd77-83b370f67eed) service to master/192.168.0.120:. Exiting.
java.io.IOException: All specified directories have failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:)
at java.lang.Thread.run(Thread.java:)
-- ::, WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid f4badff3-7a0b-4db0-bd77-83b370f67eed) service to master/192.168.0.120:
-- ::, INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid f4badff3-7a0b-4db0-bd77-83b370f67eed)
-- ::, WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
-- ::, INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at slave1/192.168.0.121
************************************************************/
Solution
Cause: the NameNode has been formatted more than once, so the clusterID stored in the NameNode's metadata no longer matches the clusterID recorded in the DataNodes' data directories, and the DataNodes refuse to register.
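Before deleting anything, you can confirm the mismatch by comparing the clusterID recorded on the NameNode with the one on a DataNode. The VERSION file locations below assume the storage directories shown in the log above (under /opt/hadoop-2.9.0/dfs):
grep clusterID /opt/hadoop-2.9.0/dfs/name/current/VERSION # run on master (NameNode metadata)
grep clusterID /opt/hadoop-2.9.0/dfs/data/current/VERSION # run on slave1 (DataNode storage)
If the two values differ, the DataNode refuses to join the cluster, which matches the exception above.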
1) On master, stop Hadoop with sbin/stop-all.sh:
cd /opt/hadoop-2.9.0
sbin/stop-all.sh
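stop-all.sh still works in Hadoop 2.x but is deprecated (just like start-all.sh, whose warning appears later in this post); the equivalent split form is:
sbin/stop-yarn.sh # stop the ResourceManager and NodeManagers
sbin/stop-dfs.sh # stop the NameNode, SecondaryNameNode and DataNodes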
2) On master, slave1, slave2, and slave3 in turn, delete the HDFS data, logs, and tmp directories (note that this destroys any existing HDFS data; an alternative that keeps the data is sketched right after these commands):
cd /opt/hadoop-2.9.0
rm -r dfs
rm -r logs
rm -r tmp
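If the DataNodes hold HDFS data you need to keep, an alternative fix (not the one used in this walkthrough) is to leave dfs/data in place and instead copy the NameNode's clusterID into each DataNode's VERSION file, then restart only the DataNodes. Paths again assume the /opt/hadoop-2.9.0/dfs layout from the log:
grep clusterID /opt/hadoop-2.9.0/dfs/name/current/VERSION # on master: note the current clusterID
vi /opt/hadoop-2.9.0/dfs/data/current/VERSION # on each slave: set clusterID to that value
sbin/hadoop-daemon.sh start datanode # on each slave: restart the DataNode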
3) On master, re-format the NameNode and restart Hadoop:
cd /opt/hadoop-2.9.0 # enter the Hadoop directory
bin/hadoop namenode -format # format the NameNode
sbin/start-all.sh # start HDFS and YARN
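Note that bin/hadoop namenode -format still works but triggers the DEPRECATED warning shown in the output below; the current form of the same command is:
bin/hdfs namenode -format # format the NameNode via the hdfs command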
[spark@master hadoop-2.9.0]$ cd /opt/hadoop-2.9.0 # enter the Hadoop directory
[spark@master hadoop-2.9.0]$ bin/hadoop namenode -format # format the NameNode
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
// :: INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.0.120
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.9.0
STARTUP_MSG: classpath = /opt/hadoop-2.9.0/etc/hadoop:/opt/hadoop-2.9.0/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/opt/hadoop-2.9.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-...
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50; compiled by 'arsuresh' on 2017-11-13T23:15Z
STARTUP_MSG: java = 1.8.0_171
************************************************************/
// :: INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
// :: INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-d4e2f108-de3c--9eeb-abbbb1024fe8
// :: INFO namenode.FSEditLog: Edit logging is async:true
// :: INFO namenode.FSNamesystem: KeyProvider: null
// :: INFO namenode.FSNamesystem: fsLock is fair: true
// :: INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
// :: INFO namenode.FSNamesystem: fsOwner = spark (auth:SIMPLE)
// :: INFO namenode.FSNamesystem: supergroup = supergroup
// :: INFO namenode.FSNamesystem: isPermissionEnabled = true
// :: INFO namenode.FSNamesystem: HA Enabled: false
// :: INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to . Disabling file IO profiling
// :: INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=, counted=, effected=
// :: INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
// :: INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to :::00.000
// :: INFO blockmanagement.BlockManager: The block deletion will start around Jun ::
// :: INFO util.GSet: Computing capacity for map BlocksMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 2.0% max memory MB = 17.8 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
// :: WARN conf.Configuration: No unit for dfs.namenode.safemode.extension() assuming MILLISECONDS
// :: INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
// :: INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes =
// :: INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension =
// :: INFO blockmanagement.BlockManager: defaultReplication =
// :: INFO blockmanagement.BlockManager: maxReplication =
// :: INFO blockmanagement.BlockManager: minReplication =
// :: INFO blockmanagement.BlockManager: maxReplicationStreams =
// :: INFO blockmanagement.BlockManager: replicationRecheckInterval =
// :: INFO blockmanagement.BlockManager: encryptDataTransfer = false
// :: INFO blockmanagement.BlockManager: maxNumBlocksToLog =
// :: INFO namenode.FSNamesystem: Append Enabled: true
// :: INFO util.GSet: Computing capacity for map INodeMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 1.0% max memory MB = 8.9 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSDirectory: ACLs enabled? false
// :: INFO namenode.FSDirectory: XAttrs enabled? true
// :: INFO namenode.NameNode: Caching file names occurring more than times
// :: INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false
// :: INFO util.GSet: Computing capacity for map cachedBlocks
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.25% max memory MB = 2.2 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets =
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users =
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = ,,
// :: INFO namenode.FSNamesystem: Retry cache on namenode is enabled
// :: INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is millis
// :: INFO util.GSet: Computing capacity for map NameNodeRetryCache
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.029999999329447746% max memory MB = 273.1 KB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSImage: Allocated new BlockPoolId: BP--192.168.0.120-
// :: INFO common.Storage: Storage directory /opt/hadoop-2.9.0/dfs/name has been successfully formatted.
// :: INFO namenode.FSImageFormatProtobuf: Saving image file /opt/hadoop-2.9.0/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
// :: INFO namenode.FSImageFormatProtobuf: Image file /opt/hadoop-2.9.0/dfs/name/current/fsimage.ckpt_0000000000000000000 of size bytes saved in seconds.
// :: INFO namenode.NNStorageRetentionManager: Going to retain images with txid >=
// :: INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.0.120
************************************************************/
[spark@master hadoop-2.9.0]$ sbin/start-all.sh # start HDFS and YARN
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop-2.9.0/logs/hadoop-spark-namenode-master.out
slave1: starting datanode, logging to /opt/hadoop-2.9.0/logs/hadoop-spark-datanode-slave1.out
slave3: starting datanode, logging to /opt/hadoop-2.9.0/logs/hadoop-spark-datanode-slave3.out
slave2: starting datanode, logging to /opt/hadoop-2.9.0/logs/hadoop-spark-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop-2.9.0/logs/hadoop-spark-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.9.0/logs/yarn-spark-resourcemanager-master.out
slave2: starting nodemanager, logging to /opt/hadoop-2.9.0/logs/yarn-spark-nodemanager-slave2.out
slave3: starting nodemanager, logging to /opt/hadoop-2.9.0/logs/yarn-spark-nodemanager-slave3.out
slave1: starting nodemanager, logging to /opt/hadoop-2.9.0/logs/yarn-spark-nodemanager-slave1.out
4) After about 30 seconds, check whether master, slave1, slave2, and slave3 started successfully.
Check master:
[spark@master hadoop-2.9.0]$ jps
Jps
ResourceManager
NameNode
SecondaryNameNode
[spark@master hadoop-2.9.0]$
On slave1, slave2, and slave3, run jps and confirm that the DataNode and NodeManager processes are running.
Take slave1 as an example:
[spark@slave1 hadoop-2.9.0]$ jps
Jps
NodeManager
DataNode
[spark@slave1 hadoop-2.9.0]$
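As an extra check beyond jps, you can ask the NameNode how many DataNodes have registered; after the fix it should report three live datanodes (slave1, slave2, slave3):
bin/hdfs dfsadmin -report # run on master; lists live/dead datanodes and capacity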
Reference: https://blog.csdn.net/magggggic/article/details/52503502