cat logs/hadoop-root-datanode-hadoop1.log

************************************************************/
2017-12-03 21:05:25,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-12-03 21:05:25,663 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-12-03 21:05:25,710 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-12-03 21:05:25,711 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-12-03 21:05:25,713 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is hadoop1
2017-12-03 21:05:25,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-12-03 21:05:25,729 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-12-03 21:05:25,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-12-03 21:05:25,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-12-03 21:05:25,779 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-12-03 21:05:25,782 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-12-03 21:05:25,791 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-12-03 21:05:25,792 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-12-03 21:05:25,792 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-12-03 21:05:25,792 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-12-03 21:05:25,802 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-12-03 21:05:25,804 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2017-12-03 21:05:25,804 INFO org.mortbay.log: jetty-6.1.26
2017-12-03 21:05:25,907 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2017-12-03 21:05:26,032 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2017-12-03 21:05:26,033 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2017-12-03 21:05:26,088 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2017-12-03 21:05:26,097 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2017-12-03 21:05:26,122 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2017-12-03 21:05:26,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2017-12-03 21:05:26,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2017-12-03 21:05:26,145 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop1/192.168.2.51:9000 starting to offer service
2017-12-03 21:05:26,148 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-12-03 21:05:26,148 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-12-03 21:05:26,350 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/hadoop/data/in_use.lock acquired by nodename 11010@hadoop1
2017-12-03 21:05:26,351 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hadoop-2.6.5/data: namenode clusterID = CID-d85ea49a-7513-4c2d-b584-70d7eeeba6dc; datanode clusterID = CID-1fb49813-9c6a-4e0b-89e7-982a18f76296
2017-12-03 21:05:26,352 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop1/192.168.2.51:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1342)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1308)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:226)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:867)
at java.lang.Thread.run(Thread.java:748)
2017-12-03 21:05:26,353 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop1/192.168.2.51:9000
2017-12-03 21:05:26,453 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-12-03 21:05:28,453 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-12-03 21:05:28,454 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-12-03 21:05:28,455 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop1/192.168.2.51
************************************************************/

The key failure is "Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service". The root cause is the WARN logged just before it: the clusterID stored in the DataNode's data directory (CID-1fb49813-9c6a-4e0b-89e7-982a18f76296) no longer matches the NameNode's clusterID (CID-d85ea49a-7513-4c2d-b584-70d7eeeba6dc). This usually means the NameNode was reformatted while the DataNode kept the storage directory from an earlier format, so the DataNode refuses to register and shuts down.
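
To confirm the mismatch, compare the clusterID recorded in the VERSION file on each side. The paths below are only assumptions based on the directories that appear in the log; use the actual dfs.namenode.name.dir and dfs.datanode.data.dir values from hdfs-site.xml:

# assumed NameNode metadata dir -- check dfs.namenode.name.dir in hdfs-site.xml
grep clusterID /usr/local/hadoop/hadoop/name/current/VERSION
# assumed DataNode storage dir -- check dfs.datanode.data.dir in hdfs-site.xml
grep clusterID /usr/local/hadoop/hadoop/data/current/VERSION

If the two clusterID values differ, the DataNode will keep failing with the error above until they are brought back in sync.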

Solution (note: this wipes all HDFS data, so it is only appropriate for a test cluster or one whose data can be discarded):

Clear the NameNode name directory and the DataNode data directory:

rm -rf name/*; rm -rf data/*

Reformat the NameNode:

hadoop namenode -format

Then restart the cluster.
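
As a fuller sketch of the whole recovery (the storage paths are assumptions and must be replaced with the directories configured in hdfs-site.xml; every command here destroys existing HDFS data):

# stop HDFS before touching the storage directories
stop-dfs.sh
# assumed dfs.namenode.name.dir -- run on the NameNode host
rm -rf /usr/local/hadoop/hadoop/name/*
# assumed dfs.datanode.data.dir -- run on every DataNode host
rm -rf /usr/local/hadoop/hadoop/data/*
# write a fresh fsimage with a new clusterID
hadoop namenode -format
# bring HDFS back up and verify both daemons are running
start-dfs.sh
jps

After this, the NameNode and every DataNode share the newly generated clusterID, so the block pool registration succeeds.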
