The NameNode on the master node fails intermittently while everything else looks fine: the standby NameNode (SNN) is healthy and all of the related JournalNodes are up, yet the NameNode on the master keeps stopping.

Check the NameNode log on the Hadoop master node:

2016-11-21 22:36:40,908 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 19822 ms (timeout=20000 ms) for a response for sendEdits. No responses yet.
2016-11-21 22:36:41,088 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: flush failed for required journal (JournalAndStream(mgr=QJM to [192.168.58.183:8485, 192.168.58.181:8485, 192.168.58.182:8485], stream=QuorumOutputStream starting at txid 24533))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:639)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2645)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2520)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:579)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
2016-11-21 22:36:41,089 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting QuorumOutputStream starting at txid 24533
2016-11-21 22:36:41,113 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-11-21 22:36:41,122 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Slave2/192.168.58.182:8485. Already tried 0 time(s); maxRetries=45
2016-11-21 22:36:41,123 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Slave1/192.168.58.181:8485. Already tried 0 time(s); maxRetries=45
2016-11-21 22:36:41,123 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: StandByNameNode/192.168.58.183:8485. Already tried 0 time(s); maxRetries=45
2016-11-21 22:36:41,137 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 20050ms to send a batch of 1 edits (218 bytes) to remote journal 192.168.58.182:8485
2016-11-21 22:36:41,137 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 20052ms to send a batch of 1 edits (218 bytes) to remote journal 192.168.58.181:8485
2016-11-21 22:36:41,137 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 20065ms to send a batch of 1 edits (218 bytes) to remote journal 192.168.58.183:8485
2016-11-21 22:36:41,145 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at CentOSMaster/192.168.58.180
************************************************************/

First make sure dfs.namenode.edits.dir and dfs.journalnode.edits.dir are set correctly, then raise the QJM timeouts in hdfs-site.xml as shown below. The 20000 ms timeout in the log is the default value of dfs.qjournal.write-txns.timeout.ms; setting these properties to 600000000 ms (roughly a week) effectively disables the timeouts:

<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.prepare-recovery.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.accept-recovery.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.finalize-segment.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.get-journal-state.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.new-epoch.timeout.ms</name>
  <value>600000000</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>600000000</value>
</property>
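
For completeness, here is a minimal sketch of the directory settings mentioned above, assuming a QJM-based HA layout with the three JournalNodes that appear in the log (StandByNameNode, Slave1, Slave2). The local paths and the nameservice ID "mycluster" are placeholders, not values from the original cluster:

<!-- Sketch only: local paths and the nameservice ID "mycluster" are placeholders. -->
<property>
  <!-- Local directory where the NameNode keeps its own copy of the edit log. -->
  <name>dfs.namenode.edits.dir</name>
  <value>/data/hadoop/hdfs/namenode</value>
</property>
<property>
  <!-- Local directory where each JournalNode stores the shared edits. -->
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop/hdfs/journal</value>
</property>
<property>
  <!-- QJM URI pointing at the three JournalNodes seen in the log above. -->
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://StandByNameNode:8485;Slave1:8485;Slave2:8485/mycluster</value>
</property>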

This seems to have fixed it; as of this morning the problem has not come back.
