The dfs.datanode.max.xcievers parameter causes HBase cluster errors
2013/08/09 Reposted from http://bkeep.blog.163.com/blog/static/123414290201272644422987/
[Case study] The dfs.datanode.max.xcievers parameter causes errors on an hbase-0.92 cluster
Scenario:
15 DataNodes are down; only 2 are still alive.
[dwhftp@dw-hbase-1 ~]$ hadoop dfsadmin -report
Configured Capacity: 73837983129600 (67.16 TB)
Present Capacity: 69740285348454 (63.43 TB)
DFS Remaining: 61837580668928 (56.24 TB)
DFS Used: 7902704679526 (7.19 TB)
DFS Used%: 11.33%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
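The full report also summarizes live versus dead nodes. A quick sketch for pulling that out (assuming the Hadoop 0.20-era report format, with a "Datanodes available" summary line and per-node "Name:" entries):
# Summary line: live count, total, and dead count
hadoop dfsadmin -report | grep 'Datanodes available'
# Per-node entries, to see exactly which DataNodes are gone
hadoop dfsadmin -report | grep 'Name:'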
Of the 14 RegionServers, only 2 are still alive.
[dwhftp@dw-hbase-11 ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.0, r1231986, Mon Jan 16 13:16:35 UTC 2012
hbase(main):001:0> status
14 servers, 0 dead, 4739.8571 average load
The service processes have not died (which is why status above still reports 14 servers), but they can no longer serve requests, and each consumes 16 GB of memory:
dwhftp 29754 1 15 Jul30 ? 3-07:58:27 /usr/alibaba/java/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx16000m -Dhbase.log.dir=/home/dwhftp/opt/hbase/logs -Dhbase.log.file=hbase-dwhftp-regionserver-dw-hbase-9.hst.ali.dw.alidc.net.log -Dhbase.home.dir=/home/dwhftp/opt/hbase -Dhbase.id.str=dwhftp -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/dwhftp/opt/hbase/lib/native/Linux-amd64-64 -classpath /home/dwhftp/opt/hbase/conf:/usr/alibaba/java/lib/tools.jar:/home/dwhftp/opt/hbase:/home/dwhftp/opt/hbase/hbase-0.92.0.jar:/home/dwhftp/opt/hbase/hbase-0.92.0-tests.jar:/home/dwhftp/opt/hbase/lib/activation-1.1.jar:/home/dwhftp/opt/hbase/lib/asm-3.1.jar:/home/dwhftp/opt/hbase/lib/avro-1.5.3.jar:/home/dwhftp/opt/hbase/lib/avro-ipc-1.5.3.jar:/home/dwhftp/opt/hbase/lib/commons-beanutils-1.7.0.jar:/home/dwhftp/opt/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/dwhftp/opt/hbase/lib/commons-cli-1.2.jar:/home/dwhftp/opt/hbase/lib/commons-codec-1.4.jar:/home/dwhftp/opt/hbase/lib/commons-collections-3.2.1.jar:/home/dwhftp/opt/hbase/lib/commons-configuration-1.6.jar:/home/dwhftp/opt/hbase/lib/commons-digester-1.8.jar:/home/dwhftp/opt/hbase/lib/commons-el-1.0.jar:/home/dwhftp/opt/hbase/lib/commons-httpclient-3.1.jar:/home/dwhftp/opt/hbase/lib/commons-lang-2.5.jar:/home/dwhftp/opt/hbase/lib/commons-logging-1.1.1.jar:/home/dwhftp/opt/hbase/lib/commons-math-2.1.jar:/home/dwhftp/opt/hbase/lib/commons-net-1.4.1.jar:/home/dwhftp/opt/hbase/lib/core-3.1.1.jar:/home/dwhftp/opt/hbase/lib/guava-r09.jar:/home/dwhftp/opt/hbase/lib/guava-r09-jarjar.jar:/home/dwhftp/opt/hbase/lib/hadoop-core-0.20.2-cdh3u3.jar:/home/dwhftp/opt/hbase/lib/high-scale-lib-1.1.1.jar:/home/dwhftp/opt/hbase/lib/httpclient-4.0.1.jar:/home/dwhftp/opt/hbase/lib/httpcore-4.0.1.jar:/home/dwhftp/opt/hbase/lib/jackson-core-asl-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-xc-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jamon-runtime-2.3.1.jar:/home/dwhftp/opt/hbase/lib/jasper-compiler-5.5.23.jar:/home/dwhftp/opt/hbase/lib/jasper-runtime-5.5.23.jar:/home/dwhftp/opt/hbase/lib/jaxb-api-2.1.jar:/home/dwhftp/opt/hbase/lib/jaxb-impl-2.1.12.jar:/home/dwhftp/opt/hbase/lib/jersey-core-1.4.jar:/home/dwhftp/opt/hbase/lib/jersey-json-1.4.jar:/home/dwhftp/opt/hbase/lib/jersey-server-1.4.jar:/home/dwhftp/opt/hbase/lib/jettison-1.1.jar:/home/dwhftp/opt/hbase/lib/jetty-6.1.26.jar:/home/dwhftp/opt/hbase/lib/jetty-util-6.1.26.jar:/home/dwhftp/opt/hbase/lib/jruby-complete-1.6.5.jar:/home/dwhftp/opt/hbase/lib/jsp-2.1-6.1.14.jar:/home/dwhftp/opt/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/dwhftp/opt/hbase/lib/libthrift-0.7.0.jar:/home/dwhftp/opt/hbase/lib/log4j-1.2.16.jar:/home/dwhftp/opt/hbase/lib/netty-3.2.4.Final.jar:/home/dwhftp/opt/hbase/lib/protobuf-java-2.4.0a.jar:/home/dwhftp/opt/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/dwhftp/opt/hbase/lib/servlet-api-2.5.jar:/home/dwhftp/opt/hbase/lib/slf4j-api-1.5.8.jar:/home/dwhftp/opt/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/dwhftp/opt/hbase/lib/snappy-java-1.0.3.2.jar:/home/dwhftp/opt/hbase/lib/stax-api-1.0.1.jar:/home/dwhftp/opt/hbase/lib/velocity-1.7.jar:/home/dwhftp/opt/hbase/lib/xmlenc-0.52.jar:/home/dwhftp/opt/hbase/lib/zookeeper-3.4.2.jar::/home/dwhftp/opt/hadoop/conf:/home/dwhftp/opt/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start
Scanning a table from the hbase shell fails as follows:
hbase(main):001:0> scan '20120819entry'
ROW COLUMN+CELL
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/dwhftp/opt/install/hbase-0.92.0/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/dwhftp/opt/install/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2819)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1755)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
RegionServer error log:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/dw-hbase-17,60020,1345529427507/dw-hbase-17%2C60020%2C1345529427507.1345529431034 File does not exist. [Lease. Holder: DFSClient_-1871695140, pendingcreates: 4]
... part of the log omitted ...
2012-08-21 14:35:13,530 WARN org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Bad connect ack with firstBadLink as 172.16.197.18:50010
Errors on the HMaster:
[dwhftp@dw-hbase-3 logs]$ pwd
/home/dwhftp/opt/hbase/logs
[dwhftp@dw-hbase-3 logs]$ tail -200f hbase-dwhftp-master-dw-hbase-3.hst.ali.dw.alidc.net.log
2012-08-21 10:37:56,548 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=dw-hbase-9,60020,1343643389530; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
Hadoop logs (here the problem is finally found!):
[dwhftp@dw-hbase-13 logs]$ pwd
/home/dwhftp/opt/hadoop/logs
[dwhftp@dw-hbase-13 logs]$ vi hadoop-dwhftp-datanode-dw-hbase-13.hst.ali.dw.alidc.net.log
2012-08-21 14:35:00,203 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.197.25:50010, storageID=DS-2042732685-172.16.197.25-50010-1334122560477, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: xceiverCount 4097 exceeds the limit of concurrent xcievers 4096
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:156)
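To confirm how widespread this was, the same message can be grepped out of every DataNode's log (a minimal sketch; the log path follows this cluster's layout):
# Count xceiver-limit errors in the local DataNode log
grep -c 'exceeds the limit of concurrent xcievers' \
  /home/dwhftp/opt/hadoop/logs/hadoop-dwhftp-datanode-*.log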
Fix: raise the xcievers limit.
The limit in effect was 4096 (as the DataNode log above shows); change it to 8192:
vi /home/dwhftp/opt/hadoop/conf/hdfs-site.xml
<property>
<name>dfs.datanode.max.xcievers</name>
<value>8192</value>
</property>
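The new value only takes effect after the DataNodes are restarted, and it has to be applied on every DataNode. A minimal roll-out sketch (assuming passwordless ssh and the dw-hbase-5..18 host naming used on this cluster):
# Push the updated hdfs-site.xml to every DataNode host
for i in $(seq 5 18); do
  scp /home/dwhftp/opt/hadoop/conf/hdfs-site.xml dw-hbase-$i:/home/dwhftp/opt/hadoop/conf/
done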
About the dfs.datanode.max.xcievers parameter
A Hadoop HDFS DataNode has an upper bound on the number of files it will serve at any one time. The parameter is called xcievers (the Hadoop authors misspelled the word). Before loading data, make sure you have set the xceivers parameter in conf/hdfs-site.xml to at least 4096:
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
Remember to restart HDFS after changing its configuration.
Without this setting you may hit strange failures: the DataNode logs show the xcievers limit being exceeded, while clients report missing-blocks errors such as: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [5]
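One way to watch how close a DataNode is to the limit is to count its DataXceiver threads (a sketch, assuming jstack from the JDK is available and the DataNode runs as user dwhftp; thread naming can vary by Hadoop version):
# The DataXceiver thread count roughly matches the xceiverCount in the error above
DN_PID=$(pgrep -u dwhftp -f org.apache.hadoop.hdfs.server.datanode.DataNode)
jstack $DN_PID | grep -c DataXceiver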
Restart HBase and HDFS
On hbase-1:
[dwhftp@dw-hbase-1 ~]$ start-dfs.sh
On hbase-3:
[dwhftp@dw-hbase-3 ~]$ start-hbase.sh
Also on hbase-3, MapReduce must be started for syncing data with Yunti (云梯, Alibaba's Hadoop platform):
[dwhftp@dw-hbase-3 ~]$ start-mapred.sh
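After the restart, it is worth verifying that all nodes rejoined, reusing the commands from the scenario section (piping status into the shell is a quick non-interactive check):
hadoop dfsadmin -report | grep 'Datanodes available'
echo status | hbase shell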
Follow-up actions
Add monitoring
Port monitoring and process monitoring (a port-check sketch follows the host list):
hbase-1 (namenode) 9000 50070 80 2181 3555
hbase-2 (SecondaryNameNode) 50090 2181 3555
hbase-3 (HMaster) 60010 80 2181 3555
hbase-4 (standby HMaster) 60000 2181 3555
hbase-5 (HRegionServer) 60020 60030 2181 3555 datanode 50010 50075
hbase-6 ~ dw-hbase-18 (HRegionServer) 60020 60030 datanode 50010 50075
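A minimal port-probe sketch with nc, using host/port pairs from the list above (assumes a netcat that supports -z; a real deployment would feed these checks into the monitoring system):
# Exit status 0 from nc -z means something is listening on the port
for port in 60020 60030 50010 50075; do
  nc -z -w 3 dw-hbase-5 $port || echo "dw-hbase-5:$port DOWN"
done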
Write our own monitoring scripts
Monitor that the HBase and DataNode processes are alive
Monitor for the ERROR keyword in the logs (a sketch follows)
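A minimal cron-able sketch covering both checks (process names, user, and log paths follow this cluster's layout; the echo alerts are placeholders for a real notification mechanism):
#!/bin/bash
# Alert when the HRegionServer or DataNode JVM is gone
for proc in HRegionServer DataNode; do
  pgrep -f $proc >/dev/null || echo "ALERT: $proc not running on $(hostname)"
done
# Crude ERROR scan over the tail of the regionserver log (a real setup would track offsets)
if tail -n 1000 /home/dwhftp/opt/hbase/logs/hbase-*-regionserver-*.log 2>/dev/null | grep -q ' ERROR '; then
  echo "ALERT: ERROR entries in the regionserver log on $(hostname)"
fi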
Capacity
Agree on availability expectations with the developers
The NameNode is a single point of failure, so 24x7 service is simply not achievable. When a single machine dies, the HBase cluster takes time to rebuild, which means minutes of unavailability. This must be spelled out to the developers, and it must be settled who maintains HBase.
Also set up an HBase maintenance chat group to make communication easier.
HBase resource requests
Multiple applications share the HBase cluster, so resources must be isolated, and business reviews need a defined process and standards.
Log level adjustment
info, debug, warn, and error messages all land in the same log file; the sheer volume slows down troubleshooting.
Configure log4j to write ERROR-level entries to a separate file (a sketch follows).
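A minimal log4j.properties sketch for this (the ERRORFILE appender name and hbase-error.log file name are assumptions; HBase 0.92 ships log4j 1.x, and ${hbase.log.dir} is set by the launcher, as the process command line above shows):
# Extra appender that captures only ERROR and above into its own file
log4j.appender.ERRORFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ERRORFILE.File=${hbase.log.dir}/hbase-error.log
log4j.appender.ERRORFILE.Threshold=ERROR
log4j.appender.ERRORFILE.MaxFileSize=100MB
log4j.appender.ERRORFILE.MaxBackupIndex=10
log4j.appender.ERRORFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ERRORFILE.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
# Attach it alongside the existing DRFA appender on the root logger
log4j.rootLogger=INFO,DRFA,ERRORFILE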
Learn from Taobao's experience
Routine maintenance tasks
Monitoring points