Found lingering reference exception
ERROR: Found lingering reference file hdfs://jiujiang1:9000/hbase/month_hotstatic/5af24d51488823419d155283441c2d0f/c/9b58bc5e853f445e9f28b98a36da6d04.b330aa24d0e3652ae89e6674fc2b3689


Official solution:
First approach: hbase hbck -fixReferenceFiles month_hotstatic
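
A minimal sketch of that workflow (assuming HBase 1.x, where the offline hbck tool can still apply repairs; month_hotstatic is the table from the error above):

# Check only the affected table first (hbck is read-only by default)
hbase hbck month_hotstatic

# Repair the lingering reference files for that table
hbase hbck -fixReferenceFiles month_hotstatic

# Re-run the check to confirm the table is consistent again
hbase hbck month_hotstatic

Scoping hbck to the table name keeps both the check and the repair limited to the affected table.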


Another approach:
http://stackoverflow.com/questions/17810443/error-found-inconsistency-in-table-hbase


This looks like you had a failed region split; see [HBASE-8502](https://issues.apache.org/jira/browse/HBASE-8502) for more details.

This bug leaves references to parent regions that have been moved in HDFS. To fix, just delete the reference files listed in the HBCK output e.g. hadoop fs -rm hdfs://master:8020/hbase/LogTable/f41ff2fae25d1dab3f16306f4f995369/l/d9c7d33257ae406caf8d94277ff6d247.fbda7904cd1f0ac9583e04029a138487.

Once the bad references are gone the region should be assigned automatically. You may have to do the assignment from the shell, in my experience though it only takes a minute or two for the region to get reassigned. Then run hbase hbck -fix again to confirm there are no other inconsistencies.
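
Put together, a hedged sketch of that manual procedure, using the reference file and region from the error at the top of this post (the encoded region name passed to assign is taken from the HDFS path and should be double-checked against the hbck output):

# 1. Delete the lingering reference file reported by hbck
hadoop fs -rm hdfs://jiujiang1:9000/hbase/month_hotstatic/5af24d51488823419d155283441c2d0f/c/9b58bc5e853f445e9f28b98a36da6d04.b330aa24d0e3652ae89e6674fc2b3689

# 2. If the region is still not assigned after a minute or two, assign it from the HBase shell
#    (use the encoded region name shown in the hbck output)
echo "assign '5af24d51488823419d155283441c2d0f'" | hbase shell

# 3. Confirm there are no other inconsistencies
hbase hbck -fix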




# Capture the full hbck report (stdout and stderr) into 1.log
hbase hbck > 1.log 2>&1

# List every ERROR line from the report
grep -i "ERROR" 1.log

# Extract only the HDFS paths of the lingering reference files into a.txt
grep -i "ERROR" 1.log | awk -F"ERROR: Found lingering reference file " '{print $2}' > a.txt

#!/bin/sh
# Delete each lingering reference file listed in a.txt
while read -r line
do
        hadoop fs -rm -r "$line"    # -rm -r replaces the deprecated -rmr
done < a.txt
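
Once the script has run, it is worth re-checking, as the Stack Overflow answer suggests; a small follow-up sketch (assuming the same log naming convention as above):

# Re-run hbck and confirm no lingering references remain
hbase hbck -fix > 2.log 2>&1
grep -ic "Found lingering reference" 2.log    # should print 0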
