From: http://centoshowtos.org/hadoop/fix-corrupt-blocks-on-hdfs/

How do I know if my Hadoop HDFS filesystem has corrupt blocks, and how do I fix it?

The easiest way to determine this is to run an fsck on the filesystem. If you have set up your Hadoop environment variables, you should be able to use a path of /; if not, use the full NameNode URI. Note that hdfs:// URIs take the NameNode RPC port (8020 by default on Hadoop 2.x), not the 50070 web UI port.

hdfs fsck /

or

hdfs fsck hdfs://ip.or.hostname:8020/
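
As a side note (not from the original article): if you just want a quick health summary without walking the whole namespace, hdfs dfsadmin -report prints cluster-wide counters that include missing and corrupt-replica block counts; the exact counter names vary by Hadoop version.

hdfs dfsadmin -report | grep -i -E 'missing|corrupt'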

If the end of your output looks something like this, you have corrupt blocks on your filesystem.

.............................Status: CORRUPT
Total size: 3453345169348 B (Total open files size: 664 B)
Total dirs: 15233
Total files: 14029
Total symlinks: 0 (Files currently being written: 8)
Total blocks (validated): 40961 (avg. block size 84308126 B) (Total open file blocks (not validated): 8)
********************************
CORRUPT FILES: 2
MISSING BLOCKS: 2
MISSING SIZE: 15731297 B
CORRUPT BLOCKS: 2
********************************
Corrupt blocks: 2
Number of data-nodes: 12
Number of racks: 2
FSCK ended at Fri Mar 27 XX:03:21 UTC 201X in XXX milliseconds

The filesystem under path '/' is CORRUPT

How do I know which files have blocks that are corrupt?

The output of the fsck above will be very verbose, but it will mention which blocks are corrupt. We can grep the output so that we aren't drinking from a firehose.

hdfs fsck / | egrep -v '^\.+$' | grep -v replica | grep -v Replica

or

hdfs fsck hdfs://ip.or.host:8020/ | egrep -v '^\.+$' | grep -v replica | grep -v Replica

This will list only the affected files: the egrep strips out the lines of progress dots, and the two replica greps filter out notices about under-replicated blocks (which aren't necessarily a problem). The output should include something like this for each affected file.

/path/to/filename.fileextension: CORRUPT blockpool BP-1016133662-10.29.100.41-1415825958975 block blk_1073904305

/path/to/filename.fileextension: MISSING 1 blocks of total size 15620361 B
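
Reasonably recent Hadoop releases also have a dedicated fsck flag that prints just the corrupt blocks and the files they belong to, which avoids the grepping entirely (an aside not in the original article; check hdfs fsck -help on your version):

hdfs fsck / -list-corruptfileblocks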

The next step is to determine the importance of the file: can it simply be removed and copied back into place, or does it contain data that needs to be regenerated?

If it's easy enough just to replace the file, that's the route I would take.

Remove the corrupted file from your Hadoop cluster

Either of these commands will move the corrupted file to the trash.

hdfs dfs -rm /path/to/filename.fileextension
hdfs dfs -rm hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension

Or you can skip the trash to delete the file permanently (which is probably what you want to do).

hdfs dfs -rm -skipTrash /path/to/filename.fileextension
hdfs dfs -rm -skipTrash hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension
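
If many files are affected, fsck itself can clean up for you: -delete removes the corrupt files and -move relocates them to /lost+found instead. This is a hedged aside, not from the original article, and both flags are destructive, so run a plain fsck first and review the list of affected files.

hdfs fsck / -delete
hdfs fsck / -move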

How would I repair a corrupted file if it was not easy to replace?

This might or might not be possible, but the first step would be to gather information on the file's location, and blocks.

hdfs fsck /path/to/filename.fileextension -locations -blocks -files
hdfs fsck hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension -locations -blocks -files

From this data, you can track down the nodes where the corruption lives. On those nodes, look through the logs to determine what the issue is: a replaced disk, I/O errors on the server, and so on. If you can recover that machine and bring the partition holding the blocks back online, the DataNode will report the blocks back to the NameNode and the file will be healthy again. If that isn't possible, you will unfortunately have to find another way to regenerate the file.
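
For example, once fsck has shown you a block ID and which DataNodes held replicas of it, you can log into each of those nodes and look for the underlying block file and any log entries about it. A minimal sketch, with assumptions: /data/dfs/dn stands in for whatever dfs.datanode.data.dir is set to on your cluster, the log path varies by distribution, and the block ID is the one from the sample output above.

find /data/dfs/dn -name 'blk_1073904305*' 2>/dev/null
grep blk_1073904305 /var/log/hadoop-hdfs/*.log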
