In short, the experiment went like this:



1. Put a 600 MB file: it is split into 9 blocks with 3 replicas each, 27 block replicas in total, spread across 4 datanodes.



2. I stopped two of the datanodes, so that most blocks survived on only one or two nodes; but because the 9 blocks were well spread out, the file could still be retrieved intact (file integrity is verified via block checksums).



3. The hadoop namenode quickly re-replicated the blocks that were down to a single replica, bringing each back to 2 (fsck reported "Target Replicas is 3 but only found 2 replica(s)").



4. I then stopped one more datanode. Because the blocks had been distributed evenly across the datanodes, and every block had already been brought back up to 2 replicas, the file was still healthy even with just one datanode left.



5. I deleted one blk file from this sole surviving datanode, and the namenode reported the file as corrupt. (I had been hoping the cluster would enter safe mode, but -safemode get kept returning OFF.)



6. Then I started another datanode; in under 30 seconds the missing block was quickly restored from this newly started node, back to 2 replicas.



The fault tolerance is very reliable. With at least three racks, the data would be rock solid. My trust in Hadoop just leveled up!

First, a quick look at some of HDFS's basic characteristics.



HDFS design assumptions and goals



Hardware failure is the norm, hence the need for redundancy.

Streaming data access: data is read in bulk rather than via random reads/writes. What Hadoop excels at is data analysis, not transaction processing.

Large data sets.

Simple coherency model: to reduce system complexity, files follow a write-once, read-many design. Once a file has been written and closed, it can never be modified.

Computation is scheduled on the "move the computation close to the data" principle.

HDFS architecture



NameNode

DataNode

Edit log (transaction log)

FsImage (namespace image file)

SecondaryNameNode

Namenode



Manages the filesystem namespace.

Records the location and replica information of every file's data blocks on the datanodes.

Coordinates client access to files.

Records changes to the namespace and to the namespace's own properties.

The namenode uses the transaction (edit) log to record changes to HDFS metadata, and the image file (FsImage) to store the filesystem namespace, including the file-to-block mapping, file attributes, and so on.

Datanode



Manages storage on the physical node where it runs.

Write once, read many (no modification).

Files are made up of data blocks; the typical block size is 64 MB.

Data blocks are spread across the nodes as evenly as possible.
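The block arithmetic can be sanity-checked directly: a file is simply cut into fixed-size chunks, with the remainder in the last one. A minimal sketch (597,639,882 bytes is the size of the test file used later in this post):

```python
# Split a file of a given size into HDFS-style fixed-size blocks.
BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the default block size in Hadoop 0.20.x

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the list of block sizes a file of file_size bytes occupies."""
    full, rem = divmod(file_size, block_size)
    return [block_size] * full + ([rem] if rem else [])

sizes = split_into_blocks(597639882)  # the ~600 MB test file
print(len(sizes))   # 9 blocks
print(sizes[-1])    # 60768970 bytes in the final, partial block
```

This matches the fsck listing below: eight full 67,108,864-byte blocks plus one 60,768,970-byte tail.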

The read path



A client wants to access a file in HDFS.

It first obtains from the namenode the list of locations of the blocks that make up the file.

From that list it knows which datanodes store each block.

It then contacts those datanodes to fetch the data.

The namenode does not take part in the actual data transfer.
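The read path above can be sketched with stub objects. This is an illustration of the control flow only; the class and method names here are made up, not the real HDFS client API:

```python
# Illustrative sketch of the HDFS read path: the namenode only hands out
# block locations; the client streams block data from datanodes directly.

class StubNamenode:
    def __init__(self, block_map):
        # block_map: file path -> ordered list of (block_id, [datanode, ...])
        self.block_map = block_map

    def get_block_locations(self, path):
        return self.block_map[path]

class StubDatanode:
    def __init__(self, blocks):
        self.blocks = blocks  # block_id -> bytes

    def read_block(self, block_id):
        return self.blocks[block_id]

def read_file(namenode, datanodes, path):
    """Fetch every block from the first listed datanode and concatenate."""
    data = b""
    for block_id, locations in namenode.get_block_locations(path):
        # A real client would prefer the closest replica and retry on failure.
        data += datanodes[locations[0]].read_block(block_id)
    return data

nn = StubNamenode({"/f": [("blk_1", ["dn1"]), ("blk_2", ["dn2", "dn1"])]})
dns = {"dn1": StubDatanode({"blk_1": b"hello "}),
       "dn2": StubDatanode({"blk_2": b"world"})}
print(read_file(nn, dns, "/f"))  # b'hello world'
```

The key point the sketch captures: metadata comes from the namenode, bytes come from the datanodes.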

HDFS reliability



Redundant replica policy

Rack awareness policy

Heartbeat mechanism

Safe mode

Block checksums (Checksum) to verify file integrity

Trash (recycle bin)

Metadata protection

Snapshot mechanism

I separately tried out the redundant replica policy, heartbeats, safe mode, and the trash. The experiment below is about the redundant replica policy.
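The checksum item above is what let the file in step 2 of the summary come back intact: HDFS stores a checksum per small chunk of each block and verifies it on read. A simplified sketch using CRC32 over 512-byte chunks (512 is the default io.bytes.per.checksum; the helper names here are illustrative, not HDFS code):

```python
import zlib

CHUNK = 512  # HDFS checksums data in small chunks (io.bytes.per.checksum, default 512)

def chunk_checksums(data):
    """CRC32 of every chunk, stored alongside the block data."""
    return [zlib.crc32(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def verify(data, checksums):
    """On read, recompute and compare; a mismatch means this replica is corrupt."""
    return chunk_checksums(data) == checksums

block = b"some block contents" * 100
sums = chunk_checksums(block)
print(verify(block, sums))              # True: replica is intact
corrupted = b"X" + block[1:]
print(verify(corrupted, sums))          # False: client falls back to another replica
```

When verification fails, the client can fetch the same block from a different replica, which is exactly what makes the get in this experiment succeed after nodes die.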



Environment:



Namenode/master/jobtracker: h1 / 192.168.221.130

SecondaryNameNode: h1s / 192.168.221.131

Four datanodes: h1s, h2~h4 (IPs: 131, 142~144)

In case the file is too small to span more than one file block (block/blk), we prepare a somewhat larger file (600 MB) so that it is spread across several datanodes; we can then stop one or more of them and see whether anything goes wrong.


First, put the file (tip: append hadoop/bin to your $PATH so you can run hadoop commands without the full path):

hadoop fs -put ~/Documents/IMMAUSWX201304

Once that finishes, we can check the block layout, either in the web UI or with the fsck command on the namenode:

bin/hadoop fsck /user/hadoop_admin/in/bigfile -files -blocks -locations > ~/hadoopfiles/log1.txt


The output below shows that the 600 MB file was split into 9 blocks of up to 64 MB and spread across all my current datanodes (4 in total), fairly evenly:



/user/hadoop_admin/in/bigfile/USWX201304 597639882 bytes, 9 block(s):  OK

0. blk_-4541681964616523124_1011 len=67108864 repl=3 [192.168.221.131:50010, 192.168.221.142:50010, 192.168.221.144:50010]


1. blk_4347039731705448097_1011 len=67108864 repl=3 [192.168.221.143:50010, 192.168.221.131:50010, 192.168.221.144:50010]


2. blk_-4962604929782655181_1011 len=67108864 repl=3 [192.168.221.142:50010, 192.168.221.143:50010, 192.168.221.144:50010]


3. blk_2055128947154747381_1011 len=67108864 repl=3 [192.168.221.143:50010, 192.168.221.142:50010, 192.168.221.144:50010]


4. blk_-2280734543774885595_1011 len=67108864 repl=3 [192.168.221.131:50010, 192.168.221.142:50010, 192.168.221.144:50010]


5. blk_6802612391555920071_1011 len=67108864 repl=3 [192.168.221.143:50010, 192.168.221.142:50010, 192.168.221.144:50010]


6. blk_1890624110923458654_1011 len=67108864 repl=3 [192.168.221.143:50010, 192.168.221.142:50010, 192.168.221.144:50010]


7. blk_226084029380457017_1011 len=67108864 repl=3 [192.168.221.143:50010, 192.168.221.131:50010, 192.168.221.144:50010]


8. blk_-1230960090596945446_1011 len=60768970 repl=3 [192.168.221.142:50010, 192.168.221.143:50010, 192.168.221.144:50010]



Status: HEALTHY

Total size:    597639882 B

Total dirs:    0

Total files:   1

Total blocks (validated):      9 (avg. block size 66404431 B)

Minimally replicated blocks:   9 (100.0 %)

Over-replicated blocks:        0 (0.0 %)

Under-replicated blocks:       0 (0.0 %)

Mis-replicated blocks:         0 (0.0 %)

Default replication factor:    3

Average block replication:     3.0

Corrupt blocks:                0

Missing replicas:              0 (0.0 %)

Number of data-nodes:          4

Number of racks:               1
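Tallying the locations in the listing above confirms the spread (this is pure bookkeeping over the fsck output, not an HDFS API):

```python
from collections import Counter

# Replica locations copied from the fsck listing above (last IP octet only).
block_locations = [
    [131, 142, 144], [143, 131, 144], [142, 143, 144],
    [143, 142, 144], [131, 142, 144], [143, 142, 144],
    [143, 142, 144], [143, 131, 144], [142, 143, 144],
]

per_node = Counter(ip for locs in block_locations for ip in locs)
print(sorted(per_node.items()))  # [(131, 4), (142, 7), (143, 7), (144, 9)]
print(sum(per_node.values()))    # 27 replicas = 9 blocks x 3
```

So the spread is reasonably even, though 144 (h4) happens to hold a replica of every block, which helps explain why a get from h4 still works after other nodes die.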



All four datanodes (h1s, h2, h3, h4) were holding replicas. I went and stopped the datanodes on h2 (142) and h3 (143), then ran a get from h4. To my surprise the file came back, and at first glance the size was correct. Even with those two nodes dead, every blk still had at least one live source to read from, so the retrieved data was still complete. Hadoop really is impressive here: the load balancing is done well, the data proved resilient, and the fault tolerance is solid.






Checking again: I had actually wanted to test safemode, but when I refreshed a moment later, the blks that had been down to 1 live node had all been re-replicated to guarantee 2 replicas each!



hadoop_admin@h1:~/hadoop-0.20.2$ hadoop fsck /user/hadoop_admin/in/bigfile  -files -blocks -locations


/user/hadoop_admin/in/bigfile/USWX201304 597639882 bytes, 9 block(s):  

Under replicated blk_-4541681964616523124_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_4347039731705448097_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-4962604929782655181_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_2055128947154747381_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-2280734543774885595_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_6802612391555920071_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_1890624110923458654_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_226084029380457017_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-1230960090596945446_1011. Target Replicas is 3 but found 2 replica(s).


0. blk_-4541681964616523124_1011 len=67108864 repl=2 [192.168.221.131:50010, 192.168.221.144:50010]


1. blk_4347039731705448097_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


2. blk_-4962604929782655181_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


3. blk_2055128947154747381_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


4. blk_-2280734543774885595_1011 len=67108864 repl=2 [192.168.221.131:50010, 192.168.221.144:50010]


5. blk_6802612391555920071_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


6. blk_1890624110923458654_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


7. blk_226084029380457017_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


8. blk_-1230960090596945446_1011 len=60768970 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]



I decided to stop one more datanode, but then waited quite a while without the namenode noticing it had died. This is the heartbeat mechanism at work: every 3 seconds each datanode sends a heartbeat to the namenode to signal that it is alive, and only when the namenode has received no heartbeat for a long time (5~10 minutes depending on configuration) does it declare the node dead and start re-replicating its blocks, so that enough replicas remain to keep the data fault-tolerant. Printing the report again now: with only one live datanode left, every blk has exactly one replica:



hadoop_admin@h1:~$ hadoop fsck /user/hadoop_admin/in/bigfile -files -blocks -locations


/user/hadoop_admin/in/bigfile/USWX201304 597639882 bytes, 9 block(s):  Under replicated blk_-4541681964616523124_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_4347039731705448097_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_-4962604929782655181_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_2055128947154747381_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_-2280734543774885595_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_6802612391555920071_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_1890624110923458654_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_226084029380457017_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_-1230960090596945446_1011. Target Replicas is 3 but found 1 replica(s).
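On the heartbeat timing mentioned above: the "5~10 minutes" window comes from how the namenode computes heartbeat expiry. In the HDFS source of that era the dead-node threshold is roughly 2 x heartbeat.recheck.interval + 10 x dfs.heartbeat.interval; treat the exact formula as an assumption based on the 0.20.x code, but the defaults work out like this:

```python
# Namenode dead-node threshold, per the HDFS heartbeat-expiry formula:
#   expiry = 2 * heartbeat.recheck.interval + 10 * dfs.heartbeat.interval
HEARTBEAT_INTERVAL_S = 3      # dfs.heartbeat.interval (default: 3 seconds)
RECHECK_INTERVAL_S = 5 * 60   # heartbeat.recheck.interval (default: 300 seconds)

expire_s = 2 * RECHECK_INTERVAL_S + 10 * HEARTBEAT_INTERVAL_S
print(expire_s)           # 630 seconds
print(expire_s / 60)      # 10.5 minutes before a silent datanode is declared dead
```

That 10.5-minute default is why the namenode seemed so slow to notice the dead node here.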




Now I move one blk off this sole surviving datanode, making the file corrupt; I want to test whether it recovers after I start another datanode.

hadoop_admin@h4:/hadoop_run/data/current$ mv blk_4347039731705448097_1011* ~/Documents/


Then, to avoid waiting ~8 minutes for the datanode's next block report, I manually set dfs.blockreport.intervalMsec to 30000 on h4, stopped its datanode, and started it again. fsck now detects the corruption:


hadoop_admin@h1:~$ hadoop fsck /user/hadoop_admin/in/bigfile -files -blocks -locations




/user/hadoop_admin/in/bigfile/USWX201304 597639882 bytes, 9 block(s):  Under replicated blk_-4541681964616523124_1011. Target Replicas is 3 but found 1 replica(s).



/user/hadoop_admin/in/bigfile/USWX201304: CORRUPT block blk_4347039731705448097

Under replicated blk_-4962604929782655181_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_2055128947154747381_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_-2280734543774885595_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_6802612391555920071_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_1890624110923458654_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_226084029380457017_1011. Target Replicas is 3 but found 1 replica(s).


Under replicated blk_-1230960090596945446_1011. Target Replicas is 3 but found 1 replica(s).


MISSING 1 blocks of total size 67108864 B

0. blk_-4541681964616523124_1011 len=67108864 repl=1 [192.168.221.144:50010]

1. blk_4347039731705448097_1011 len=67108864 MISSING!

2. blk_-4962604929782655181_1011 len=67108864 repl=1 [192.168.221.144:50010]

3. blk_2055128947154747381_1011 len=67108864 repl=1 [192.168.221.144:50010]

4. blk_-2280734543774885595_1011 len=67108864 repl=1 [192.168.221.144:50010]

5. blk_6802612391555920071_1011 len=67108864 repl=1 [192.168.221.144:50010]

6. blk_1890624110923458654_1011 len=67108864 repl=1 [192.168.221.144:50010]

7. blk_226084029380457017_1011 len=67108864 repl=1 [192.168.221.144:50010]

8. blk_-1230960090596945446_1011 len=60768970 repl=1 [192.168.221.144:50010]



Status: CORRUPT

Total size:    597639882 B

Total dirs:    0

Total files:   1

Total blocks (validated):      9 (avg. block size 66404431 B)

   ********************************

   CORRUPT FILES:        1

   MISSING BLOCKS:       1

   MISSING SIZE:         67108864 B

   CORRUPT BLOCKS:       1

   ********************************

Minimally replicated blocks:   8 (88.888885 %)

Over-replicated blocks:        0 (0.0 %)

Under-replicated blocks:       8 (88.888885 %)

Mis-replicated blocks:         0 (0.0 %)

Default replication factor:    3

Average block replication:     0.8888889

Corrupt blocks:                1

Missing replicas:              16 (200.0 %)

Number of data-nodes:          1

Number of racks:               1





The filesystem under path '/user/hadoop_admin/in/bigfile' is CORRUPT



I now start one datanode, h1s (131), and very quickly, within 30 seconds, hadoop revived the file on the spot at full HP: every blk now has two replicas again:

hadoop_admin@h1:~$ hadoop fsck /user/hadoop_admin/in/bigfile -files -blocks -locations


/user/hadoop_admin/in/bigfile/USWX201304 597639882 bytes, 9 block(s):  Under replicated blk_-4541681964616523124_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_4347039731705448097_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-4962604929782655181_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_2055128947154747381_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-2280734543774885595_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_6802612391555920071_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_1890624110923458654_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_226084029380457017_1011. Target Replicas is 3 but found 2 replica(s).


Under replicated blk_-1230960090596945446_1011. Target Replicas is 3 but found 2 replica(s).


0. blk_-4541681964616523124_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


1. blk_4347039731705448097_1011 len=67108864 repl=2 [192.168.221.131:50010, 192.168.221.144:50010]


2. blk_-4962604929782655181_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


3. blk_2055128947154747381_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


4. blk_-2280734543774885595_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


5. blk_6802612391555920071_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


6. blk_1890624110923458654_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


7. blk_226084029380457017_1011 len=67108864 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]


8. blk_-1230960090596945446_1011 len=60768970 repl=2 [192.168.221.144:50010, 192.168.221.131:50010]



The missing block was successfully copied back from 131 to 144 (h4).
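The re-replication behaviour seen throughout this experiment boils down to bookkeeping on the namenode: for each block, compare the live replica count against the target and schedule copies from any surviving holder. A heavily simplified sketch (the function and node names are illustrative, not namenode code):

```python
# Simplified sketch of the namenode's re-replication decision: any block with
# fewer live replicas than its target gets copy work scheduled from a survivor.

def plan_replication(block_locations, live_nodes, target=3):
    """Return {block_id: (source_node, copies_needed)} for under-replicated blocks."""
    plan = {}
    for block_id, nodes in block_locations.items():
        survivors = [n for n in nodes if n in live_nodes]
        if survivors and len(survivors) < target:
            # Copy from any live holder to as many other live nodes as exist.
            spare_targets = len(live_nodes - set(survivors))
            needed = min(target - len(survivors), spare_targets)
            if needed:
                plan[block_id] = (survivors[0], needed)
    return plan

# One survivor (h4) plus a freshly started node (h1s), as in this experiment.
blocks = {"blk_a": ["h4"], "blk_b": ["h4"], "blk_missing": []}
plan = plan_replication(blocks, live_nodes={"h4", "h1s"}, target=3)
print(plan)  # {'blk_a': ('h4', 1), 'blk_b': ('h4', 1)}
```

Note that blk_missing gets no plan entry: with no live replica anywhere there is nothing to copy from, which is exactly why deleting the last copy of a block made the file CORRUPT until a node holding it came back.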



Conclusion: Hadoop's fault tolerance is seriously tough. I'm now a firm believer!



One more thing the pasted output does not show: the h4 datanode still contained quite a few "badLinkBlock" files left over from earlier re-formats, and when the same file was put again, hadoop deleted all of those stale leftover block files. So it is capable of cleaning up invalid bad blocks.
