HDFS Architecture Notes

 1. Moving Computation is Cheaper than Moving Data

  A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
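
  HDFS exposes this through its Java client API: an application can ask where a file's blocks live and schedule its work on or near those hosts. Below is a minimal sketch of that lookup, assuming a reachable cluster whose settings are picked up from core-site.xml/hdfs-site.xml; the file path is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/input/part-00000");   // hypothetical file

        // Ask the NameNode which DataNodes host each block, so a scheduler
        // can run tasks on (or near) those hosts instead of shipping the data.
        FileStatus status = fs.getFileStatus(path);
        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("offset " + block.getOffset() + " -> "
                    + String.join(",", block.getHosts()));
        }
    }
}
```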

 2. Safemode

  On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.
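
  The thresholds behind this are configurable (in recent releases, dfs.namenode.safemode.threshold-pct for the required percentage and dfs.namenode.safemode.extension for the extra 30-second wait mentioned above). A client can also query the current state; the sketch below assumes Hadoop 2.x-style client classes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SafemodeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // SAFEMODE_GET only queries the state; it does not change it.
            boolean inSafemode = dfs.setSafeMode(SafeModeAction.SAFEMODE_GET);
            System.out.println("NameNode in safemode: " + inSafemode);
        }
    }
}
```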

 3. The Persistence of File System Metadata

  The NameNode stores the entire file system namespace in a file called the FsImage and records every change to the namespace in a transaction log called the EditLog. On startup it reads the FsImage, applies the transactions from the EditLog, writes the merged result out as a new FsImage, and truncates the old EditLog (a checkpoint).
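
  As a rough mental model only (a conceptual sketch, not NameNode source code; the class and method names are invented), the FsImage/EditLog pair behaves like a write-ahead log combined with periodic checkpoints:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NamespaceStore {
    private final Map<String, String> inMemory = new HashMap<>();  // live namespace
    private final List<String[]> editLog = new ArrayList<>();      // durable transaction log ("EditLog")
    private Map<String, String> fsImage = new HashMap<>();         // last durable snapshot ("FsImage")

    // Each change is recorded in the edit log before the in-memory state is updated.
    void applyMutation(String path, String metadata) {
        editLog.add(new String[] {path, metadata});
        inMemory.put(path, metadata);
    }

    // Checkpoint: write the current namespace out as a new image, then truncate the log.
    void checkpoint() {
        fsImage = new HashMap<>(inMemory);
        editLog.clear();
    }

    // Restart: load the last image and replay the log to rebuild the namespace.
    void recover() {
        inMemory.clear();
        inMemory.putAll(fsImage);
        for (String[] edit : editLog) {
            inMemory.put(edit[0], edit[1]);
        }
    }
}
```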

 4. Staging

  The HDFS client caches the file data into a temporary local file, and application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode, which inserts the file name into the file system hierarchy, allocates a data block for it, and responds with the identity of the DataNode and the destination data block.
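
  From the application's point of view this staging is invisible: the program simply writes to an output stream, and the HDFS client library handles the buffering and the NameNode interaction. A minimal sketch, assuming a configured cluster and using a hypothetical path:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/staging-demo.txt");   // hypothetical path

        // The application just writes to the stream; the HDFS client library
        // buffers the data and contacts the NameNode as blocks fill up.
        try (FSDataOutputStream out = fs.create(path, true /* overwrite */)) {
            out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```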

 5. Replication Pipelining

  When a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode; this list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository, and transfers that portion to the second DataNode in the list. The second DataNode, in turn, starts receiving each portion of the data block, writes that portion to its repository, and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. A DataNode can thus be receiving data from the previous node in the pipeline while forwarding data to the next one, so the data is pipelined from one DataNode to the next.
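
  The forwarding step can be pictured as the loop below. This is a conceptual sketch of the relay only, not DataNode source code: each node persists a small portion locally and immediately passes it downstream.

```java
import java.io.InputStream;
import java.io.OutputStream;

// Conceptual sketch: relay data from the upstream node, storing each small
// portion locally while forwarding it to the next DataNode in the pipeline.
class PipelineRelay {
    static void relay(InputStream fromUpstream, OutputStream toLocalStorage,
                      OutputStream toDownstream) throws Exception {
        byte[] portion = new byte[4 * 1024];     // small portions, as in the 4 KB units above
        int n;
        while ((n = fromUpstream.read(portion)) != -1) {
            toLocalStorage.write(portion, 0, n); // persist this portion locally
            toDownstream.write(portion, 0, n);   // and forward it to the next DataNode
        }
        toDownstream.flush();
    }
}
```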

 6. File Deletes and Undeletes

  When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.
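
  From the Java API, FileSystem.delete() removes a path outright; the trash behavior is applied by clients such as the hdfs dfs -rm shell. An application that wants the same safety net can move a path into the trash explicitly, as in the sketch below (assuming a Hadoop 2.x-style client and that the trash is enabled via fs.trash.interval; the path is hypothetical).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashDeleteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();    // fs.trash.interval > 0 enables the trash
        FileSystem fs = FileSystem.get(conf);
        Path victim = new Path("/tmp/old-data.txt"); // hypothetical path

        // Move the file into the trash instead of deleting it outright, so it
        // can still be restored until its trash lifetime expires.
        boolean moved = Trash.moveToAppropriateTrash(fs, victim, conf);
        System.out.println(moved ? "moved to trash" : "trash disabled or move failed");

        // Note: calling fs.delete(victim, true) directly would bypass the trash.
    }
}
```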

  Reference: http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
