HDFS Architecture

The core of Hadoop is distributed storage (HDFS) plus a resource manager (YARN) for the computing engines built on top of them.

Master/Slave: like many distributed systems, HDFS follows the master/slave (M/S) pattern.

Name Node

The NameNode (NN) is the master and the single active node. It holds and manages the namespace of files and directories, the so-called metadata, keeps the block locations, and allocates DataNodes for new blocks.

Metadata is much like that of a Linux file system, including file name, size, owner, group, and permissions, plus HDFS-specific elements such as block IDs, replication factor, and block size. Metadata lives both in memory and on disk, for fast access and for restore respectively. The NN also holds the block locations, but it does not persist them; they actually come from the DataNodes. When a client accesses HDFS, it talks to the NN first, and the NN checks permissions and file existence, just like the equivalent operation on a non-distributed Linux system.
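
To make the metadata concrete, here is a minimal Java sketch that asks the NN for a file's status; the cluster address hdfs://namenode:8020 and the path are placeholders for illustration. The fields it prints are exactly the ones listed above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class ShowMetadata {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "hdfs://namenode:8020" is a placeholder; use your cluster's NN address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        FileStatus st = fs.getFileStatus(new Path("/user/demo/file.txt"));
        // Linux-like metadata...
        System.out.println("name:  " + st.getPath());
        System.out.println("size:  " + st.getLen());
        System.out.println("owner: " + st.getOwner() + ":" + st.getGroup());
        System.out.println("perms: " + st.getPermission());
        // ...plus HDFS-specific fields.
        System.out.println("replication: " + st.getReplication());
        System.out.println("block size:  " + st.getBlockSize());
        fs.close();
    }
}
```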

Data Node

The DataNode (DN) is the node that actually stores file data, in the form of blocks.

Once a DN starts up, it sends a block report to the NN ('I (DN-1) have blocks #1, #2, ...') and keeps sending it periodically, so the NN can build a mapping of which block lives on which DNs and serve file access requests. A DN also sends a heartbeat every 3 seconds to report its status along with storage statistics (total, free, and used space, plus data transfers currently in progress), which the NN uses for block allocation and load balancing. A DN is considered dead if the NN does not receive its heartbeat within 10 minutes. What about replication? It exists for fault tolerance: if a block is lost or a DN goes down, other nodes hold copies of the same blocks. The system also automatically re-replicates blocks whenever the number of copies falls below the configured replication level.
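
The NN's view of each DN, built from these heartbeats, can be queried from the client side. A hedged sketch using DistributedFileSystem.getDataNodeStats(), with the NN address again a placeholder; it prints the per-DN capacity numbers the NN uses for allocation and balancing:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import java.net.URI;

public class DnReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // One entry per DN, populated from heartbeat/storage reports.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.printf("%s total=%d used=%d free=%d%n",
                        dn.getHostName(), dn.getCapacity(),
                        dn.getDfsUsed(), dn.getRemaining());
            }
        }
        fs.close();
    }
}
```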

A rack is nothing but an enclosure of machines with a dedicated power supply and network switch.

HDFS Write

This should be easy to follow if you know the HDFS architecture. The client uses the HDFS client library to manipulate files, and the communication is done through RPC calls. As mentioned before, the NN checks the metadata to see whether the client is allowed to write the file. The client then asks the NN to allocate a block, and the NN returns a list of DNs (the NN knows the status of each DN) for the client to write the first 128 MB block, which is accumulated in the client library's managed buffer. This follows a leader/follower pattern: the client writes to the first DN in the list, which forwards the data to the next, forming a pipeline. Is the pipelined write synchronous? The client sends data in small packets (64 KB by default) and the downstream DNs send ACKs back through the pipeline, so the write is effectively synchronous. Each DN also informs the NN once it has received a block, so the NN builds up the block locations during the write as well. The client then continues with the next 128 MB block, looping until it reaches the end of the file; finally the client calls close() to indicate that the operation is complete.
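
All of the above is hidden behind a few client calls. A minimal write sketch, assuming the same placeholder NN address and a hypothetical path; block allocation, the DN pipeline, and packet ACKs all happen inside the output stream:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;
import java.nio.charset.StandardCharsets;

public class WriteDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        // create() checks permissions with the NN and allocates the first block.
        try (FSDataOutputStream out = fs.create(new Path("/user/demo/out.txt"))) {
            // Data is buffered client-side, cut into packets, and pushed
            // down the DN pipeline; ACKs flow back before close() returns.
            out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```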

HDFS Read

As mentioned before, you need the HDFS Java client library to perform the read operation: open the file, then read the stream. The client calls the NN through RPC to get the block IDs and block locations; the NN's metadata holds the block IDs for each file, and the block map holds the locations. Both live in memory, so the lookup is fast. The actual read happens between the client and the DNs. If the client runs on a DN, like a map task, the NN returns the block locations sorted by network distance, which reduces cross-rack network delay; if the DN the map task is running on holds the blocks it needs, it reads them directly from local disk. If a read fails, the client switches to another node in the location list. Because the data transfer happens between clients (there may be many concurrent reads) and DataNodes, while the NameNode only provides block locations, the load is distributed across the cluster. That is why HDFS is scalable. It is not well suited to storing a very large number of small files, though, since the NameNode may respond poorly when managing too much metadata.
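
A matching read sketch, with the same placeholder addresses; getFileBlockLocations() exposes the locality-sorted list the NN returns, and the bytes themselves stream from the DNs, not the NN:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import java.net.URI;

public class ReadDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        Path p = new Path("/user/demo/out.txt");
        FileStatus st = fs.getFileStatus(p);
        // Ask the NN which DNs host each block of the file.
        for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
            System.out.println("offset " + loc.getOffset()
                    + " hosts " + String.join(",", loc.getHosts()));
        }
        // The actual data transfer is client <-> DN.
        try (FSDataInputStream in = fs.open(p)) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
        fs.close();
    }
}
```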

When a DN is down

As we know, HDFS is reliable. If a DataNode goes down (the NN does not receive its heartbeat within 10 minutes), other DataNodes still hold copies of the blocks from the failed node, depending on the replication level, so the system remains available. In this situation the cluster is under-replicated, however, and the NN schedules re-replication: it directs DNs holding surviving replicas to copy the affected blocks to other available DataNodes until the replication level is met (this is internal NN work, not a MapReduce job). When those DataNodes send their next block reports to the NN, the block locations are updated accordingly.
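
Re-replication is automatic, but a file's replication state can be observed from the client side. A rough sketch that compares the target replication factor with the number of hosts currently reported for each block; note it reads the NN's possibly-stale block map, so treat it as a heuristic, and the path is again a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import java.net.URI;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        Path p = new Path("/user/demo/out.txt");
        FileStatus st = fs.getFileStatus(p);
        short target = st.getReplication();
        for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
            int live = loc.getHosts().length; // DNs currently holding this block
            if (live < target) {
                System.out.println("under-replicated block at offset "
                        + loc.getOffset() + ": " + live + "/" + target);
            }
        }
        fs.close();
    }
}
```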

Trade-off between reliability and performance/network bandwidth

If you want more reliability, for example by raising the replication level to a large value (say 5), the write operation becomes very expensive: it involves not only writing the data to the disks of multiple DataNodes but also transferring the data across them through the pipeline, so write performance drops. Lowering the replication level has the opposite effect: writes get cheaper, but you tolerate fewer failures.
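
The trade-off can even be chosen per file. A sketch using the create() overload that takes an explicit replication factor and block size (the values here are illustrative): with a factor of 5, every packet must traverse a five-DN pipeline before it is acknowledged.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class ExpensiveWrite {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        // 5 replicas: more fault tolerance, slower writes.
        try (FSDataOutputStream out = fs.create(
                new Path("/user/demo/critical.dat"),
                true,                   // overwrite
                4096,                   // io buffer size
                (short) 5,              // replication factor
                128L * 1024 * 1024)) {  // block size: 128 MB
            out.write(new byte[]{1, 2, 3});
        }
        fs.close();
    }
}
```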
