A file in HDFS is made up of blocks; a file contains one or more blocks. When a file is created, a block is created for it, and then, according to the configured replication factor (3 by default), 3 Datanodes are requested to store that block.
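As an aside (a minimal sketch, not from the Hadoop sources quoted below), the replication factor can also be set per file by the client at creation time; if it is not given, the client-side dfs.replication setting is used:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateReplicatedFile {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS in the loaded configuration points at the target cluster.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Create the file with an explicit replication factor of 3.
    FSDataOutputStream out = fs.create(new Path("/tmp/test.sql"), (short) 3);
    out.writeBytes("select 1;\n");
    out.close();
    fs.close();
  }
}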

The hdfs fsck command shows the block, Datanode, and rack information for a given file, for example:

hdfs fsck /tmp/test.sql -files -blocks -locations -racks
Connecting to namenode via http://name_node:50070
FSCK started by hadoop (auth:SIMPLE) from /client for path /tmp/test.sql at Thu Dec 13 15:44:12 CST 2018
/tmp/test.sql 16 bytes, 1 block(s): OK
0. BP-436366437-name_node-1493982655699:blk_1449692331_378721485 len=16 repl=3 [/DEFAULT/server111:50010, /DEFAULT/server121:50010, /DEFAULT/server43:50010]

Status: HEALTHY
Total size: 16 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 16 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 193
Number of racks: 1
FSCK ended at Thu Dec 13 15:44:12 CST 2018 in 1 milliseconds

The filesystem under path '/tmp/test.sql' is HEALTHY
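The same block-to-datanode mapping can also be read programmatically through the public FileSystem API. The following is a minimal sketch (not part of the fsck output above; it reuses the same example path):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/tmp/test.sql"));
    // One BlockLocation per block, covering the whole file.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      // getHosts(): datanodes holding a replica; getTopologyPaths(): rack-qualified locations.
      System.out.println("offset=" + block.getOffset()
          + " hosts=" + String.join(",", block.getHosts())
          + " racks=" + String.join(",", block.getTopologyPaths()));
    }
    fs.close();
  }
}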

How are those 3 Datanodes chosen? There is an order of priority:

1. The local rack (relative to the HDFS client)

2. A remote rack (relative to the HDFS client)

3. Another rack

4. Fully random

How many Datanodes may be chosen from each rack (maxNodesPerRack) is determined by a formula; see the code:

org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer

    private int findNewDatanode(final DatanodeInfo[] original
        ) throws IOException {
      if (nodes.length != original.length + 1) {
        throw new IOException(
            new StringBuilder()
            .append("Failed to replace a bad datanode on the existing pipeline ")
            .append("due to no more good datanodes being available to try. ")
            .append("(Nodes: current=").append(Arrays.asList(nodes))
            .append(", original=").append(Arrays.asList(original)).append("). ")
            .append("The current failed datanode replacement policy is ")
            .append(dfsClient.dtpReplaceDatanodeOnFailure).append(", and ")
            .append("a client may configure this via '")
            .append(DFSConfigKeys.DFS_CLIENT_WRITE_REPLACE_DATANODE_ON_FAILURE_POLICY_KEY)
            .append("' in its configuration.")
            .toString());
      }

Note: when no new datanode can be found, an exception is thrown; the error looks like this:

Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[server82:50010], original=[server.82:50010]).
The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
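If this error occurs, the policy named in the message can be tuned on the client side. A minimal sketch, assuming the standard client configuration keys (the same keys can also be set in hdfs-site.xml; the values here are only illustrative, not a recommendation):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClientReplacePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep datanode replacement on pipeline failure enabled, but use DEFAULT
    // instead of ALWAYS, so replacement is only required under certain conditions.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
    FileSystem fs = FileSystem.get(conf);
    // ... write files through this FileSystem instance as usual ...
    fs.close();
  }
}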

    private void addDatanode2ExistingPipeline() throws IOException {
...
      final DatanodeInfo[] original = nodes;
      final LocatedBlock lb = dfsClient.namenode.getAdditionalDatanode(
          src, fileId, block, nodes, storageIDs,
          failed.toArray(new DatanodeInfo[failed.size()]),
          1, dfsClient.clientName);
      setPipeline(lb);
      //find the new datanode
      final int d = findNewDatanode(original);

Note: getAdditionalDatanode is called to obtain 1 new datanode; a long call chain is omitted here.

org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

  private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
                                    Node writer,
                                    List<DatanodeStorageInfo> chosenStorage,
                                    boolean returnChosenNodes,
                                    Set<Node> excludedNodes,
                                    long blocksize,
                                    final BlockStoragePolicy storagePolicy) {
...
    int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
    numOfReplicas = result[0];
    int maxNodesPerRack = result[1];
...
    final Node localNode = chooseTarget(numOfReplicas, writer, excludedNodes,
        blocksize, maxNodesPerRack, results, avoidStaleNodes, storagePolicy,
        EnumSet.noneOf(StorageType.class), results.isEmpty());

Note: maxNodesPerRack is the maximum number of datanodes that may be allocated on any single rack.

  private Node chooseTarget(int numOfReplicas,
                            Node writer,
                            final Set<Node> excludedNodes,
                            final long blocksize,
                            final int maxNodesPerRack,
                            final List<DatanodeStorageInfo> results,
                            final boolean avoidStaleNodes,
                            final BlockStoragePolicy storagePolicy,
                            final EnumSet<StorageType> unavailableStorages,
                            final boolean newBlock) {
...
      if (numOfResults <= 1) {
        chooseRemoteRack(1, dn0, excludedNodes, blocksize, maxNodesPerRack,
            results, avoidStaleNodes, storageTypes);
        if (--numOfReplicas == 0) {
          return writer;
        }
      }

Note: here the policy tries to obtain a new datanode on a remote rack (i.e. a rack different from the datanodes already chosen).

  protected void chooseRemoteRack(int numOfReplicas,
                                DatanodeDescriptor localMachine,
                                Set<Node> excludedNodes,
                                long blocksize,
                                int maxReplicasPerRack,
                                List<DatanodeStorageInfo> results,
                                boolean avoidStaleNodes,
                                EnumMap<StorageType, Integer> storageTypes)
                                    throws NotEnoughReplicasException {
...
      chooseRandom(numOfReplicas, "~" + localMachine.getNetworkLocation(),
          excludedNodes, blocksize, maxReplicasPerRack, results,
          avoidStaleNodes, storageTypes);

Note: here one datanode is chosen at random from all candidate datanodes.

  protected DatanodeStorageInfo chooseRandom(int numOfReplicas,
                            String scope,
                            Set<Node> excludedNodes,
                            long blocksize,
                            int maxNodesPerRack,
                            List<DatanodeStorageInfo> results,
                            boolean avoidStaleNodes,
                            EnumMap<StorageType, Integer> storageTypes)
                            throws NotEnoughReplicasException {
...
    int numOfAvailableNodes = clusterMap.countNumOfAvailableNodes(
        scope, excludedNodes);
...
    if (numOfReplicas>0) {
      String detail = enableDebugLogging;
      if (LOG.isDebugEnabled()) {
        if (badTarget && builder != null) {
          detail = builder.toString();
          builder.setLength(0);
        } else {
          detail = "";
        }
      }
      throw new NotEnoughReplicasException(detail);
    }

Note: if for some reason (e.g. nodes with full disks or nodes being decommissioned) numOfAvailableNodes comes out as 0, a NotEnoughReplicasException is thrown.

The calculation of maxNodesPerRack is as follows:

org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

  /**
   * Calculate the maximum number of replicas to allocate per rack. It also
   * limits the total number of replicas to the total number of nodes in the
   * cluster. Caller should adjust the replica count to the return value.
   *
   * @param numOfChosen The number of already chosen nodes.
   * @param numOfReplicas The number of additional nodes to allocate.
   * @return integer array. Index 0: The number of nodes allowed to allocate
   *         in addition to already chosen nodes.
   *         Index 1: The maximum allowed number of nodes per rack. This
   *         is independent of the number of chosen nodes, as it is calculated
   *         using the target number of replicas.
   */
  private int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
    int clusterSize = clusterMap.getNumOfLeaves();
    int totalNumOfReplicas = numOfChosen + numOfReplicas;
    if (totalNumOfReplicas > clusterSize) {
      numOfReplicas -= (totalNumOfReplicas-clusterSize);
      totalNumOfReplicas = clusterSize;
    }
    // No calculation needed when there is only one rack or picking one node.
    int numOfRacks = clusterMap.getNumOfRacks();
    if (numOfRacks == 1 || totalNumOfReplicas <= 1) {
      return new int[] {numOfReplicas, totalNumOfReplicas};
    }
    int maxNodesPerRack = (totalNumOfReplicas-1)/numOfRacks + 2;
    // At this point, there are more than one racks and more than one replicas
    // to store. Avoid all replicas being in the same rack.
    //
    // maxNodesPerRack has the following properties at this stage.
    //   1) maxNodesPerRack >= 2
    //   2) (maxNodesPerRack-1) * numOfRacks > totalNumOfReplicas
    //          when numOfRacks > 1
    //
    // Thus, the following adjustment will still result in a value that forces
    // multi-rack allocation and gives enough number of total nodes.
    if (maxNodesPerRack == totalNumOfReplicas) {
      maxNodesPerRack--;
    }
    return new int[] {numOfReplicas, maxNodesPerRack};
  }

Note: the key lines are

    int maxNodesPerRack = (totalNumOfReplicas-1)/numOfRacks + 2;
    if (maxNodesPerRack == totalNumOfReplicas) {
      maxNodesPerRack--;
    }
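As a worked example, the following standalone sketch reproduces just the arithmetic above for the common case (more than one rack, cluster larger than the replica count); it is not the Hadoop class itself:

public class MaxNodesPerRackExample {
  // Mirrors the two key lines of getMaxNodesPerRack shown above.
  static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
    int max = (totalNumOfReplicas - 1) / numOfRacks + 2;
    if (max == totalNumOfReplicas) {
      max--;
    }
    return max;
  }

  public static void main(String[] args) {
    System.out.println(maxNodesPerRack(3, 2)); // (3-1)/2+2 = 3, equals the 3 replicas, so decremented to 2
    System.out.println(maxNodesPerRack(3, 3)); // (3-1)/3+2 = 2
    System.out.println(maxNodesPerRack(5, 2)); // (5-1)/2+2 = 4
  }
}

So with the default of 3 replicas and at least 2 racks, at most 2 replicas of a block end up on the same rack.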
