As part of HDP 2.0 Beta, YARN takes the resource management capabilities that were in MapReduce and packages them so they can be used by new engines. This also streamlines MapReduce to do what it does best: process data. With YARN, you can now run multiple applications in Hadoop, all sharing common resource management.

In this blog post we’ll walk through how to plan for and configure processing capacity in your enterprise HDP 2.0 cluster deployment. This will cover YARN and MapReduce 2. We’ll use an example physical cluster of slave nodes, each with 48 GB of RAM, 12 disks, and 2 hex-core CPUs (12 cores total).

YARN takes into account all the available compute resources on each machine in the cluster. Based on the available resources, YARN will negotiate resource requests from applications (such as MapReduce) running in the cluster. YARN then provides processing capacity to each application by allocating Containers. A Container is the basic unit of processing capacity in YARN, and is an encapsulation of resource elements (memory, CPU, etc.).

Configuring YARN

In a Hadoop cluster, it’s vital to balance the usage of RAM, CPU and disk so that processing is not constrained by any one of these cluster resources. As a general recommendation, we’ve found that allowing for 1-2 Containers per disk and per core gives the best balance for cluster utilization. So for our example node with 12 disks and 12 cores, we will allow a maximum of 20 Containers to be allocated to each node.
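
As a quick sanity check of that sizing, here is a minimal sketch of the rule of thumb in Python. The function name and the 1.7 multiplier are illustrative assumptions (any value in the 1-2 range fits the heuristic), not an official HDP formula:

 # Rough sizing sketch for the "1-2 Containers per disk and per core" rule of thumb.
 # The 1.7 multiplier is an illustrative choice within that 1-2 range, not an HDP constant.
 def max_containers(disks, cores, containers_per_spindle_or_core=1.7):
     return int(min(disks, cores) * containers_per_spindle_or_core)

 print(max_containers(disks=12, cores=12))  # -> 20 for the example node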

Each machine in our cluster has 48 GB of RAM. Some of this RAM should be reserved for the Operating System. On each node, we’ll assign 40 GB of RAM for YARN to use and keep 8 GB for the Operating System. The following property sets the maximum memory YARN can utilize on the node:

In yarn-site.xml:

 <property>
   <name>yarn.nodemanager.resource.memory-mb</name>
   <value>40960</value>
 </property>

The next step is to give YARN guidance on how to break up the total resources available into Containers. You do this by specifying the minimum unit of RAM to allocate for a Container. We want to allow a maximum of 20 Containers, and thus need 40 GB (total RAM) / 20 (Containers) = 2 GB minimum per Container:

In yarn-site.xml:

 <property>
   <name>yarn.scheduler.minimum-allocation-mb</name>
   <value>2048</value>
 </property>

YARN will allocate Containers with RAM amounts no smaller than yarn.scheduler.minimum-allocation-mb.
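For example, with this setting a request for 3 GB of RAM would typically be rounded up to a 4 GB Container (the next multiple of the 2 GB minimum) by the default Capacity Scheduler.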

Configuring MapReduce 2

MapReduce 2 runs on top of YARN and utilizes YARN Containers to schedule and execute its map and reduce tasks.

When configuring MapReduce 2 resource utilization on YARN, there are three aspects to consider:

  1. The physical RAM limit for each Map and Reduce task
  2. The JVM heap size limit for each task
  3. The amount of virtual memory each task will get

You can define the maximum amount of memory each Map and Reduce task will take. Since each Map and each Reduce task runs in a separate Container, these maximum memory settings should be at least as large as the YARN minimum Container allocation.

For our example cluster, the minimum RAM for a Container (yarn.scheduler.minimum-allocation-mb) is 2 GB. We’ll thus assign 4 GB for Map task Containers and 8 GB for Reduce task Containers.

In mapred-site.xml:

 <property>
   <name>mapreduce.map.memory.mb</name>
   <value>4096</value>
 </property>
 <property>
   <name>mapreduce.reduce.memory.mb</name>
   <value>8192</value>
 </property>

Each Container will run JVMs for the Map and Reduce tasks. The JVM heap size should be set lower than the Map and Reduce memory limits defined above, so that the JVMs stay within the bounds of the Container memory allocated by YARN.

In mapred-site.xml:

 <property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx3072m</value>
 </property>
 <property>
   <name>mapreduce.reduce.java.opts</name>
   <value>-Xmx6144m</value>
 </property>
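
The 3 GB and 6 GB heaps above are roughly 75% of the corresponding Container sizes. As an illustrative sketch (the helper name and 0.75 fraction are assumptions that simply match the example values, not a Hadoop requirement), the -Xmx setting can be derived from the Container size like this:

 # Illustrative helper: derive -Xmx from the Container size, leaving headroom for
 # non-heap JVM memory. The 0.75 fraction matches the example above (3072/4096, 6144/8192).
 def heap_opts(container_mb, heap_fraction=0.75):
     return "-Xmx{}m".format(int(container_mb * heap_fraction))

 print(heap_opts(4096))  # -Xmx3072m for Map tasks
 print(heap_opts(8192))  # -Xmx6144m for Reduce tasks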

The above settings configure the upper limit of the physical RAM that Map and Reduce tasks will use. The upper limit on virtual memory (physical + paged memory) for each Map and Reduce task is determined by the ratio of virtual to physical memory that each YARN Container is allowed. This ratio is set by the following configuration, and the default value is 2.1:

In yarn-site.xml:

 <property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>2.1</value>
 </property>

Thus, with the above settings on our example cluster, each Map task will get the following memory allocations:

  • Total physical RAM allocated = 4 GB
  • JVM heap space upper limit within the Map task Container = 3 GB
  • Virtual memory upper limit = 4 * 2.1 = 8.4 GB
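
Correspondingly, each Reduce task will get 8 GB of physical RAM, a 6 GB JVM heap upper limit, and a virtual memory upper limit of 8 * 2.1 = 16.8 GB.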

With YARN and MapReduce 2, there are no longer pre-configured static slots for Map and Reduce tasks. The entire cluster is available for dynamic resource allocation of Maps and Reduces as needed by the job. In our example cluster, with the above configurations, YARN will be able to allocate up to 10 mappers (40/4) or 5 reducers (40/8) on each node, or any combination in between.
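
To make those combinations concrete, the short sketch below (illustrative, not part of the original configuration) enumerates how one node’s 40 GB of YARN memory can be split between 4 GB Map Containers and 8 GB Reduce Containers:

 # Enumerate feasible mixes of Map (4 GB) and Reduce (8 GB) Containers on a node
 # with 40 GB of YARN memory, using the settings configured above.
 NODE_MB, MAP_MB, REDUCE_MB = 40960, 4096, 8192

 for reducers in range(NODE_MB // REDUCE_MB + 1):
     mappers = (NODE_MB - reducers * REDUCE_MB) // MAP_MB
     print("{} reducers + {} mappers".format(reducers, mappers))
 # -> 0+10, 1+8, 2+6, 3+4, 4+2, 5+0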
