1. What is the recommended value for "yarn.nodemanager.resource.local-dirs"?

We only have one value (directory) configured for the above property, which has a size of 200GB.

Our Hive jobs' map/reduce intermediate output fills this directory up, and YARN places the node on the blocklist. Moving to the Tez engine and/or increasing the quota may work around this, but we'd like to know the recommended value.

Best answer

Answer by Sourygna Luangsay · 2015-10-28 08:04

If you use the same partitions for YARN intermediate data as for the HDFS blocks, then you might also consider setting the dfs.datanode.du.reserved property, which reserves some space on those partitions for non-HDFS use (such as intermediate YARN data).

One baseline recommendation I saw in my first Hadoop training a long time ago was to dedicate 25% of the "data disks" to that kind of intermediate data.
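
A hedged sketch of that idea: dfs.datanode.du.reserved takes a per-volume byte count in hdfs-site.xml, so reserving roughly 25% of a 200GB disk (about 50GB, a figure assumed here purely for illustration) would look like this:

<property>
<name>dfs.datanode.du.reserved</name>
<!-- assumption: reserve 50 GB per volume for non-HDFS use such as YARN intermediate data -->
<value>53687091200</value>
</property>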

I guess the optimal answer should consider the maximum amount of intermediate data you can get at the same time (when launching a job, do you use all of the HDFS data as input?) and dedicate the space for yarn.nodemanager.resource.local-dirs accordingly.

I would also recommend turning on the property mapreduce.map.output.compress in order to reduce the size of the intermediate data.
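
A minimal sketch of that setting, assuming mapred-site.xml and the Snappy codec (the codec choice is an assumption; any codec installed on the cluster works):

<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<!-- assumes the Snappy native libraries are available on every node -->
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>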

 

Answer by Jean-Philippe Player · 2015-10-27 20:58

You would assign one folder to each of the DataNode disks, closely mapping dfs.datanode.data.dir. On a 12-disk system you would have 12 YARN local-dir locations, as sketched below.
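
A hedged sketch of that mapping with three hypothetical disks (dfs.datanode.data.dir lives in hdfs-site.xml, yarn.nodemanager.local-dirs in yarn-site.xml; extend the pattern to all twelve disks):

<property>
<name>dfs.datanode.data.dir</name>
<value>/data1/hdfs,/data2/hdfs,/data3/hdfs</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<!-- one YARN local directory per data disk -->
<value>/data1/yarn/local,/data2/yarn/local,/data3/yarn/local</value>
</property>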

2. Though DataFlow can be used with an out-of-the-box Hadoop installation, there are a couple of configuration properties that may improve DataFlow/Hadoop performance.

Resolution

Using the O/S file system (e.g. /tmp or /var) can be problematic, especially if applications log a lot of information or require large local files, so there are two properties to overcome this bottleneck.
The first is yarn.nodemanager.local-dirs. This setting specifies the directories to use as base directories for the containers run within YARN.

For each application and container created in YARN, a set of directories will be created underneath these local directories. These are then cleaned up when the application completes.
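
Concretely, each local directory typically ends up holding usercache/<user>/appcache/<application-id>/<container-id> for per-container working data, a top-level filecache for the public resource cache, and nmPrivate for the NodeManager's internal files (see the resource-localization deep dive linked under question 4 below).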
 
Here’s the setting from the yarn-site.xml file on one of our clusters. Note we have eight data disks per node on these clusters and create a directory for YARN on each data filesystem.

<property>
<name>yarn.nodemanager.local-dirs</name>
<value>
/hadoop/hdfs/data1/hadoop/yarn/local,
/hadoop/hdfs/data2/hadoop/yarn/local,
/hadoop/hdfs/data3/hadoop/yarn/local,
/hadoop/hdfs/data4/hadoop/yarn/local,
/hadoop/hdfs/data5/hadoop/yarn/local,
/hadoop/hdfs/data6/hadoop/yarn/local,
/hadoop/hdfs/data7/hadoop/yarn/local,
/hadoop/hdfs/data8/hadoop/yarn/local
</value>
<source>yarn-site.xml</source>
</property>

The second is yarn.nodemanager.log-dirs. Much like the local-dirs property, this setting specifies where container log files should go on the local disk. YARN spreads the load around if you specify multiple directories.
And here’s a sample setting:

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>
/hadoop/hdfs/data1/hadoop/yarn/log,
/hadoop/hdfs/data2/hadoop/yarn/log,
/hadoop/hdfs/data3/hadoop/yarn/log,
/hadoop/hdfs/data4/hadoop/yarn/log,
/hadoop/hdfs/data5/hadoop/yarn/log,
/hadoop/hdfs/data6/hadoop/yarn/log,
/hadoop/hdfs/data7/hadoop/yarn/log,
/hadoop/hdfs/data8/hadoop/yarn/log
</value>
<source>yarn-site.xml</source>
</property>

Another YARN property you want to validate is the yarn.nodemanager.resource.memory-mb. This setting specifies the amount of memory YARN is allowed to allocate per worker node.

YARN will only allocate this much memory in total to containers. So it’s important to set this to some value less than the physical memory per worker node.

HDP appears to automatically pick 75% of the physical memory for this setting; our machines have 16GB of RAM each.
Here’s an example:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value></value>
<source>yarn-site.xml</source>
</property>
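
Assuming the 75% heuristic just described, a node with 16GB (16384 MB) of RAM would work out to 0.75 × 16384 = 12288 MB, so a filled-in sketch would be:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<!-- assumption: 75% of 16384 MB of physical RAM -->
<value>12288</value>
</property>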

Of course, you could also consider using an NFS mount; related material follows below.

3. How can I change yarn.nodemanager.local-dirs to point to file:/// (a high-performance NFS mount point)?

Hi, I'm trying to change "yarn.nodemanager.local-dirs" to point to "file:///fast_nfs/yarn/local". This is a high-performance NFS mount point that all the nodes in my cluster have.

When I try to change it in Ambari I can't and the message "Must be a slash or drive at the start, and must not contain white spaces" is displayed.

If I manually change the /etc/hadoop/conf/yarn-site.xml in all the nodes, after restarting YARN the "file:///" is removed from that option.

I want to have all the shuffle happening in my high-performance NFS array instead of in HDFS.

How can I change this behaviour in HDP?

@Raul Pingarrón

The culprit is the "file:///" prefix. You should find a way to create a mount point such as /fast_nfs/yarn/local and reference it as a plain path, hence the message "Must be a slash or drive at the start, and must not contain white spaces", like the list below:

/hadoop/yarn/local,/opt/hadoop/yarn/local,/usr/hadoop/yarn/local,/var/hadoop/yarn/local
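
A hedged sketch of the same idea for the NFS mount in question, using a plain path with no file:// scheme (assuming /fast_nfs is mounted identically on every node):

<property>
<name>yarn.nodemanager.local-dirs</name>
<!-- plain local path; the file:// scheme is what Ambari rejects -->
<value>/fast_nfs/yarn/local</value>
</property>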

Hope that helps

4. How to set yarn.nodemanager.local-dirs on M3 cluster to write to mapr fs

We are running a four-node M3 cluster with one node running NFS. On the nodes that do not have NFS running, we are getting the following error:

1/1 local-dirs are bad: /mapr/clustername/tmp/host_name

What is the best way to set this property in yarn-site.xml to allow all nodes to use the MapR FS /tmp as the default location, and not the local file system /tmp?

I believe the property "yarn.nodemanager.local-dirs" is meant to be a location on the local file system. It cannot be a location on the distributed file system (HDFS or MapR FS).

This property determines the location where the node manager maintains intermediate data (for example during the shuffle phase).

You can find the gory details here: http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/

The default location as you mentioned is /tmp. If you want to improve performance, you could provide multiple directories on separate disks for better I/O throughput.

But you should ascertain that this is indeed a bottleneck and whether a separate disk is warranted for this purpose (or whether you are better off using it as a MapR data disk).

One other thing: the NFS-mounted location (/mapr/clustername/tmp/host_name) is not part of the distributed FS.

MapR makes it seamless to work between its distributed file system and the POSIX file system. But the files of the POSIX system are not stored in any containers/chunks/blocks, etc.

Since the path you specified is really a local directory on the node running NFS, you don't get an error message on that node. But on the other nodes, the system can't find a local directory by that name, and hence it is complaining.
