MapReduce: number of mappers/reducers
It's the other way round. The number of mappers is decided based on the number of splits. In reality it is the job of the InputFormat you are using to create the splits, so you have no idea about the number of mappers until the number of splits has been decided. And splits are not always created based on the HDFS block size; it depends entirely on the logic inside the getSplits() method of your InputFormat.

To better understand this, assume you are processing data stored in MySQL using MR. Since there is no concept of blocks in this case, the theory that splits are always created based on the HDFS block fails. Right? What about split creation then? One possibility is to create splits based on ranges of rows in your MySQL table (and this is what DBInputFormat does). It is only for InputFormats based on FileInputFormat, which handle data stored in files, that splits are created based on the total size, in bytes, of the input files, with the block size treated as an upper bound for input splits.

There is a fundamental difference between an MR split and an HDFS block: a block is a physical piece of data, while a split is just a logical piece that is fed to a mapper.

Coming back to your question: Hadoop allows much more than 200 mappers. Having said that, it doesn't make much sense to have 200 mappers for just 500 MB of data. Always remember that when you talk about Hadoop, you are dealing with very large data. Sending just 2.5 MB of data to each mapper would be overkill. And yes, if there are no free CPU slots, then some mappers may run after the completion of the current mappers. But the MR framework is very intelligent and tries its best to avoid this kind of situation. If the machine holding the data to be processed doesn't have any free CPU slots, the data will be moved to a nearby node where free slots are available and processed there.

HTH
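To make the split-versus-block point concrete, here is a minimal sketch of how an InputFormat over a database table might compute splits from row ranges instead of HDFS blocks. The class and method names are purely illustrative, not the real DBInputFormat API:

import java.util.ArrayList;
import java.util.List;

public class RowRangeSplitter {

    // A split does not contain the data itself; it is only a logical
    // reference. Here it is a half-open row range [startRow, endRow).
    public static final class RowRangeSplit {
        final long startRow;
        final long endRow;
        RowRangeSplit(long startRow, long endRow) {
            this.startRow = startRow;
            this.endRow = endRow;
        }
    }

    // Divide totalRows into at most numSplits roughly equal row ranges;
    // each range would be fed to one mapper.
    public static List<RowRangeSplit> getSplits(long totalRows, int numSplits) {
        List<RowRangeSplit> splits = new ArrayList<>();
        long rowsPerSplit = (totalRows + numSplits - 1) / numSplits; // ceiling division
        for (long start = 0; start < totalRows; start += rowsPerSplit) {
            splits.add(new RowRangeSplit(start, Math.min(start + rowsPerSplit, totalRows)));
        }
        return splits;
    }
}

With totalRows = 1,000,000 and numSplits = 4, this produces four splits of 250,000 rows each, and hence four mappers, with no HDFS block involved anywhere.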
How many mappers/reducers should be set when configuring a Hadoop cluster?
There is no formula. It depends on how many cores and how much memory you have. In general, the number of mappers plus the number of reducers should not exceed the number of cores. Keep in mind that the machine is also running the Task Tracker and Data Node daemons. One general suggestion is to have more mappers than reducers. If I were you, I would run one of my typical jobs with a reasonable amount of data and try it out.
For a normal 7200 rpm disk, 2-3 mappers per disk is a good number. For your system, with 48 GB of memory and 16 CPU threads, I/O is likely to be the bottleneck. I suggest you get multiple disks for each node and set them up as JBOD.
Quoting from "Hadoop: The Definitive Guide", 3rd edition, page 306:
Because MapReduce jobs are normally I/O-bound, it makes sense to have more tasks than processors to get better utilization.
The amount of oversubscription depends on the CPU utilization of jobs you run, but a good rule of thumb is to have a factor of between one and two more tasks (counting both map and reduce tasks) than processors.
A processor in the quote above is equivalent to one logical core.
But this is just theory, and since each use case is most likely different from the next, some tests need to be performed. Still, this number can be a good starting point to test with.
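As a worked example for the machine described above (16 CPU threads, i.e. 16 logical cores in the book's sense): a factor of 1.5 gives roughly 24 concurrent map and reduce tasks per node, which could be split as, say, 16 map slots and 8 reduce slots. The exact split here is an assumption to validate against your own jobs, not a rule.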
The number of mappers is decided in accordance with the data locality principle, as described earlier.

Data locality principle: Hadoop tries its best to run map tasks on nodes where the data is present locally, to optimize network and inter-node communication latency. As the input data is split into pieces and fed to different map tasks, it is desirable to have all the data fed to a map task available on a single node. Since HDFS only guarantees that data of size equal to its block size (64 MB) is present on one node, it is advised to make the split size equal to the HDFS block size so that the map task can take advantage of this data localization. Therefore, 64 MB of data per mapper. If you see some mappers running for a very short time, try to bring down the number of mappers and make them run longer, for a minute or so, as in the sketch below.
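One common lever for getting fewer, longer-running mappers under the old (mapred) API is to raise the minimum split size: FileInputFormat computes the split size as roughly max(minSize, min(maxSize, blockSize)), so a minimum above the block size produces fewer, larger splits. A hedged example, assuming a driver that implements Tool so -D properties are picked up:

hadoop jar myjob.jar MyDriver \
    -D mapred.min.split.size=134217728 \
    input output

With 64 MB blocks, a 128 MB (134217728 bytes) minimum split size packs about two blocks into each split, roughly halving the number of mappers.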
The number of reducers should be slightly less than the number of reduce slots in the cluster (the concept of slots comes in with a pre-configuration in the job/task tracker properties when configuring the cluster), so that all the reducers finish in one wave and make full utilisation of the cluster resources.
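An illustration with assumed numbers: a cluster of 10 task trackers, each configured with mapred.tasktracker.reduce.tasks.maximum = 4, has 40 reduce slots in total, so setting roughly 38 reducers lets all of them finish in a single wave while leaving a couple of slots free for failed or speculative tasks.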
The per-node slot counts are configured in mapred-site.xml via:

mapred.tasktracker.reduce.tasks.maximum
mapred.tasktracker.map.tasks.maximum

These apply to all jobs. If you want to set the number of tasks for a specific job, you can use:
mapred.reduce.tasks
mapred.map.tasks
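For illustration, the cluster-wide slot entries might look like this in mapred-site.xml (the values 16 and 8 are assumptions, to be sized against your own hardware per the discussion above):

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>16</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>8</value>
</property>

And per job, for example from the command line of a driver run through ToolRunner:

hadoop jar myjob.jar MyDriver -D mapred.reduce.tasks=38 input output

Keep in mind that mapred.map.tasks is only a hint to the framework; the actual number of map tasks is still driven by the number of input splits, as explained above.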
Liyin Tang added a comment - 13/Nov/10 01:16
I just finished converting common join into map join based on the file size. There are three flags to control this optimization.

1) set hive.auto.convert.join = true; This means the optimization is enabled. By default this flag is currently disabled, so as not to break any existing test cases. I also put in 25 additional test cases, auto_join0.q - auto_join25.q, which cover this optimization code.

2) Set hive.hashtable.max.memory.usage = 0.9; This means that if the memory usage of the local task exceeds 90% of its heap size, the local task will abort by itself. The Driver will know the local work failed, and it won't submit the MapJoinTask (a map-only MapRedTask) to Hadoop; instead, it will submit the original CommonJoinTask to Hadoop to run.

3) Set hive.smalltable.filesize = 25000000L; This means that if the total file size of the small table is less than 25 MB, the map join task will run; if not, the original common join task runs.
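Putting the three flags together in a Hive session (the query, table, and column names below are made up for illustration):

set hive.auto.convert.join = true;
set hive.hashtable.max.memory.usage = 0.9;
set hive.smalltable.filesize = 25000000;

-- If the small table 'dim' totals under 25 MB, Hive runs this as a map join;
-- otherwise it falls back to the common (reduce-side) join.
SELECT f.id, d.name
FROM fact f JOIN dim d ON (f.dim_id = d.id);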