Hadoop 2.6.0 Distributed Deployment Reference Manual
A Word version of this reference manual can be downloaded from: http://download.csdn.net/detail/u012875880/8291493
1. Environment Description
1.1 Installation Environment
In this example, the operating system is CentOS 7.0, the JDK is Oracle HotSpot 1.7, the Hadoop version is Apache Hadoop 2.6.0, and the operating user is hadoop.
1.2 Hadoop Cluster Environment
The cluster nodes are as follows:
| Hostname | IP Address | Role |
| --- | --- | --- |
| ResourceManager | 172.15.0.2 | ResourceManager & MR JobHistory Server |
| NameNode | 172.15.0.3 | NameNode |
| SecondaryNameNode | 172.15.0.4 | SecondaryNameNode |
| DataNode01 | 172.15.0.5 | DataNode & NodeManager |
| DataNode02 | 172.15.0.6 | DataNode & NodeManager |
| DataNode03 | 172.15.0.7 | DataNode & NodeManager |
| DataNode04 | 172.15.0.8 | DataNode & NodeManager |
| DataNode05 | 172.15.0.9 | DataNode & NodeManager |
Note: in the table above, "&" joins multiple roles; for example, the host "ResourceManager" carries two roles, ResourceManager and MR JobHistory Server.
2. Base Environment Installation and Configuration
2.1 Add the hadoop User
useradd hadoop
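If you will log in as this user with a password (for example, before the SSH keys of section 2.3 are in place), also set one; the password value is your own choice:
passwd hadoop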
The hadoop user is the account used to install and operate the Hadoop cluster.
2.2 Install JDK 1.7
CentOS 7 ships with OpenJDK 1.7; in this example it is replaced with Oracle HotSpot 1.7, installed by unpacking the binary archive, with /opt/ as the installation directory.
① Check the currently installed JDK RPM packages:
rpm -qa | grep jdk
java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.51-2.4.5.5.el7.x86_64
② Remove the bundled JDK:
rpm -e --nodeps java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64
rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.51-2.4.5.5.el7.x86_64
③ Install the chosen JDK
Change into the directory containing the installation package and extract it there.
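A minimal sketch of this step; the tarball name (jdk-7u71-linux-x64.tar.gz) and the unpacked directory name (jdk1.7.0_71) are assumptions that depend on the exact build downloaded:
cd /opt
tar -zxvf jdk-7u71-linux-x64.tar.gz
# Rename (or symlink) the unpacked directory so that JAVA_HOME=/opt/jdk1.7 below matches it
mv jdk1.7.0_71 jdk1.7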
④ Configure environment variables
Edit ~/.bashrc or /etc/profile and append the following:
#JAVA
export JAVA_HOME=/opt/jdk1.7
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/lib
export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib
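To apply the variables in the current shell and confirm that the Oracle JDK is the one found on the PATH:
source ~/.bashrc
java -version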
2.3 Passwordless SSH Login
① Passwordless SSH login must be set up among the 8 hosts listed in the table above.
② As the hadoop user, go to the home directory and generate a key pair with the command ssh-keygen -t rsa.
③ Create the public-key authentication file authorized_keys and write the contents of the id_rsa.pub file under ~/.ssh into it:
more id_rsa.pub > authorized_keys
④ Set the permissions of the ~/.ssh directory and the authorized_keys file:
chmod 700 ~/.ssh;chmod 600 ~/.ssh/authorized_keys
⑤ Repeat the steps above on every node, and append each node's ~/.ssh/id_rsa.pub public key to the authorized_keys files of all the other hosts (see the sketch below).
All of the above can also be done with a single command line:
rm -rf ~/.ssh;ssh-keygen -t rsa;chmod 700 ~/.ssh;more ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys;chmod 600 ~/.ssh/authorized_keys;
Note: on CentOS 6, DSA keys (ssh-keygen -t dsa) also work for passwordless login; on CentOS 7 only RSA works. With DSA you can only ssh into the local machine without a password, not the other hosts.
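A minimal sketch of the key distribution in step ⑤, assuming the hostnames from the table in section 1.2 (the /etc/hosts entries from section 2.4 must already resolve) and that ssh-copy-id from openssh-clients is available; run it on every node after generating that node's key pair:
for host in ResourceManager NameNode SecondaryNameNode DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
  ssh-copy-id hadoop@$host   # appends this node's id_rsa.pub to the remote authorized_keys
done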
2.4 Edit the hosts Mapping File
Edit the /etc/hosts file on every node and append the following entries:
172.15.0.2 ResourceManager
172.15.0.3 NameNode
172.15.0.4 SecondaryNameNode
172.15.0.5 DataNode01
172.15.0.6 DataNode02
172.15.0.7 DataNode03
172.15.0.8 DataNode04
172.15.0.9 DataNode05
172.15.0.5 NodeManager01
172.15.0.6 NodeManager02
172.15.0.7 NodeManager03
172.15.0.8 NodeManager04
172.15.0.9 NodeManager05
3. Hadoop Installation and Configuration
3.1 Common Installation and Configuration
The following operations are common to all nodes, i.e., identical on each node. Repeat them on every node:
① Copy the Hadoop archive (hadoop-2.6.0.tar) to /opt and extract it:
tar -xvf hadoop-2.6.0.tar
The extracted hadoop-2.6.0 directory (/opt/hadoop-2.6.0) is the Hadoop installation root.
② Change the owner of the installation directory hadoop-2.6.0 to the hadoop user:
chown -R hadoop:hadoop /opt/hadoop-2.6.0
③ Add environment variables:
#hadoop
export HADOOP_HOME=/opt/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
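After reloading the profile (/etc/profile or ~/.bashrc, wherever the variables were added), a quick sanity check that the Hadoop binaries are on the PATH:
source /etc/profile
hadoop version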
3.2 Per-Node Configuration
Extract the following configuration files and distribute them into the "$HADOOP_HOME/etc/hadoop" directory on every node, confirming if prompted to overwrite existing files (see the scp sketch below).
Hadoop configuration files: http://download.csdn.net/detail/u012875880/8291517
Note: for the configuration parameter settings of each node, refer to Appendix 1 or Appendix 2 below.
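A minimal sketch of pushing the extracted files to all nodes from one machine; the local conf/ directory holding them is an assumption:
for host in ResourceManager NameNode SecondaryNameNode DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
  scp conf/* hadoop@$host:/opt/hadoop-2.6.0/etc/hadoop/
done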
4. Formatting and Starting the Cluster
4.1 Format the Cluster HDFS File System
After installation, log in to the NameNode and run hdfs namenode -format to format the cluster's HDFS file system (the command formats the local dfs.namenode.name.dir, so it must be run on the NameNode host).
Note: if this is not the first time the HDFS file system is being formatted, first empty the dfs.namenode.name.dir directory on the NameNode and the dfs.datanode.data.dir directory on every DataNode (in this example both live under /home/hadoop/hadoopdata), as sketched below.
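A minimal sketch of that cleanup, assuming the directory layout configured in the appendices (run the first line on the NameNode, the second on each DataNode):
rm -rf /home/hadoop/hadoopdata/hdfs/namenode/*
rm -rf /home/hadoop/hadoopdata/hdfs/datanode/*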
4.2 Start the Hadoop Cluster
Log in to the following hosts and run the corresponding commands:
① Log in to the ResourceManager and run start-yarn.sh to start the YARN resource management system.
② Log in to the NameNode and run start-dfs.sh to start the HDFS file system.
③ Log in to each node in turn and run jps to verify that the following Java processes are running:
Process running on the ResourceManager node: ResourceManager
Process running on the NameNode node: NameNode
Process running on the SecondaryNameNode node: SecondaryNameNode
Processes running on each DataNode node: DataNode & NodeManager
If all of the above looks normal, the Hadoop cluster has started successfully.
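Two optional checks, sketched under the assumptions that the default Hadoop 2.6 ports are in use and that the examples jar sits at the standard path inside the distribution. The MR JobHistory Server role from section 1.2 is not started by start-yarn.sh and has its own daemon script:
# On the ResourceManager host (which carries the JobHistory Server role here):
mr-jobhistory-daemon.sh start historyserver
# Web UIs: http://NameNode:50070 (HDFS), http://ResourceManager:8088 (YARN),
# http://ResourceManager:19888 (JobHistory)
# Smoke test with the bundled examples jar:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 10 100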
Appendix 1: Key Configuration Reference
1 core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://NameNode:9000</value>
<description>NameNode URI</description>
</property>
</configuration>
- The property fs.defaultFS is the NameNode address, of the form hdfs://hostname(or IP):port.
2 hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>SecondaryNameNode:50090</value>
</property>
</configuration>
- The property dfs.namenode.name.dir is the local filesystem directory where the NameNode stores the namespace and transaction-log metadata; it defaults to /tmp/hadoop-{username}/dfs/name.
- The property dfs.datanode.data.dir is the local filesystem directory where a DataNode stores HDFS blocks, given as file://local-directory; it defaults to /tmp/hadoop-{username}/dfs/data.
- The property dfs.namenode.secondary.http-address is the SecondaryNameNode host and port (it can be omitted if no separate SecondaryNameNode role is needed).
3 mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Execution framework set to Hadoop YARN.</description>
</property>
</configuration>
- The property mapreduce.framework.name selects the framework used to run MapReduce jobs; it defaults to local and must be changed to yarn.
4 yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ResourceManager</value>
<description>ResourceManager host</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>
</configuration>
- The property yarn.resourcemanager.hostname specifies the ResourceManager host address.
- The property yarn.nodemanager.aux-services names the shuffle service used by MapReduce applications.
5 hadoop-env.sh
JAVA_HOME points to the current Java installation directory:
export JAVA_HOME=/opt/jdk1.7
6 slaves
The master nodes of the cluster (NameNode and ResourceManager) list their slave nodes in the slaves file, as follows:
The slaves file on the NameNode contains:
DataNode01
DataNode02
DataNode03
DataNode04
DataNode05
The slaves file on the ResourceManager contains:
NodeManager01
NodeManager02
NodeManager03
NodeManager04
NodeManager05
Appendix 2: Full Configuration Reference
Note: only some of the parameters below are mandatory (those already covered in Appendix 1); the remaining entries are shown with their default values.
1 core-site.xml
<configuration>
<!--Configurations for NameNode (SecondaryNameNode), DataNode, NodeManager:-->
<property>
<name>fs.defaultFS</name>
<value>hdfs://NameNode:9000</value>
<description>NameNode URI</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Size of read/write buffer used in SequenceFiles,The default value is 131072</description>
</property>
</configuration>
- The property fs.defaultFS is the NameNode address, of the form hdfs://hostname(or IP):port.
2 hdfs-site.xml
<configuration>
<!--Configurations for NameNode:-->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>SecondaryNameNode:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<!--Configurations for DataNode:-->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
- l 属性“dfs.namenode.name.dir”表示NameNode存储命名空间和操作日志相关的元数据信息的本地文件系统文件夹。该项默认本地路径为”/tmp/hadoop-{username}/dfs/name”。
- l 属性”dfs.datanode.data.dir“表示DataNode节点存储HDFS文件的本地文件系统文件夹,由”file://本地文件夹”组成,该项默认本地路径为”/tmp/hadoop-{username}/dfs/data”。
- l 属性“dfs.namenode.secondary.http-address”表示SecondNameNode主机及port号(假设无需额外指定SecondNameNode角色,能够不进行此项配置);
3 mapred-site.xml
<configuration>
<!--Configurations for MapReduce Applications:-->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Execution framework set to Hadoop YARN.</description>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
<description>Larger resource limit for maps.</description>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024M</value>
<description>Larger heap-size for child jvms of maps.</description>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>1024</value>
<description>Larger resource limit for reduces.</description>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560M</value>
<description>Larger heap-size for child jvms of reduces.</description>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>512</value>
<description>Higher memory-limit while sorting data for efficiency.</description>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>10</value>
<description>More streams merged at once while sorting files.</description>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>5</value>
<description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
</property>
<!--Configurations for MapReduce JobHistory Server:-->
<property>
<name>mapreduce.jobhistory.address</name>
<value>ResourceManager:10020</value>
<description>MapReduce JobHistory Server host:port Default port is 10020</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>ResourceManager:19888</value>
<description>MapReduce JobHistory Server Web UI host:port Default port is 19888</description>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/mr-history/tmp</value>
<description>Directory where history files are written by MapReduce jobs. Default is "/mr-history/tmp"</description>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/mr-history/done</value>
<description>Directory where history files are managed by the MR JobHistory Server.Default value is "/mr-history/done"</description>
</property>
</configuration>
- The property mapreduce.framework.name selects the framework used to run MapReduce jobs; it defaults to local and must be changed to yarn.
4 yarn-site.xml
<configuration>
<!--Configurations for ResourceManager and NodeManager:-->
<property>
<name>yarn.acl.enable</name>
<value>false</value>
<description>Enable ACLs? Defaults to false. Valid values are "true" and "false".</description>
</property>
<property>
<name>yarn.admin.acl</name>
<value>*</value>
<description>ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>false</value>
<description>Configuration to enable or disable log aggregation</description>
</property>
<!--Configurations for ResourceManager:-->
<property>
<name>yarn.resourcemanager.address</name>
<value>ResourceManager:8032</value>
<description>ResourceManager host:port for clients to submit jobs.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>ResourceManager:8030</value>
<description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>ResourceManager:8031</value>
<description>ResourceManager host:port for NodeManagers.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>ResourceManager:8033</value>
<description>ResourceManager host:port for administrative commands.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>ResourceManager:8088</value>
<description>ResourceManager web-ui host:port. NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ResourceManager</value>
<description>ResourceManager host</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>ResourceManager Scheduler class CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler.The default value is "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler".
</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
<description>Minimum limit of memory to allocate to each container request at the Resource Manager.NOTES:In MBs</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
<description>Maximum limit of memory to allocate to each container request at the Resource Manager.NOTES:In MBs</description>
</property>
<!--Configurations for History Server:-->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>-1</value>
<description>How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node.</description>
</property>
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>-1</value>
<description>Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node.</description>
</property>
<!--Configurations for NodeManager:-->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
<description>Resource i.e. available physical memory, in MB, for given NodeManager.
The default value is 8192.
NOTES:Defines total available resources on the NodeManager to be made available to running containers
</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
<description>Maximum ratio by which virtual memory usage of tasks may exceed physical memory.
The default value is 2.1
NOTES:The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.
</description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>${hadoop.tmp.dir}/nm-local-dir</value>
<description>Comma-separated list of paths on the local filesystem where intermediate data is written.
The default value is "${hadoop.tmp.dir}/nm-local-dir"
NOTES:Multiple paths help spread disk i/o.
</description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>${yarn.log.dir}/userlogs</value>
<description>Comma-separated list of paths on the local filesystem where logs are written
The default value is "${yarn.log.dir}/userlogs"
NOTES:Multiple paths help spread disk i/o.
</description>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
<description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.
The default value is "10800"
</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/logs</value>
<description>HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled.
The default value is "/logs" or "/tmp/logs"
</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
<description>Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam} Only applicable if log-aggregation is enabled.</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>
</configuration>
- The property yarn.resourcemanager.hostname specifies the ResourceManager host address.
- The property yarn.nodemanager.aux-services names the shuffle service used by MapReduce applications.
5 hadoop-env.sh
JAVA_HOME points to the current Java installation directory:
export JAVA_HOME=/opt/jdk1.7
6 slaves
The master nodes of the cluster (NameNode and ResourceManager) list their slave nodes in the slaves file, as follows:
The slaves file on the NameNode contains:
DataNode01
DataNode02
DataNode03
DataNode04
DataNode05
The slaves file on the ResourceManager contains:
NodeManager01
NodeManager02
NodeManager03
NodeManager04
NodeManager05
Appendix 3: Detailed Parameter Reference
Configuring the Hadoop Daemons in Non-Secure Mode
This section deals with important parameters to be specified in the given configuration files:
· conf/core-site.xml
| Parameter | Value | Notes |
| --- | --- | --- |
| fs.defaultFS | NameNode URI | hdfs://host:port/ |
| io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles. |
· conf/hdfs-site.xml
o Configurations for NameNode:
| Parameter | Value | Notes |
| --- | --- | --- |
| dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. |
| dfs.namenode.hosts / dfs.namenode.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable datanodes. |
| dfs.blocksize | 268435456 | HDFS blocksize of 256MB for large file-systems. |
| dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from large number of DataNodes. |
o Configurations for DataNode:
| Parameter | Value | Notes |
| --- | --- | --- |
| dfs.datanode.data.dir | Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. |
· conf/yarn-site.xml
o Configurations for ResourceManager and NodeManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.acl.enable | true / false | Enable ACLs? Defaults to false. |
| yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access. |
| yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation |
o Configurations for ResourceManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port |
| yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources. | host:port |
| yarn.resourcemanager.resource-tracker.address | ResourceManager host:port for NodeManagers. | host:port |
| yarn.resourcemanager.admin.address | ResourceManager host:port for administrative commands. | host:port |
| yarn.resourcemanager.webapp.address | ResourceManager web-ui host:port. | host:port |
| yarn.resourcemanager.hostname | ResourceManager host. | host |
| yarn.resourcemanager.scheduler.class | ResourceManager Scheduler class. | CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler |
| yarn.scheduler.minimum-allocation-mb | Minimum limit of memory to allocate to each container request at the Resource Manager. | In MBs |
| yarn.scheduler.maximum-allocation-mb | Maximum limit of memory to allocate to each container request at the Resource Manager. | In MBs |
| yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers. |
o Configurations for NodeManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.nodemanager.resource.memory-mb | Resource i.e. available physical memory, in MB, for given NodeManager | Defines total available resources on the NodeManager to be made available to running containers |
| yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio. |
| yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled. |
| yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for Map Reduce applications. |
o Configurations for History Server:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node. |
| yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node. |
· conf/mapred-site.xml
o Configurations for MapReduce Applications:
| Parameter | Value | Notes |
| --- | --- | --- |
| mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN. |
| mapreduce.map.memory.mb | 1536 | Larger resource limit for maps. |
| mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps. |
| mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces. |
| mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces. |
| mapreduce.task.io.sort.mb | 512 | Higher memory-limit while sorting data for efficiency. |
| mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files. |
| mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from very large number of maps. |
o Configurations for MapReduce JobHistory Server:
| Parameter | Value | Notes |
| --- | --- | --- |
| mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020. |
| mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888. |
| mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs. |
| mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server. |