Installing and Configuring Hadoop 2.2.0 on Ubuntu 13.10
Prerequisites:
1. A JDK is already installed
Add Hadoop Group and User
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
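To confirm the account and group were created correctly, a quick check (an extra verification step, not part of the original notes):
$ id hduser
$ groups hduser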
Setup SSH Certificate
$ ssh-keygen -t rsa -P ''
...
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
...
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost
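If ssh localhost still prompts for a password, the permissions on ~/.ssh are a common culprit; a minimal fix (an assumption, not a step from the original walkthrough):
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys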
Download Hadoop 2.2.0
$ cd ~
$ wget http://www.trieuvan.com/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
$ sudo tar vxzf hadoop-2.2.0.tar.gz -C /usr/local
$ cd /usr/local
$ sudo mv hadoop-2.2.0 hadoop
$ sudo chown -R hduser:hadoop hadoop
Setup Hadoop Environment Variables
$ vi ~/.bashrc
#paste the following at the end of the file
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
###end of paste
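After saving, reload the shell configuration and confirm the variables resolve (a quick sanity check, not part of the original walkthrough):
$ source ~/.bashrc
$ echo $HADOOP_INSTALL
/usr/local/hadoop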
$ cd /usr/local/hadoop/etc/hadoop
$ vi hadoop-env.sh
#modify JAVA_HOME
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
Verify the installation:
$ hadoop version
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using /usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar
The steps above are collected in the following single-node installation script:
#!/bin/bash
# This script installs Hadoop 2.2.0 on a single node
cd ~
sudo apt-get update
#### HADOOP INSTALLATION ####
# Download java jdk
#if [ ! -f jdk-7u45-linux-i586.tar.gz ]; then
# wget http://uni-smr.ac.ru/archive/dev/java/SDKs/sun/j2se/7/jdk-7u45-linux-i586.tar.gz
#fi
#sudo mkdir /usr/lib/jvm
#sudo tar zxvf jdk-7u45-linux-i586.tar.gz -C /usr/lib/jvm
# Then set the environment variables
#sudo sh -c 'echo export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45 >> ~/.bashrc'
#sudo sh -c 'echo export JRE_HOME=${JAVA_HOME}/jre >> ~/.bashrc'
#sudo sh -c 'echo export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib >> ~/.bashrc'
#sudo sh -c 'echo export PATH=${JAVA_HOME}/bin:$PATH >> ~/.bashrc'
# Apply the changes immediately
#source ~/.bashrc
# Configure the default JDK version
#sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/bin/java 300
#sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.7.0_45/bin/javac 300
#sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/jdk1.7.0_45/bin/jar 300
#sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/jdk1.7.0_45/bin/javah 300
#sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.7.0_45/bin/javap 300
# Check whether the setting succeeded
#sudo update-alternatives --config java
# Check the JDK version to confirm the installation succeeded
#java -version
# Add a dedicated hadoop user for running Hadoop jobs
#sudo addgroup hadoop
#sudo adduser --ingroup hadoop hduser
#sudo adduser hduser sudo
# Install openssh-server
sudo apt-get install openssh-server
# Generate an ssh key and append it to the authorized_keys file
sudo -u hduser ssh-keygen -t rsa -P ''
sudo sh -c 'cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys'
#ssh localhost
# Download and install Hadoop, then fix directory ownership
cd ~
if [ ! -f hadoop-2.2.0.tar.gz ]; then
#wget http://apache.osuosl.org/hadoop/common/hadoop-2.2.0.tar.gz
wget http://xx.xx.xx.xx/downloads/hadoop-2.2.0.tar.gz
fi
sudo tar vxzf hadoop-2.2.0.tar.gz -C /usr/local
cd /usr/local
sudo mv hadoop-2.2.0 hadoop
sudo chown -R hduser:hadoop hadoop
# Add the Hadoop environment variables to .bashrc
cd ~
sudo sh -c "echo export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45 >> /home/hduser/.bashrc"
sudo sh -c 'echo export HADOOP_INSTALL=/usr/local/hadoop >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$JAVA_HOME/bin >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$HADOOP_INSTALL/bin >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$HADOOP_INSTALL/sbin >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_MAPRED_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_COMMON_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_HDFS_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export YARN_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_COMMON_LIB_NATIVE_DIR=\$\{HADOOP_INSTALL\}/lib/native >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_OPTS=\"-Djava.library.path=\$HADOOP_INSTALL/lib\" >> /home/hduser/.bashrc'
# Set Hadoop's JAVA_HOME in hadoop-env.sh
cd /usr/local/hadoop/etc/hadoop
sudo -u hduser sed -i.bak s=\${JAVA_HOME}=/usr/lib/jvm/jdk1.7.0_45/=g hadoop-env.sh
pwd
# Check that Hadoop is installed
/usr/local/hadoop/bin/hadoop version
# Edit configuration files
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>fs\.default\.name\</name>\<value>hdfs://localhost:9000\</value>\</property>=g' core-site.xml
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>yarn\.nodemanager\.aux-services</name>\<value>mapreduce_shuffle</value>\</property>\<property>\<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>\<value>org\.apache\.hadoop\.mapred\.ShuffleHandler</value>\</property>=g' yarn-site.xml
sudo -u hduser cp mapred-site.xml.template mapred-site.xml
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>mapreduce\.framework\.name</name>\<value>yarn</value>\</property>=g' mapred-site.xml
# Create the HDFS directories referenced in hdfs-site.xml
sudo mkdir -p /home/hduser/mydata/hdfs/namenode
sudo mkdir -p /home/hduser/mydata/hdfs/datanode
sudo chown -R hduser:hadoop /home/hduser/mydata
cd /usr/local/hadoop/etc/hadoop
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>dfs\.replication</name>\<value>1\</value>\</property>\<property>\<name>dfs\.namenode\.name\.dir</name>\<value>file:/home/hduser/mydata/hdfs/namenode</value>\</property>\<property>\<name>dfs\.datanode\.data\.dir</name>\<value>file:/home/hduser/mydata/hdfs/datanode</value>\</property>=g' hdfs-site.xml
### Testing Hadoop
## Run the following commands as hduser to start and test hadoop
#sudo su hduser
# Format Namenode
#hdfs namenode -format
# Start Hadoop Service
#start-dfs.sh
#start-yarn.sh
# Check status
#jps
# Example
#cd /usr/local/hadoop
#hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
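If the single-node test succeeds, running jps as hduser should list the Hadoop daemons (expected process names for a default Hadoop 2.2.0 pseudo-distributed setup; jps prefixes each name with a process ID, omitted here):
$ jps
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager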
Cluster Setup:
Disable IPv6
Edit /etc/sysctl.conf and append the following lines:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Reboot for the change to take effect, then verify with:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
A return value of 0 means IPv6 is still enabled; a return value of 1 means it has been disabled.
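To apply the setting without rebooting, the sysctl configuration can also be reloaded directly (a convenience shortcut, not in the original instructions):
$ sudo sysctl -p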
Configure the hosts file
Add the following entries to /etc/hosts on the master and on every slave:
xxx.xxx.xxx.xx0 master
xxx.xxx.xxx.xx1 b-w01
xxx.xxx.xxx.xx2 b-w02
xxx.xxx.xxx.xx3 b-w03
xxx.xxx.xxx.xx4 b-w04
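With the entries in place, confirm that each hostname resolves before going further (a simple check, not part of the original write-up):
$ getent hosts master b-w01 b-w02 b-w03 b-w04
$ ping -c 1 b-w01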
Configure ssh so that the master can log in to each slave without a password:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slaveXXXX
Enter the slave's password when prompted.
Then test with ssh <slave>; you should see a successful login without a password prompt.
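Since the key has to be copied to every slave, a short loop saves repetition (a sketch assuming the hostnames listed above):
for host in b-w01 b-w02 b-w03 b-w04; do
  ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@$host
done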
Configuration file: slaves (in /usr/local/hadoop/etc/hadoop/)
b-w01
b-w02
b-w03
b-w04
Configuration file: core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/mydata/hadoop_temp</value>
</property>
</configuration>
Configuration file: hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Configuration file: mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
Configuration file: yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
</configuration>
Finally, push the configuration files above to every slave with a script:
#!/bin/bash
# Copy the Hadoop configuration files from the master to every slave
for host in b-w01 b-w02 b-w03 b-w04; do
    for f in slaves core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
        scp /usr/local/hadoop/etc/hadoop/$f hduser@$host:/usr/local/hadoop/etc/hadoop/$f
    done
done
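Before the web UIs below will respond, format HDFS and start the daemons on the master as hduser; a minimal start-up sequence assuming the configuration above (the history-server step matches the jobhistory addresses set in mapred-site.xml):
hdfs namenode -format
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
jps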
ResourceManager web UI (job monitoring):
http://master:8088
NameNode web UI:
http://master:50070/
Test:
cd /usr/local/hadoop
hadoop fs -mkdir /output
hadoop fs -mkdir /input
hadoop fs -put ~/wordCounterTest.txt /input/wordCounterTest.txt
hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input/wordCounterTest.txt /output/wordcountresult
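Once the job finishes, the result can be inspected with standard HDFS commands (the part-r-00000 file name assumes the default single-reducer output naming):
hadoop fs -ls /output/wordcountresult
hadoop fs -cat /output/wordcountresult/part-r-00000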