Installing a Hadoop Cluster with a Shell Script
Although the script automates the whole installation, there is still plenty of room for improvement, for example:
1. The script currently only works when run as root, otherwise it fails; a root-privilege check should be added up front (see the sketch after this list);
2. A few helper functions could be added to reduce duplicated code;
3. Some of the checks are not very smart yet;
......
With limited time and ability, this is as far as it goes for now.
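For points 1 and 2, a minimal sketch of what such a check and a reusable helper could look like is shown below. The function names require_root and ensure_dir are hypothetical and are not used in the installHadoop script that follows; they only illustrate the idea.
#!/bin/bash
# Hypothetical helpers, not part of the installHadoop script below.

# Abort early unless the script is running as root.
require_root() {
    if [ "$(id -u)" -ne 0 ]; then
        echo "This script must be run as root" >&2
        exit 1
    fi
}

# Create a directory only if it does not exist yet,
# replacing the repeated "check, mkdir, cd" blocks in the script.
ensure_dir() {
    local dir=$1
    if [ ! -d "$dir" ]; then
        echo "{Create $dir folder}"
        mkdir -p "$dir"
    else
        echo "{$dir folder already exists}"
    fi
}

require_root
ensure_dir /usr/local/development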
The code of the installHadoop file is as follows:
#!/bin/bash
# root_password="123456"
jdk_tar=jdk-8u65-linux-i586.tar.gz
jdk_url=http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-i586.tar.gz
jdk_version=jdk1.8.0_65
java_version=1.8.0_65
jdk_install_path=/usr/local/development
hadoop_url=http://101.44.1.4/files/2250000004303235/mirrors.hust.edu.cn/apache/hadoop/common/stable1/hadoop-1.2.1.tar.gz
hadoop_version=hadoop-1.2.1
hadoop_tar=hadoop-1.2.1.tar.gz
hadoop_install_path=hadoop
hadoop_tmp_path=/home/hadoop/hadoop_tmp
hadoop_name_path=/home/hadoop/hdfs/name
hadoop_data_path=/home/hadoop/hdfs/data
user_name=hadoop
user_passwd=hadoop

# Check for root access (not implemented yet)
#if [ $? -ne 0 ] ;then
#    echo "No root access"
#    exit
#fi

shFilePath=$(pwd)

# Check whether a JDK is already installed
java -version &> /dev/null
if [ $? -eq 0 ] ;then
    echo "{A JDK is already installed on this machine}"
    java -version
else
    # Create the install directory if it does not exist
    if [ ! -d $jdk_install_path ] ;then
        echo "{Create $jdk_install_path folder to install the JDK}"
        mkdir -p $jdk_install_path
        cd $jdk_install_path
        echo "{Succeeded in creating the $jdk_install_path folder}"
    else
        echo "{The $jdk_install_path folder already exists}"
        cd $jdk_install_path
    fi

    # Check whether the JDK has already been unpacked
    if [ ! -d $jdk_version ] ;then
        # Check whether the JDK tarball is already present
        if [ ! -f $jdk_tar ] ;then
            echo "{Download $jdk_tar}"
            wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" $jdk_url
        fi
        echo "{Untar $jdk_tar}"
        tar -zxvf $jdk_tar
    else
        echo "{The $jdk_version folder already exists in $jdk_install_path/}"
    fi

    # Set the JDK environment variables via a script in /etc/profile.d
    echo "{Set the Java environment}"
    cd /etc/profile.d/
    # The environment file is named after the install directory, e.g. development.sh
    jdk_env_file=$(basename $jdk_install_path).sh
    touch $jdk_env_file
    echo "export JAVA_HOME=$jdk_install_path/$jdk_version" >> $jdk_env_file
    echo "export JRE_HOME=\$JAVA_HOME/jre" >> $jdk_env_file
    echo "export CLASSPATH=.:\$JAVA_HOME/lib:\$JRE_HOME/lib:\$CLASSPATH" >> $jdk_env_file
    echo "export PATH=\$JAVA_HOME/bin:\$JRE_HOME/bin:\$PATH" >> $jdk_env_file
    source $jdk_env_file

    # Check the installed Java version (java -version prints to stderr)
    java -version 2>&1 | grep "$java_version" &> /dev/null
    if [ $? -eq 0 ] ;then
        echo "{Succeeded in installing $jdk_version}"
    fi
fi

# Passwordless ssh login
echo "{Configure the ssh service for passwordless ssh login}"
sudo yum -y install openssh-clients openssh-server
# Update /etc/ssh/sshd_config
# RSAAuthentication
RSAAuthentication_lineNum=`awk '/RSAAuthentication yes/{print NR}' /etc/ssh/sshd_config`
RSAAuthentication="RSAAuthentication yes"
sed -i "${RSAAuthentication_lineNum}s/^.*/${RSAAuthentication}/g" /etc/ssh/sshd_config
# PubkeyAuthentication
PubkeyAuthentication_lineNum=`awk '/PubkeyAuthentication yes/{print NR}' /etc/ssh/sshd_config`
PubkeyAuthentication="PubkeyAuthentication yes"
sed -i "${PubkeyAuthentication_lineNum}s/^.*/${PubkeyAuthentication}/g" /etc/ssh/sshd_config
# AuthorizedKeysFile
AuthorizedKeysFile_lineNum=`awk '/AuthorizedKeysFile/{print NR}' /etc/ssh/sshd_config`
AuthorizedKeysFile="AuthorizedKeysFile .ssh\/authorized_keys"
sed -i "${AuthorizedKeysFile_lineNum}s/^.*/${AuthorizedKeysFile}/g" /etc/ssh/sshd_config
echo "{The changes made to sshd_config are as follows}"
sed -n "${RSAAuthentication_lineNum},${AuthorizedKeysFile_lineNum}p" /etc/ssh/sshd_config
# Restart the sshd service
service sshd restart
echo "{Finished updating sshd_config}"

# Generate a key pair and authorize it
if [ ! -d ~/.ssh ] ;then
    mkdir ~/.ssh
fi
cd ~/.ssh
echo y | ssh-keygen -t rsa -P '' -f id_rsa
if [ ! -f authorized_keys ] ;then
    touch authorized_keys
    cat id_rsa.pub > authorized_keys
else
    cat id_rsa.pub >> authorized_keys
fi
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Download Hadoop
cd /home/$hadoop_install_path &> /dev/null
if [ $? -ne 0 ] ;then
    echo "{Create the /home/$hadoop_install_path folder to install Hadoop}"
    cd /home
    mkdir $hadoop_install_path
    cd $hadoop_install_path
    echo "{Succeeded in creating the /home/$hadoop_install_path folder}"
else
    echo "{The /home/$hadoop_install_path folder already exists}"
    cd /home/$hadoop_install_path
fi

# Check whether the hadoop-1.2.1 folder exists
if [ ! -d "$hadoop_version" ] ;then
    # Check whether hadoop-1.2.1.tar.gz exists
    if [ ! -f "$hadoop_tar" ] ;then
        echo "{Download $hadoop_tar}"
        wget --no-check-certificate --no-cookies $hadoop_url
    fi
    echo "{Untar $hadoop_tar}"
    tar -zxvf $hadoop_tar
else
    echo "{The $hadoop_version folder already exists in /home/$hadoop_install_path/}"
fi

# Enter the configuration folder (conf/ for Hadoop 1.x, etc/hadoop/ for 2.x)
cd $hadoop_version
if [ ! -d "conf" ] ;then
    cd etc/hadoop/
else
    cd conf
fi

# Update JAVA_HOME in hadoop-env.sh
java_home_line_num=`awk '/export JAVA_HOME/{print NR}' hadoop-env.sh`
JAVAHOME="export JAVA_HOME=$jdk_install_path/$jdk_version"
# -i modifies the file in place; use | as the sed delimiter so the slashes in the path need no escaping
sed -i "${java_home_line_num}s|^.*|${JAVAHOME}|g" hadoop-env.sh
cat hadoop-env.sh | grep "JAVA_HOME"
echo "{Finished updating hadoop-env.sh}"
hadoop_config_path=$(pwd)
#echo $hadoop_config_path
#echo $shFilePath
cd $shFilePath

# Update core-site.xml
cat core-site.xml > $hadoop_config_path/core-site.xml
if [ ! -d $hadoop_tmp_path ] ;then
    mkdir -p $hadoop_tmp_path
fi
rm -rf $hadoop_tmp_path/*
if [ ! -d $hadoop_name_path ] ;then
    mkdir -p $hadoop_name_path
fi
chmod g-w $hadoop_name_path
rm -rf $hadoop_name_path/*
if [ ! -d $hadoop_data_path ] ;then
    mkdir -p $hadoop_data_path
fi
chmod g-w $hadoop_data_path
rm -rf $hadoop_data_path/*
# Update mapred-site.xml
cat mapred-site.xml > $hadoop_config_path/mapred-site.xml
# Update hdfs-site.xml
cat hdfs-site.xml > $hadoop_config_path/hdfs-site.xml

cd $hadoop_config_path
echo "{Check core-site.xml}"
#cat core-site.xml
echo "{Check mapred-site.xml}"
#cat mapred-site.xml
echo "{Check hdfs-site.xml}"
#cat hdfs-site.xml
echo "{Finished configuring hadoop}"
# Add a hadoop account
id $user_name &> /dev/null
if [ $? -ne 0 ] ;then
    echo "{Add user $user_name}"
    sudo useradd -mr $user_name
fi
# Set the password for the hadoop account
echo $user_passwd | sudo passwd --stdin $user_name

echo "{Format the hadoop namenode}"
echo Y | ../bin/hadoop namenode -format
cd ../bin/
bash stop-all.sh
echo "{Start hadoop}"
bash start-all.sh
result=`jps | awk '{print $2}' | xargs`
expect_result="JobTracker NameNode DataNode TaskTracker Jps SecondaryNameNode"
if [ "$result" == "$expect_result" ] ;then
    echo "{Congratulations!!! Hadoop was installed successfully!}"
else
    echo "{Sorry, not all daemons are up; trying to restart hadoop!}"
    bash stop-all.sh
    echo "{Start hadoop}"
    bash start-all.sh
    result=`jps | awk '{print $2}' | xargs`
    if [ "$result" == "$expect_result" ] ;then
        echo "{Congratulations!!! All Java processes were found, hadoop was installed successfully!}"
    else
        echo "{Sorry, failed to find all Java processes, please check manually!}"
    fi
fi
echo "{!!!finish!!!}"
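After the script finishes, it is worth verifying the installation by hand. A minimal check, assuming the layout created by the script above (Hadoop 1.2.1 unpacked under /home/hadoop), could look like this:
# List the running Hadoop daemons; NameNode, DataNode, SecondaryNameNode,
# JobTracker and TaskTracker should all appear.
jps

# Try a simple HDFS operation and a sample job (paths assume the layout above).
cd /home/hadoop/hadoop-1.2.1
bin/hadoop fs -ls /
bin/hadoop jar hadoop-examples-1.2.1.jar pi 2 10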
In addition, to make the Hadoop configuration fully automatic, the core-site.xml, hdfs-site.xml and mapred-site.xml files must be placed in the same directory as the installHadoop file (a usage sketch follows the three files below).
core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
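As a usage sketch (the file names follow the listings above; adjust paths to your own environment), the script is expected to be run as root from a directory that contains all four files:
# Expected layout before running the installer:
#   installHadoop  core-site.xml  hdfs-site.xml  mapred-site.xml
ls

# Run the installer as root
sudo bash installHadoop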
For a standalone (single-machine) Hadoop installation, see:
http://www.linuxidc.com/Linux/2015-04/116447.htm
For a multi-machine Hadoop installation, see:
http://blog.csdn.net/ab198604/article/details/8250461