Cluster planning

192.168.100.231                  db01

192.168.100.232                  db02

192.168.100.233                  db03
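The three hostnames above must resolve on every node. A minimal sketch of the corresponding /etc/hosts entries, assuming name resolution is done via /etc/hosts rather than DNS (append as root on db01, db02 and db03):

192.168.100.231    db01
192.168.100.232    db02
192.168.100.233    db03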

1. Install Java

[root@db01 ~]# vim /etc/profile

Append the following environment variables at the end of the file (do this on all three nodes):

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera

export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH

Verify that Java was installed successfully:

[root@db01 ~]# java -version
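The new variables only take effect in a fresh login shell; to apply them to the current shell and confirm the path, something like the following can be used on each node:

[root@db01 ~]# source /etc/profile
[root@db01 ~]# echo $JAVA_HOME
/usr/java/jdk1.7.0_67-cloudera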

2. Create the hadoop user that will own the software

groupadd hadoop

useradd -g hadoop hadoop

echo "dbking588" | passwd --stdin hadoop

Configure the environment variables for the hadoop user:

export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
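A minimal sketch of where these two lines might live, assuming they are appended to the hadoop user's ~/.bash_profile on every node (they could equally be added to /etc/profile):

$ cat >> ~/.bash_profile <<'EOF'
export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
EOF
$ source ~/.bash_profile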

3. Install Hadoop

# cd /opt/software

# tar -zxvf hadoop-2.5.0.tar.gz -C /opt/cdh-5.3.6/

# chown -R hadoop:hadoop /opt/cdh-5.3.6/hadoop-2.5.0
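Note that the extraction target must already exist; if /opt/cdh-5.3.6 has not been created yet, create it first (the /opt/software staging directory assumed above is handled the same way):

# mkdir -p /opt/cdh-5.3.6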

4. Configure passwordless SSH

--Setup:

$ ssh-keygen -t rsa

$ ssh-copy-id db02

(The ssh-copy-id method only worked with RSA keys here; in testing it did not work for DSA keys.)
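For start-dfs.sh and start-yarn.sh to work later, the hadoop user on db01 and on db02 needs passwordless SSH to all three nodes, including the local node. A minimal sketch, assuming the RSA key has already been generated as above:

[hadoop@db01 ~]$ for h in db01 db02 db03; do ssh-copy-id $h; done

The key generation and the loop are repeated on db02, which later runs start-yarn.sh.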

--Verify:

[hadoop@db01 ~]$ ssh db02 date

Wed Apr 19 09:57:34 CST 2017

5. Edit the Hadoop configuration files

The files to configure are:

HDFS configuration files:

etc/hadoop/hadoop-env.sh

etc/hadoop/core-site.xml

etc/hadoop/hdfs-site.xml

etc/hadoop/slaves

YARN configuration files:

etc/hadoop/yarn-env.sh

etc/hadoop/yarn-site.xml

etc/hadoop/slaves

MapReduce configuration files:

etc/hadoop/mapred-env.sh

etc/hadoop/mapred-site.xml (copy it from mapred-site.xml.template if it does not yet exist)

The contents of each file are shown below:

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- NameNode RPC address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://db01:9000</value>
    </property>
    <!-- Base directory for HDFS/YARN runtime data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/cdh-5.3.6/hadoop-2.5.0/data/tmp</value>
    </property>
    <!-- Keep deleted files in the trash for 7000 minutes (about 5 days) -->
    <property>
        <name>fs.trash.interval</name>
        <value>7000</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run the SecondaryNameNode on db03 -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>db03:50090</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/yarn-site.xml

<?xml version="1.0"?>


<configuration>
    <!-- Enable the MapReduce shuffle service on every NodeManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Run the ResourceManager on db02 -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>db02</value>
    </property>
    <!-- Aggregate container logs and keep them for 600000 seconds (roughly 7 days) -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>600000</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server on db01 -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>db01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>db01:19888</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/slaves

db01

db02

db03

Set JAVA_HOME in the following files:

etc/hadoop/hadoop-env.sh

etc/hadoop/yarn-env.sh

etc/hadoop/mapred-env.sh
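In each of these three scripts, the JAVA_HOME line should point at the JDK installed in step 1, for example:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera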

Create the data directory (the value of hadoop.tmp.dir):

/opt/cdh-5.3.6/hadoop-2.5.0/data/tmp
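A sketch of the remaining preparation on db01, assuming the fully configured installation is then copied to the other two nodes (this assumes /opt/cdh-5.3.6 already exists and is writable by the hadoop user on db02 and db03):

[hadoop@db01 ~]$ mkdir -p /opt/cdh-5.3.6/hadoop-2.5.0/data/tmp
[hadoop@db01 ~]$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 db02:/opt/cdh-5.3.6/
[hadoop@db01 ~]$ scp -r /opt/cdh-5.3.6/hadoop-2.5.0 db03:/opt/cdh-5.3.6/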

6. Format HDFS (run once, on the NameNode db01 only)

[hadoop@db01 hadoop-2.5.0]$ hdfs namenode -format
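If the format succeeds, the NameNode metadata directory appears under hadoop.tmp.dir; a quick sanity check, assuming the default dfs.namenode.name.dir of ${hadoop.tmp.dir}/dfs/name:

[hadoop@db01 hadoop-2.5.0]$ ls data/tmp/dfs/name/current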

7. Start Hadoop

*Option 1: start each daemon individually, node by node (the most common approach; easy to wrap in a shell script)
        hdfs:
            sbin/hadoop-daemon.sh start|stop namenode
            sbin/hadoop-daemon.sh start|stop datanode
            sbin/hadoop-daemon.sh start|stop secondarynamenode
        yarn:
            sbin/yarn-daemon.sh start|stop resourcemanager
            sbin/yarn-daemon.sh start|stop nodemanager
        mapreduce:
            sbin/mr-jobhistory-daemon.sh start|stop historyserver
    *Option 2: start each module as a whole (requires the passwordless SSH configured above; run the HDFS scripts on the NameNode db01 and the YARN scripts on the ResourceManager db02, see the sketch after this list)
        hdfs:
            sbin/start-dfs.sh
            sbin/stop-dfs.sh
        yarn:
            sbin/start-yarn.sh
            sbin/stop-yarn.sh
    *Option 3: start everything at once (not recommended; start-all.sh simply runs start-dfs.sh and start-yarn.sh from a single node, so the ResourceManager ends up on that node rather than on db02 as configured)
            sbin/start-all.sh
            sbin/stop-all.sh
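A possible startup sequence using option 2, matching the role layout implied by the configuration above (NameNode and JobHistory server on db01, ResourceManager on db02, SecondaryNameNode on db03, DataNodes and NodeManagers on all three nodes):

# On db01: start HDFS (NameNode, DataNodes, and the SecondaryNameNode on db03)
[hadoop@db01 hadoop-2.5.0]$ sbin/start-dfs.sh
# On db02: start YARN (ResourceManager and NodeManagers)
[hadoop@db02 hadoop-2.5.0]$ sbin/start-yarn.sh
# Back on db01: start the MapReduce JobHistory server
[hadoop@db01 hadoop-2.5.0]$ sbin/mr-jobhistory-daemon.sh start historyserver
# Verify the expected daemons are running on each node
[hadoop@db01 hadoop-2.5.0]$ jps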

8. Test the cluster

[hadoop@db01 ~]$ cd /opt/cdh-5.3.6/hadoop-2.5.0/share/hadoop/mapreduce/

[hadoop@db01 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.5.0.jar pi 10 10
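If the job completes, it prints an estimated value of Pi. The web interfaces can also be used to confirm that the cluster is healthy; the ports below are the Hadoop 2.x defaults plus the JobHistory address configured earlier:

HDFS NameNode:        http://db01:50070
YARN ResourceManager: http://db02:8088
MapReduce JobHistory: http://db01:19888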
