1. Planning

(1) Hardware resources

10.171.29.191 master

10.171.94.155  slave1

10.251.0.197 slave3

(2) Basic information

User: jediael

Directory: /mnt/jediael/



2. Environment Configuration

(1) Create the same user and password on every machine, and grant jediael the right to run any command via sudo

# passwd
# useradd jediael
# passwd jediael
# vi /etc/sudoers

Add the following line:

jediael ALL=(ALL) ALL
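
A quick check that the sudo grant works (a small sketch, not part of the original steps):

# su - jediael
$ sudo -l    # should list the privilege (ALL) ALL for jediael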

(2) Create the directory /mnt/jediael

$ sudo mkdir /mnt/jediael
$ sudo chown jediael:jediael /mnt/jediael

Note: /mnt/jediael must be owned by jediael, otherwise formatting the namenode later will fail with a permissions error.
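
To confirm the ownership:

$ ls -ld /mnt/jediael    # owner and group should both be jediael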



(3) Set the hostname and edit the /etc/hosts file

1. Edit /etc/sysconfig/network

NETWORKING=yes  

HOSTNAME=*******

2. Edit /etc/hosts

10.171.29.191 master

10.171.94.155  slave1

10.251.0.197 slave3

Note: the hosts file must not contain a "127.0.0.1 *****" entry that maps the machine's hostname to loopback; otherwise you will see exceptions such as: org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.171.29.191:9000. Already tried ...

3. Run the hostname command

hostname ****

/etc/sysconfig/network only takes effect at the next boot; running hostname <name> changes it immediately. Run it on each node with that node's own name (master, slave1, slave3).



(4) Configure passwordless SSH login

Run the following commands on master as the jediael user:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa  

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  

Then copy authorized_keys to slave1 and slave3:

scp ~/.ssh/authorized_keys slave1:~/.ssh/
scp ~/.ssh/authorized_keys slave3:~/.ssh/

Notes

(1) If scp complains that the .ssh directory does not exist, the target machine has never used ssh; run ssh once on it and the .ssh directory will be created.

(2) The .ssh/ directory must have permission 700 and authorized_keys permission 600; permissions that are either looser or tighter will make sshd ignore the key.
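
A sketch of the corresponding fix, run on each slave as jediael (commands assumed, not shown in the original write-up):

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys

Back on master, each of these should now log in without prompting for a password:

$ ssh slave1 hostname
$ ssh slave3 hostname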



(5) Install Java on all three machines and set the related environment variables

See http://blog.csdn.net/jediael_lu/article/details/38925871
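
For reference, the environment variables usually amount to the following lines in ~/.bash_profile (the JDK path here matches the one reused in hadoop-env.sh below; adjust to your install):

export JAVA_HOME=/usr/java/jdk1.7.0_51
export PATH=$JAVA_HOME/bin:$PATH

$ source ~/.bash_profile
$ java -version    # should report 1.7.0_51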



(6) Download hadoop-2.6.0.tar.gz and extract it to /mnt/jediael

$ cd /mnt/jediael
$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
$ tar -zxvf hadoop-2.6.0.tar.gz



3. Edit the Configuration Files

[Required on all three machines. Typically you finish the configuration on one machine and then copy it to the others with scp.]
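
For example, after finishing the edits on master (paths per the layout above):

$ scp -r /mnt/jediael/hadoop-2.6.0/etc/hadoop slave1:/mnt/jediael/hadoop-2.6.0/etc/
$ scp -r /mnt/jediael/hadoop-2.6.0/etc/hadoop slave3:/mnt/jediael/hadoop-2.6.0/etc/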

(1) Edit hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_51  



(2) Edit core-site.xml

<property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/tmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
</property>

(3) Edit hdfs-site.xml

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

(4) Edit mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
</property>
<property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>master:50030</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
<property>
    <name>mapred.job.tracker</name>
    <value>http://master:9001</value>
</property>

(mapred.job.tracker and mapreduce.jobtracker.http.address are Hadoop 1.x JobTracker settings; with mapreduce.framework.name set to yarn they are ignored and can be omitted.)

(5) Edit yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>

(6) Edit slaves [no masters file needs to be edited: Hadoop 2.x does not use the 1.x masters file, which only told the start scripts where to launch the SecondaryNameNode]

slaves:

slave1
slave3

4. Start and Verify





1. Format the namenode

[jediael@master hadoop-2.6.0]$ bin/hdfs namenode -format

(In 2.x this is the preferred form; the older bin/hadoop namenode -format still works but prints a deprecation warning.)





2. Start Hadoop [this step only needs to be run on master]

[jediael@master hadoop-2.6.0]$ sbin/start-all.sh
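
start-all.sh is itself deprecated in 2.x; the explicit equivalent is:

[jediael@master hadoop-2.6.0]$ sbin/start-dfs.sh
[jediael@master hadoop-2.6.0]$ sbin/start-yarn.sh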



3. Verification 1: write something into HDFS

[jediael@master hadoop-2.6.0]$ bin/hadoop fs -ls /

[jediael@master hadoop-2.6.0]$ bin/hadoop fs -mkdir /test

[jediael@master hadoop-2.6.0]$ bin/hadoop fs -ls /       

Found 1 items

drwxr-xr-x   - jediael supergroup          0 2015-04-19 23:41 /test



4. Verification 2: the web UIs

NameNode           http://ip:50070
ResourceManager    http://ip:8088   (per yarn.resourcemanager.webapp.address above)



5. Check the Java processes on each host

(1)master:

$ jps

3694 NameNode

3882 SecondaryNameNode

7216 Jps

4024 ResourceManager

(2)slave1:

$ jps

1913 NodeManager

2673 Jps

1801 DataNode

(3)slave3:

$ jps

1942 NodeManager

2252 Jps

1840 DataNode
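
Besides jps, the datanode registrations can be confirmed from master (an extra check, not in the original steps):

$ bin/hdfs dfsadmin -report    # should report two live datanodes (slave1 and slave3)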



5. Run a Complete MapReduce Job: the Bundled wordcount Example



$ bin/hadoop fs -mkdir /input

$ bin/hadoop fs -ls /        

Found 2 items

drwxr-xr-x   - jediael supergroup          0 2015-04-20 18:04 /input

drwxr-xr-x   - jediael supergroup          0 2015-04-19 23:41 /test

$ bin/hadoop fs -copyFromLocal etc/hadoop/mapred-site.xml.template /input

$ pwd

/mnt/jediael/hadoop-2.6.0/share/hadoop/mapreduce

$ /mnt/jediael/hadoop-2.6.0/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output

15/04/20 18:15:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

15/04/20 18:15:48 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id

15/04/20 18:15:48 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

15/04/20 18:15:49 INFO input.FileInputFormat: Total input paths to process : 1

15/04/20 18:15:49 INFO mapreduce.JobSubmitter: number of splits:1

15/04/20 18:15:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local657082309_0001

15/04/20 18:15:50 INFO mapreduce.Job: The url to track the job: http://localhost:8080/

15/04/20 18:15:50 INFO mapreduce.Job: Running job: job_local657082309_0001

15/04/20 18:15:50 INFO mapred.LocalJobRunner: OutputCommitter set in config null

15/04/20 18:15:50 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

15/04/20 18:15:50 INFO mapred.LocalJobRunner: Waiting for map tasks

15/04/20 18:15:50 INFO mapred.LocalJobRunner: Starting task: attempt_local657082309_0001_m_000000_0

15/04/20 18:15:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]

15/04/20 18:15:50 INFO mapred.MapTask: Processing split: hdfs://master:9000/input/mapred-site.xml.template:0+2268

15/04/20 18:15:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)

15/04/20 18:15:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100

15/04/20 18:15:51 INFO mapred.MapTask: soft limit at 83886080

15/04/20 18:15:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600

15/04/20 18:15:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600

15/04/20 18:15:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer

15/04/20 18:15:51 INFO mapred.LocalJobRunner:

15/04/20 18:15:51 INFO mapred.MapTask: Starting flush of map output

15/04/20 18:15:51 INFO mapred.MapTask: Spilling map output

15/04/20 18:15:51 INFO mapred.MapTask: bufstart = 0; bufend = 1698; bufvoid = 104857600

15/04/20 18:15:51 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213916(104855664); length = 481/6553600

15/04/20 18:15:51 INFO mapred.MapTask: Finished spill 0

15/04/20 18:15:51 INFO mapred.Task: Task:attempt_local657082309_0001_m_000000_0 is done. And is in the process of committing

15/04/20 18:15:51 INFO mapred.LocalJobRunner: map

15/04/20 18:15:51 INFO mapred.Task: Task 'attempt_local657082309_0001_m_000000_0' done.

15/04/20 18:15:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local657082309_0001_m_000000_0

15/04/20 18:15:51 INFO mapred.LocalJobRunner: map task executor complete.

15/04/20 18:15:51 INFO mapred.LocalJobRunner: Waiting for reduce tasks

15/04/20 18:15:51 INFO mapred.LocalJobRunner: Starting task: attempt_local657082309_0001_r_000000_0

15/04/20 18:15:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]

15/04/20 18:15:51 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@39be5e01

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10

15/04/20 18:15:51 INFO reduce.EventFetcher: attempt_local657082309_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events

15/04/20 18:15:51 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local657082309_0001_m_000000_0 decomp: 1566 len: 1570 to MEMORY

15/04/20 18:15:51 INFO reduce.InMemoryMapOutput: Read 1566 bytes from map-output for attempt_local657082309_0001_m_000000_0

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1566, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1566

15/04/20 18:15:51 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning

15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs

15/04/20 18:15:51 INFO mapred.Merger: Merging 1 sorted segments

15/04/20 18:15:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 1560 bytes

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merged 1 segments, 1566 bytes to disk to satisfy reduce memory limit

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merging 1 files, 1570 bytes from disk

15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce

15/04/20 18:15:51 INFO mapred.Merger: Merging 1 sorted segments

15/04/20 18:15:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 1560 bytes

15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.

15/04/20 18:15:51 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords

15/04/20 18:15:51 INFO mapreduce.Job: Job job_local657082309_0001 running in uber mode : false

15/04/20 18:15:51 INFO mapreduce.Job:  map 100% reduce 0%

15/04/20 18:15:51 INFO mapred.Task: Task:attempt_local657082309_0001_r_000000_0 is done. And is in the process of committing

15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.

15/04/20 18:15:51 INFO mapred.Task: Task attempt_local657082309_0001_r_000000_0 is allowed to commit now

15/04/20 18:15:51 INFO output.FileOutputCommitter: Saved output of task 'attempt_local657082309_0001_r_000000_0' to hdfs://master:9000/output/_temporary/0/task_local657082309_0001_r_000000

15/04/20 18:15:51 INFO mapred.LocalJobRunner: reduce > reduce

15/04/20 18:15:51 INFO mapred.Task: Task 'attempt_local657082309_0001_r_000000_0' done.

15/04/20 18:15:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local657082309_0001_r_000000_0

15/04/20 18:15:51 INFO mapred.LocalJobRunner: reduce task executor complete.

15/04/20 18:15:52 INFO mapreduce.Job:  map 100% reduce 100%

15/04/20 18:15:52 INFO mapreduce.Job: Job job_local657082309_0001 completed successfully

15/04/20 18:15:52 INFO mapreduce.Job: Counters: 38

        File System Counters

                FILE: Number of bytes read=544164

                FILE: Number of bytes written=1040966

                FILE: Number of read operations=0

                FILE: Number of large read operations=0

                FILE: Number of write operations=0

                HDFS: Number of bytes read=4536

                HDFS: Number of bytes written=1196

                HDFS: Number of read operations=15

                HDFS: Number of large read operations=0

                HDFS: Number of write operations=4

        Map-Reduce Framework

                Map input records=43

                Map output records=121

                Map output bytes=1698

                Map output materialized bytes=1570

                Input split bytes=114

                Combine input records=121

                Combine output records=92

                Reduce input groups=92

                Reduce shuffle bytes=1570

                Reduce input records=92

                Reduce output records=92

                Spilled Records=184

                Shuffled Maps =1

                Failed Shuffles=0

                Merged Map outputs=1

                GC time elapsed (ms)=123

                CPU time spent (ms)=0

                Physical memory (bytes) snapshot=0

                Virtual memory (bytes) snapshot=0

                Total committed heap usage (bytes)=269361152

        Shuffle Errors

                BAD_ID=0

                CONNECTION=0

                IO_ERROR=0

                WRONG_LENGTH=0

                WRONG_MAP=0

                WRONG_REDUCE=0

        File Input Format Counters

                Bytes Read=2268

        File Output Format Counters

$ /mnt/jediael/hadoop-2.6.0/bin/hadoop fs -cat /output/*
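
Two remarks on this run. First, the job id job_local657082309_0001 and the LocalJobRunner lines show the job actually ran in the local runner rather than on YARN — typically a sign that the client did not load mapreduce.framework.name=yarn, for example because the settings were written to mapred-site.xml.template instead of mapred-site.xml. Second, /output must not exist when the job is submitted, so to rerun it (from the share/hadoop/mapreduce directory, as in the original invocation):

$ /mnt/jediael/hadoop-2.6.0/bin/hadoop fs -rm -r /output
$ /mnt/jediael/hadoop-2.6.0/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output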
