Hadoop in Action: Distributed Mode
Environment
192.168.1.101 host101
192.168.1.102 host102
1. Install and configure host101
[root@host101 ~]# cat /etc/hosts |grep 192
192.168.1.101 host101
192.168.1.102 host102
[root@host101 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
[root@host101 ~]# tar -zxvf hadoop-2.6.4.tar.gz
[root@host101 ~]# mv hadoop-2.6.4 /usr/local/hadoop
[root@host101 ~]# cd /usr/local/hadoop/
[root@host101 hadoop]# vim etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/latest
export HADOOP_PREFIX=/usr/local/hadoop
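The JAVA_HOME value above relies on the /usr/java/latest symlink that the Oracle JDK RPM creates; point it at the real JDK path if your layout differs. As an optional convenience (not part of the original steps), the Hadoop binaries can also be put on the PATH so the bin/ and sbin/ prefixes are not needed:
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin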
[root@host101 hadoop]# vim etc/hadoop/slaves
host101
host102
[root@host101 hadoop]# vim etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://host101:9000</value>
</property>
</configuration>
[root@host101 hadoop]# mkdir -p /hadoop/
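As a quick sanity check (not shown in the original transcript), the effective NameNode address can be read back with hdfs getconf; it should print hdfs://host101:9000:
[root@host101 hadoop]# bin/hdfs getconf -confKey fs.defaultFS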
[root@host101 hadoop]# vim etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoop/name/</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoop/data/</value>
</property>
</configuration>
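Note that dfs.replication is 1, so every block has exactly one copy even though two DataNodes will join the cluster, and losing either node loses data. If redundancy is wanted, a sketch of the alternative (an assumption, not the author's setting):
<property>
<name>dfs.replication</name>
<value>2</value>
</property>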
[root@host101 hadoop]# vim etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>host101:9001</value>
</property>
</configuration>
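mapred.job.tracker is the Hadoop 1.x JobTracker property and is not used by MRv2; without mapreduce.framework.name set to yarn, MapReduce jobs fall back to the local runner even though the YARN daemons are up. If the intent is to run jobs on YARN, the usual addition (not in the original config) is:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>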
[root@host101 ~]# ssh-keygen
[root@host101 ~]# ssh-copy-id host101
[root@host101 ~]# ssh-copy-id host102
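If the keys were copied correctly, ssh should no longer prompt for a password; a quick check (not in the original transcript) is to run a remote command and confirm nothing asks for a password:
[root@host101 ~]# ssh host102 hostname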
2. Install and configure host102
[root@host102 ~]# scp host101:/root/hadoop-2.6.4.tar.gz .
[root@host102 ~]# scp host101:/root/jdk-8u91-linux-x64.rpm .
[root@host102 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
[root@host102 ~]# tar -zxvf hadoop-2.6.4.tar.gz
[root@host102 ~]# mv hadoop-2.6.4 /usr/local/hadoop
[root@host102 ~]# ssh-keygen
[root@host102 ~]# ssh-copy-id host101
[root@host102 ~]# ssh-copy-id host102
[root@host102 etc]# cd /usr/local/hadoop/etc/hadoop/
[root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/mapred-site.xml .
[root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/slaves .
[root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hdfs-site.xml .
[root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hadoop-env.sh .
[root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/core-site.xml .
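The directories configured in hdfs-site.xml should also exist on host102, mirroring the mkdir done on host101:
[root@host102 ~]# mkdir -p /hadoop/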
3. Start the Hadoop cluster
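One step is missing from the transcript: on the very first start the NameNode metadata directory must be formatted, once, on host101:
[root@host101 hadoop]# bin/hdfs namenode -format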
[root@host101 hadoop]# sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [host101]
host101: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-host101.out
host101: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host101.out
host102: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host102.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-host101.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-host101.out
host101: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-host101.out
host102: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-host102.out
[root@host101 hadoop]# bin/hdfs dfs -mkdir /eric
[root@host101 hadoop]# bin/hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2016-07-06 12:09 /eric
[root@host101 hadoop]# bin/hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 37576769536 (35.00 GB)
Present Capacity: 29447094272 (27.42 GB)
DFS Remaining: 29447086080 (27.42 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: 192.168.1.101:50010 (host101)
Hostname: host101
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 3870842880 (3.61 GB)
DFS Remaining: 14917537792 (13.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.40%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jul 06 12:10:07 CST 2016
Name: 192.168.1.102:50010 (host102)
Hostname: host102
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4258832384 (3.97 GB)
DFS Remaining: 14529548288 (13.53 GB)
DFS Used%: 0.00%
DFS Remaining%: 77.33%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jul 06 12:10:07 CST 2016
[root@host101 hadoop]# jps
3920 DataNode
3811 NameNode
4056 SecondaryNameNode
4299 Jps
4. Test the cluster
NameNode http://192.168.1.101:50070/dfshealth.html
ResourceManager http://192.168.1.101:8088/cluster
NodeManager http://192.168.1.101:8042/node
[root@host101 hadoop]# bin/hadoop fs -mkdir /eric/input
[root@host101 hadoop]# bin/hadoop fs -copyFromLocal etc/hadoop/*.xml /eric/input
[root@host101 hadoop]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep /eric/input /eric/output 'dfs[a-z.]+'
[root@host101 hadoop]# bin/hadoop fs -ls /eric/output/
Found 2 items
-rw-r--r-- 1 root supergroup 0 2016-07-06 12:38 /eric/output/_SUCCESS
-rw-r--r-- 1 root supergroup 77 2016-07-06 12:38 /eric/output/part-r-00000
[root@host101 hadoop]# bin/hadoop fs -cat /eric/output/part-r-00000
1 dfsadmin
1 dfs.replication
1 dfs.namenode.name.dir
1 dfs.datanode.data.dir
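MapReduce will not overwrite an existing output directory, so re-running the example first requires removing /eric/output:
[root@host101 hadoop]# bin/hadoop fs -rm -r /eric/output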
[root@host101 hadoop]# sbin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [host101]
host101: stopping namenode
host101: stopping datanode
host102: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
host101: stopping nodemanager
host102: no nodemanager to stop
no proxyserver to stop
5. Dynamically add a node
[root@host101 hadoop]# echo "192.168.1.161 host161" >> /etc/hosts
[root@host102 hadoop]# echo "192.168.1.161 host161" >> /etc/hosts
[root@host101 hadoop]# ssh-copy-id host161
[root@host102 hadoop]# ssh-copy-id host161
[root@host161 ~]# ssh-copy-id host161
[root@host161 ~]# ssh-copy-id host101
[root@host161 ~]# ssh-copy-id host102
[root@host161 ~]# scp host101:/root/hadoop-2.6.4.tar.gz .
[root@host161 ~]# scp host101:/root/jdk-8u91-linux-x64.rpm .
[root@host161 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
[root@host161 ~]# tar -zxvf hadoop-2.6.4.tar.gz
[root@host161 ~]# mv hadoop-2.6.4 /usr/local/hadoop
[root@host101 hadoop]# echo 'host161' >> etc/hadoop/slaves
[root@host102 hadoop]# echo 'host161' >> etc/hadoop/slaves
[root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/mapred-site.xml .
[root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/slaves .
[root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hdfs-site.xml .
[root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hadoop-env.sh .
[root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/core-site.xml .
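As on the other nodes, the /hadoop directory referenced by hdfs-site.xml should exist on host161 before the DataNode starts, mirroring the earlier mkdir:
[root@host161 ~]# mkdir -p /hadoop/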
[root@host161 hadoop]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host161.out
[root@host101 hadoop]# bin/hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 56365154304 (52.49 GB)
Present Capacity: 44354347008 (41.31 GB)
DFS Remaining: 44192788480 (41.16 GB)
DFS Used: 161558528 (154.07 MB)
DFS Used%: 0.36%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (3):
Name: 192.168.1.101:50010 (host101)
Hostname: host101
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 161546240 (154.06 MB)
Non DFS Used: 3873861632 (3.61 GB)
DFS Remaining: 14752976896 (13.74 GB)
DFS Used%: 0.86%
DFS Remaining%: 78.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jul 06 16:02:19 CST 2016
Name: 192.168.1.161:50010 (host161)
Hostname: host161
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 3877494784 (3.61 GB)
DFS Remaining: 14910885888 (13.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.36%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jul 06 16:02:20 CST 2016
Name: 192.168.1.102:50010 (host102)
Hostname: host102
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 4259450880 (3.97 GB)
DFS Remaining: 14528925696 (13.53 GB)
DFS Used%: 0.00%
DFS Remaining%: 77.33%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jul 06 16:02:19 CST 2016
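The new DataNode joins empty, so existing blocks stay where they are; to spread data across all three nodes, the HDFS balancer can be run (optional, not shown in the original):
[root@host101 hadoop]# sbin/start-balancer.sh
If host161 should also run YARN containers, a NodeManager can be started on it the same way the DataNode was (an assumption; the transcript only adds HDFS storage):
[root@host161 hadoop]# sbin/yarn-daemon.sh start nodemanager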