Environment

192.168.1.101 host101 
192.168.1.102 host102

1. Install and configure host101

  [root@host101 ~]# cat /etc/hosts | grep 192
  192.168.1.101 host101
  192.168.1.102 host102
  [root@host101 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
  [root@host101 ~]# tar -zxvf hadoop-2.6.4.tar.gz
  [root@host101 ~]# mv hadoop-2.6.4 /usr/local/hadoop
  [root@host101 ~]# cd /usr/local/hadoop/
  [root@host101 hadoop]# vim etc/hadoop/hadoop-env.sh
  export JAVA_HOME=/usr/java/latest
  export HADOOP_PREFIX=/usr/local/hadoop
  [root@host101 hadoop]# vim etc/hadoop/slaves
  host101
  host102
  [root@host101 hadoop]# vim etc/hadoop/core-site.xml
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://host101:9000</value>
    </property>
  </configuration>

  [root@host101 hadoop]# mkdir -p /hadoop/
  [root@host101 hadoop]# vim etc/hadoop/hdfs-site.xml
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/hadoop/name/</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/hadoop/data/</value>
    </property>
  </configuration>

  Hadoop 2.6.4 ships only a template for mapred-site.xml, so copy it first:

  [root@host101 hadoop]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
  [root@host101 hadoop]# vim etc/hadoop/mapred-site.xml
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>host101:9001</value>
    </property>
  </configuration>
  [root@host101 ~]# ssh-keygen
  [root@host101 ~]# ssh-copy-id host101
  [root@host101 ~]# ssh-copy-id host102
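All three *-site.xml files above share the same `<configuration>/<property>/<name>/<value>` shape. As an illustrative sketch — not part of the original setup, with property names taken from the files above — a few lines of Python can render a file in that format, which is handy when generating configs for many nodes:

```python
import xml.etree.ElementTree as ET

def render_site_xml(props):
    """Render a dict of Hadoop properties into *-site.xml markup."""
    conf = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(conf, encoding="unicode")

# The same single property configured in core-site.xml above.
core_site = render_site_xml({"fs.defaultFS": "hdfs://host101:9000"})
print(core_site)
```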

2. Install and configure host102

  [root@host102 ~]# scp host101:/root/hadoop-2.6.4.tar.gz .
  [root@host102 ~]# scp host101:/root/jdk-8u91-linux-x64.rpm .
  [root@host102 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
  [root@host102 ~]# tar -zxvf hadoop-2.6.4.tar.gz
  [root@host102 ~]# mv hadoop-2.6.4 /usr/local/hadoop
  [root@host102 ~]# ssh-keygen
  [root@host102 ~]# ssh-copy-id host101
  [root@host102 ~]# ssh-copy-id host102
  [root@host102 ~]# cd /usr/local/hadoop/etc/hadoop/
  [root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/mapred-site.xml .
  [root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/slaves .
  [root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hdfs-site.xml .
  [root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hadoop-env.sh .
  [root@host102 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/core-site.xml .

3. Start the Hadoop cluster

  Before the very first start, format the NameNode, then bring up HDFS and YARN:

  [root@host101 hadoop]# bin/hdfs namenode -format
  [root@host101 hadoop]# sbin/start-all.sh
  This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
  Starting namenodes on [host101]
  host101: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-host101.out
  host101: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host101.out
  host102: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host102.out
  Starting secondary namenodes [0.0.0.0]
  0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-host101.out
  starting yarn daemons
  starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-host101.out
  host101: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-host101.out
  host102: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-host102.out

  [root@host101 hadoop]# bin/hdfs dfs -mkdir /eric
  [root@host101 hadoop]# bin/hdfs dfs -ls /
  Found 1 items
  drwxr-xr-x   - root supergroup          0 2016-07-06 12:09 /eric
  [root@host101 hadoop]# bin/hadoop dfsadmin -report
  DEPRECATED: Use of this script to execute hdfs command is deprecated.
  Instead use the hdfs command for it.

  Configured Capacity: 37576769536 (35.00 GB)
  Present Capacity: 29447094272 (27.42 GB)
  DFS Remaining: 29447086080 (27.42 GB)
  DFS Used: 8192 (8 KB)
  DFS Used%: 0.00%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0

  -------------------------------------------------
  Live datanodes (2):

  Name: 192.168.1.101:50010 (host101)
  Hostname: host101
  Decommission Status : Normal
  Configured Capacity: 18788384768 (17.50 GB)
  DFS Used: 4096 (4 KB)
  Non DFS Used: 3870842880 (3.61 GB)
  DFS Remaining: 14917537792 (13.89 GB)
  DFS Used%: 0.00%
  DFS Remaining%: 79.40%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Jul 06 12:10:07 CST 2016

  Name: 192.168.1.102:50010 (host102)
  Hostname: host102
  Decommission Status : Normal
  Configured Capacity: 18788384768 (17.50 GB)
  DFS Used: 4096 (4 KB)
  Non DFS Used: 4258832384 (3.97 GB)
  DFS Remaining: 14529548288 (13.53 GB)
  DFS Used%: 0.00%
  DFS Remaining%: 77.33%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Jul 06 12:10:07 CST 2016
  [root@host101 hadoop]# jps
  3920 DataNode
  3811 NameNode
  4056 SecondaryNameNode
  4299 Jps
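As a sanity check on the `dfsadmin -report` output above: the per-datanode "DFS Remaining" values should add up to the cluster-wide figure. A small parsing sketch, using the byte counts copied from the report above:

```python
import re

# Excerpt of the dfsadmin -report output above: the first "DFS Remaining"
# is the cluster-wide total, the later ones are per datanode.
report = """
DFS Remaining: 29447086080 (27.42 GB)

Name: 192.168.1.101:50010 (host101)
DFS Remaining: 14917537792 (13.89 GB)

Name: 192.168.1.102:50010 (host102)
DFS Remaining: 14529548288 (13.53 GB)
"""

remaining = [int(m) for m in re.findall(r"DFS Remaining: (\d+)", report)]
cluster, per_node = remaining[0], remaining[1:]
print(sum(per_node) == cluster)  # the two datanodes account for the cluster total
```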

4. Test the cluster

  NameNode:        http://192.168.1.101:50070/dfshealth.html
  ResourceManager: http://192.168.1.101:8088/cluster
  NodeManager:     http://192.168.1.101:8042/node

  [root@host101 hadoop]# bin/hadoop fs -mkdir /eric/input
  [root@host101 hadoop]# bin/hadoop fs -copyFromLocal etc/hadoop/*.xml /eric/input
  [root@host101 hadoop]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep /eric/input /eric/output 'dfs[a-z.]+'
  [root@host101 hadoop]# bin/hadoop fs -ls /eric/output/
  Found 2 items
  -rw-r--r--   1 root supergroup          0 2016-07-06 12:38 /eric/output/_SUCCESS
  -rw-r--r--   1 root supergroup         77 2016-07-06 12:38 /eric/output/part-r-00000
  [root@host101 hadoop]# bin/hadoop fs -cat /eric/output/part-r-00000
  1 dfsadmin
  1 dfs.replication
  1 dfs.namenode.name.dir
  1 dfs.datanode.data.dir
  [root@host101 hadoop]# sbin/stop-all.sh
  This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
  Stopping namenodes on [host101]
  host101: stopping namenode
  host101: stopping datanode
  host102: stopping datanode
  Stopping secondary namenodes [0.0.0.0]
  0.0.0.0: stopping secondarynamenode
  stopping yarn daemons
  stopping resourcemanager
  host101: stopping nodemanager
  host102: no nodemanager to stop
  no proxyserver to stop
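The example jar's grep job counts, for each distinct match of the regex `dfs[a-z.]+`, how many times it occurs in the input files. What it computes can be sketched locally — the sample text below merely stands in for the XML configs uploaded to /eric/input:

```python
import re
from collections import Counter

# Sample lines standing in for the *-site.xml files uploaded above.
text = """
<name>dfs.replication</name>
<name>dfs.namenode.name.dir</name>
<name>dfs.datanode.data.dir</name>
dfsadmin
"""

# Same regex as the MapReduce grep example job; the character class
# stops at '<' and '/', so only the property names themselves match.
counts = Counter(re.findall(r"dfs[a-z.]+", text))
for word, n in counts.most_common():
    print(n, word)
```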

5. Dynamically add a node

  [root@host101 hadoop]# echo "192.168.1.161 host161" >> /etc/hosts
  [root@host102 hadoop]# echo "192.168.1.161 host161" >> /etc/hosts
  [root@host101 hadoop]# ssh-copy-id host161
  [root@host102 hadoop]# ssh-copy-id host161
  [root@host161 ~]# ssh-copy-id host161
  [root@host161 ~]# ssh-copy-id host101
  [root@host161 ~]# ssh-copy-id host102
  [root@host161 ~]# scp host101:/root/hadoop-2.6.4.tar.gz .
  [root@host161 ~]# scp host101:/root/jdk-8u91-linux-x64.rpm .
  [root@host161 ~]# rpm -ivh jdk-8u91-linux-x64.rpm
  [root@host161 ~]# tar -zxvf hadoop-2.6.4.tar.gz
  [root@host161 ~]# mv hadoop-2.6.4 /usr/local/hadoop
  [root@host101 hadoop]# echo 'host161' >> etc/hadoop/slaves
  [root@host102 hadoop]# echo 'host161' >> etc/hadoop/slaves
  [root@host161 ~]# cd /usr/local/hadoop/etc/hadoop/
  [root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/mapred-site.xml .
  [root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/slaves .
  [root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hdfs-site.xml .
  [root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/hadoop-env.sh .
  [root@host161 hadoop]# scp host101:/usr/local/hadoop/etc/hadoop/core-site.xml .
  [root@host161 hadoop]# cd /usr/local/hadoop/
  [root@host161 hadoop]# sbin/hadoop-daemon.sh start datanode
  starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-host161.out

  (host161 now serves HDFS only; to let it run YARN containers as well, also start a NodeManager on it with sbin/yarn-daemon.sh start nodemanager.)

  [root@host101 hadoop]# bin/hadoop dfsadmin -report
  DEPRECATED: Use of this script to execute hdfs command is deprecated.
  Instead use the hdfs command for it.

  Configured Capacity: 56365154304 (52.49 GB)
  Present Capacity: 44354347008 (41.31 GB)
  DFS Remaining: 44192788480 (41.16 GB)
  DFS Used: 161558528 (154.07 MB)
  DFS Used%: 0.36%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0

  -------------------------------------------------
  Live datanodes (3):

  Name: 192.168.1.101:50010 (host101)
  Hostname: host101
  Decommission Status : Normal
  Configured Capacity: 18788384768 (17.50 GB)
  DFS Used: 161546240 (154.06 MB)
  Non DFS Used: 3873861632 (3.61 GB)
  DFS Remaining: 14752976896 (13.74 GB)
  DFS Used%: 0.86%
  DFS Remaining%: 78.52%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Jul 06 16:02:19 CST 2016

  Name: 192.168.1.161:50010 (host161)
  Hostname: host161
  Decommission Status : Normal
  Configured Capacity: 18788384768 (17.50 GB)
  DFS Used: 4096 (4 KB)
  Non DFS Used: 3877494784 (3.61 GB)
  DFS Remaining: 14910885888 (13.89 GB)
  DFS Used%: 0.00%
  DFS Remaining%: 79.36%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Jul 06 16:02:20 CST 2016

  Name: 192.168.1.102:50010 (host102)
  Hostname: host102
  Decommission Status : Normal
  Configured Capacity: 18788384768 (17.50 GB)
  DFS Used: 8192 (8 KB)
  Non DFS Used: 4259450880 (3.97 GB)
  DFS Remaining: 14528925696 (13.53 GB)
  DFS Used%: 0.00%
  DFS Remaining%: 77.33%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Jul 06 16:02:19 CST 2016
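The report above also confirms the capacity arithmetic: every datanode contributes the same 18788384768-byte (17.50 GB) Configured Capacity, so going from two nodes to three scales the cluster total accordingly:

```python
# Per-node Configured Capacity as printed in the reports above.
node_capacity = 18_788_384_768          # 17.50 GB per datanode

two_nodes = 2 * node_capacity           # cluster total before adding host161
three_nodes = 3 * node_capacity         # cluster total after host161 joined

print(two_nodes)    # matches the 37576769536 reported in section 3
print(three_nodes)  # matches the 56365154304 reported after adding the node
```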
