【Manual check: does every pair of nodes have bidirectional passwordless SSH?】

Understand the communication mechanism and the cluster's fault tolerance.

Set up bidirectional passwordless SSH between every pair of nodes; the key files live under the home directory (~/.ssh) by default.

【Once that is done, install and configure Hadoop on any one node, then scp the entire installation directory to every other node: the file contents on all nodes are identical!】
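A quick way to exercise the whole mesh instead of checking pairs by hand is a nested loop over the hostnames (a sketch, assuming the bigdata-server-01..03 hostnames used below and that the current user holds the keys):

#!/bin/bash
# Check passwordless SSH for every ordered pair of nodes (both directions).
# BatchMode=yes fails immediately instead of falling back to a password prompt.
nodes=(bigdata-server-01 bigdata-server-02 bigdata-server-03)
for src in "${nodes[@]}"; do
  for dst in "${nodes[@]}"; do
    if ssh -o BatchMode=yes "$src" "ssh -o BatchMode=yes $dst hostname" >/dev/null 2>&1; then
      echo "OK   $src -> $dst"
    else
      echo "FAIL $src -> $dst"
    fi
  done
done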

  1. [hadoop@bigdata-server-03 ~]$ jps
  2. 9217 SecondaryNameNode
  3. 9730 Jps
  4. 9379 ResourceManager
  5. 9497 NodeManager
  6. 8895 NameNode
  7. 9039 DataNode
  8. [hadoop@bigdata-server-01 ~]$ ssh bigdata-server-01
  9. Last login: Sat Nov 25 23:13:06 2017 from 120.178.18.4
  10. [hadoop@bigdata-server-01 ~]$ jps
  11. 19035 Jps
  12. 18670 DataNode
  13. [hadoop@bigdata-server-01 ~]$ ssh bigdata-server-02
  14. Last login: Sat Nov 25 23:14:03 2017 from 120.0.0.1
  15. [hadoop@bigdata-server-01 ~]$ jps
  16. 19035 Jps
  17. 18670 DataNode

  

  1. BASE
  2. https://stackoverflow.com/questions/26346277/scp-files-from-local-to-remote-machine-error-no-such-file-or-directory
  3. http://www.tldp.org/LDP/lame/LAME/linux-admin-made-easy/removing-user-accounts.html
  4.  
  5. $ whoami
  6. user1
  7. $ su - user2
  8. Password:
  9. $ whoami
  10. user2
  11. $ exit
  12. logout
  13.  
  14. http://www.binarytides.com/linux-command-shutdown-reboot-restart-system/
  15.  
  16. [2.9.0]
  17. http://www.server-world.info/en/note?os=CentOS_7&p=hadoop
  18. http://www.server-world.info/en/note?os=CentOS_7&p=jdk8
  19. http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_Environment_of_Hadoop_Daemons
  20. http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
  21.  
  22. http://www.codecoffee.com/tipsforlinux/articles/22.html
  23. the directory size
  24. du ~ -s

3-node Spark-on-YARN configuration
【Passwordless-SSH module】
su - hadoop;

ssh-keygen -t rsa;

cd /usr/hadoop/.ssh/;

cat /etc/hosts;

ssh bigdata-server-01;
ssh bigdata-server-02;
ssh bigdata-server-03;
# passwd hadoop;
ssh-copy-id bigdata-server-01;
ssh-copy-id bigdata-server-02;
ssh-copy-id bigdata-server-03;

cat /etc/ssh/sshd_config

Read through the configuration.
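The options that matter for key-based login can be pulled straight out of that file (a sketch; option names as documented in sshd_config(5), commented defaults included):

# Show the sshd options that govern public-key login
grep -Ei '^#? ?(PubkeyAuthentication|AuthorizedKeysFile|PasswordAuthentication|PermitRootLogin)' /etc/ssh/sshd_config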

【Install Spark】

【Install and configure Hadoop】


[root@bigdata-server-02 ~]# cd /usr/local/hadoop
[root@bigdata-server-02 hadoop]# ll -as
total 208
4 drwxr-xr-x 9 bigdata bigdata 4096 Dec 9 03:42 .
4 drwxr-xr-x. 18 root root 4096 Dec 19 21:40 ..
4 drwxr-xr-x 2 bigdata bigdata 4096 Dec 9 03:42 bin
4 drwxr-xr-x 3 bigdata bigdata 4096 Dec 9 03:17 etc
4 drwxr-xr-x 2 bigdata bigdata 4096 Dec 9 03:42 include
4 drwxr-xr-x 3 bigdata bigdata 4096 Dec 9 03:42 lib
4 drwxr-xr-x 4 bigdata bigdata 4096 Dec 9 03:42 libexec
144 -rw-r--r-- 1 bigdata bigdata 147066 Nov 15 03:19 LICENSE.txt
24 -rw-r--r-- 1 bigdata bigdata 20891 Nov 15 03:19 NOTICE.txt
4 -rw-r--r-- 1 bigdata bigdata 1366 Jul 9 2016 README.txt
4 drwxr-xr-x 3 bigdata bigdata 4096 Dec 9 03:17 sbin
4 drwxr-xr-x 4 bigdata bigdata 4096 Dec 9 03:53 share
[root@bigdata-server-02 hadoop]# pwd
/usr/local/hadoop
[root@bigdata-server-02 hadoop]# mkdir mydatanode
[root@bigdata-server-02 hadoop]# mkdir mynamenode
[root@bigdata-server-02 hadoop]#

# add into <configuration> - </configuration> section
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop/mydatanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop/mynamenode</value>
</property>
</configuration>
vi etc/hadoop/hdfs-site.xml;

# add into <configuration> - </configuration> section
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata-server-02:8080/</value>
</property>
</configuration>
#Problem binding to [bigdata-server-01:9000] java.net.BindException: Cannot assign requested address;
#9001

vi etc/hadoop/core-site.xml;
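The BindException noted above ("Cannot assign requested address") usually means the hostname used in fs.defaultFS resolves to an IP that is not configured on this machine, so it is worth checking name resolution and port usage before switching from 9000 to 9001 (a sketch; bigdata-server-02 is the hostname assumed in the config above):

# What the fs.defaultFS hostname resolves to
getent hosts bigdata-server-02
# Addresses actually present on this host; the resolved IP must be one of them
ip addr show | grep 'inet '
# Ports already in use (8080/9000/9001 are the candidates mentioned above)
ss -tlnp | grep -E ':(8080|9000|9001) '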

# create new
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

vi etc/hadoop/mapred-site.xml;

# add into <configuration> - </configuration> section

<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata-server-02</value>
</property>
<property>
<name>yarn.nodemanager.hostname</name>
<value>bigdata-server-02</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

vi etc/hadoop/yarn-site.xml;

scp hadoop-3.0.0 root@bigdata-server-01:/usr/local;scp hadoop-3.0.0 root@bigdata-server-03:/usr/local;
【Compress, copy with scp, then ssh in to extract and create the symlink (a plain scp of the directory above would have needed -r)】
tar -cf hadoop-3.0.0.mycom.tar hadoop-3.0.0;
scp hadoop-3.0.0.mycom.tar root@bigdata-server-01:/usr/local;scp hadoop-3.0.0.mycom.tar root@bigdata-server-03:/usr/local;

ssh bigdata-server-03 "cd /usr/local;tar -xvf hadoop-3.0.0.mycom.tar;ln -s hadoop-3.0.0 hadoop;";
ssh bigdata-server-01 "cd /usr/local;tar -xvf hadoop-3.0.0.mycom.tar;ln -s hadoop-3.0.0 hadoop;";

Hadoop- datanode and node manager not running

https://stackoverflow.com/questions/32753218/yarn-do-we-need-nodemanager-on-namenode

Startup mechanism: PID files

Where the daemons store their state

Stale state from earlier runs causes conflicts
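The hadoop-daemon.sh and yarn-daemon.sh scripts record each daemon's PID under HADOOP_PID_DIR / YARN_PID_DIR, which default to /tmp; listing those files shows what a previous run left behind (a sketch; exact file names vary with the user and daemon):

# PID files left in the default PID directory (/tmp)
ls -l /tmp/hadoop-*.pid /tmp/yarn-*.pid 2>/dev/null
# Each file holds a daemon PID; the start scripts refuse to start a second copy
# while the recorded process is still alive
for f in /tmp/hadoop-*.pid /tmp/yarn-*.pid; do
  [ -f "$f" ] && echo "$f -> $(cat "$f")"
done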

[root@bigdata-server-02 hadoop]# rm -rf mynamenode/*;rm -rf mydatanode/*;rm -rf /tmp/*hadoop*;rm -rf /tmp/*yarn*; rm -rf /tmp/*pid;
[root@bigdata-server-02 hadoop]# ssh bigdata-server-01 'cd /usr/local/hadoop;rm -rf mynamenode/*;rm -rf mydatanode/*;rm -rf /tmp/*hadoop*;rm -rf /tmp/*yarn*; rm -rf /tmp/*pid;';
[root@bigdata-server-02 hadoop]# ssh bigdata-server-03 'cd /usr/local/hadoop;rm -rf mynamenode/*;rm -rf mydatanode/*;rm -rf /tmp/*hadoop*;rm -rf /tmp/*yarn*; rm -rf /tmp/*pid;';
[root@bigdata-server-02 hadoop]#

ssh bigdata-server-01 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*';

4 -rwxr-xr-x 1 root root 349 Dec 25 23:12 root_rm_logs_mydn-nn_roottmp.sh
4 -rwxr-xr-x 1 root root 349 Dec 25 23:14 root_rm_mydn-nn_roottmp.sh
4 drwxr-xr-x 2 20415 101 4096 Dec 16 09:12 sbin
4 drwxr-xr-x 4 20415 101 4096 Dec 16 09:12 share
[root@bigdata-server-02 hadoop]# cat root_rm_logs_mydn-nn_roottmp.sh
ssh bigdata-server-01 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';
ssh bigdata-server-02 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';
ssh bigdata-server-03 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';

A further improvement: use a for loop to shorten this (a sketch follows).
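A minimal sketch of that loop, covering the same hosts and paths as root_rm_logs_mydn-nn_roottmp.sh:

#!/bin/bash
# Wipe HDFS data dirs, /tmp state and logs on every node in one loop
for h in bigdata-server-01 bigdata-server-02 bigdata-server-03; do
  ssh "$h" 'cd /usr/local/hadoop && rm -rf {mydatanode,mynamenode}/* /tmp/* logs/*'
done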

Change the password

passwd

192.168.3.102
192.168.3.103
root 123

192.168.2.40
root 123

2.40-->3.101 ifconfig enp2s0 192.168.3.101 netmask 255.255.254.0

【Analyzing ssh-keygen】

  1. [root@hadoop3 ~]# rm -rf /root/.ssh
  2. [root@hadoop3 ~]# ssh-keygen -t rsa;
  3. Generating public/private rsa key pair.
  4. Enter file in which to save the key (/root/.ssh/id_rsa):
  5. Created directory '/root/.ssh'.
  6. Enter passphrase (empty for no passphrase):
  7. Enter same passphrase again:
  8. Your identification has been saved in /root/.ssh/id_rsa.
  9. Your public key has been saved in /root/.ssh/id_rsa.pub.
  10. The key fingerprint is:
  11. SHA256:CCj5BZDUWU5pNK0kvlJz5VYnbjLYuFwnsMvqWrlgA/Y root@hadoop3
  12. The key's randomart image is:
  13. +---[RSA 2048]----+
  14. |o+o +=o |
  15. |.. *o=.o o . |
  16. |o o *.X o o |
  17. | o = B X + |
  18. |..o * * S |
  19. |o....= |
  20. | +.E. |
  21. |. +.. |
  22. | .oo |
  23. +----[SHA256]-----+
  24. [root@hadoop3 ~]# ll -as /root/.ssh/
  25. total 12
  26. 0 drwx------ 2 root root 38 Jul 12 11:47 .
  27. 4 dr-xr-x---. 28 root root 4096 Jul 12 11:47 ..
  28. 4 -rw------- 1 root root 1679 Jul 12 11:47 id_rsa
  29. 4 -rw-r--r-- 1 root root 394 Jul 12 11:47 id_rsa.pub
  30. [root@hadoop3 ~]#

  

ssh-keygen generated two files: id_rsa (the private key) and id_rsa.pub (the public key).

  1. [root@hadoop3 ~]# cat /root/.ssh/id_rsa
  2. -----BEGIN RSA PRIVATE KEY-----
  3. MIIEpAIBAAKCAQEAvKqnSF+sxETyn+xHeF1KUZygmkcWU5eDTAkbSPOjRa8CGK6G
  4. g5UkayNVdyf/hiHc+PWG5DzLfmvkU4CdylL792U80+lhpFJSZ3spd4lgh8c20mly
  5. AgzJ5pl/kYaAz5VkF7uMJWX61g46NDWSCO2ruZLuWEMkytTh2RR9Pjjykp80e5mD
  6. HzGByubpL3uo1iHtfq7cHlMlsiXBf94xdinICJum0SVg9usLrj1X1ASzCZ9dgG3h
  7. ICuBFli7d/POvu8NCOIUmA2tPfgmeb0RXHJGDGSldpeg08+zMd6Emngn8zmu9Nxv
  8. 8EvmCPZTOxuQ9dr40bumHeMhufZUEYF6LsaZJQIDAQABAoIBAQCFsYSTQ8Eg4B7y
  9. drP6tlkY1h301ZUbrU1MT1O3cXbsxWR96wbFLaW+Ci7hHkXzXgHBpfNtvysQrhIB
  10. ni2ylvWYTXQ6UrJviCp+zAcJfx8ZeHD/z9sLWtakA3gjvqV+9EUWkD9yrP6AO1rB
  11. bojKrOk4uscNYp8q4Ioek2dg9Wfnv5Qaj6Mk7ASOEB0zviSIv/hgYcKZ75oZTv1E
  12. L/j2tj4Xzyzh0N9wez6ZmnTvT2TdAWZypnLSJIWDT7lskaWv3nH0CAhFG5boPriF
  13. FqpVA4aR0Q9TNbfIQLtv/zHRUUDhsJNn4q4bH+snDWTl9R+yXX2bljK5IM2NaIQH
  14. yOQ0+3xhAoGBAODcnJrTDFDVlXUaSKD9KiopJBD63b6reiMaCd/7/hKoxytTGc/o
  15. x9AK3+5vkjizbB/dbGNsyrXBOTrHaK2Jd1EY3CSTm/r4KNBox+4POZZdboAXwP3u
  16. HiBkvmosc4tYLZ7v63xjedQGFQLkb7wc0XZOClZ1pxCFkN2UKI1ATDRpAoGBANbK
  17. 6tyNkhoevW2lKJPiZvgJNixW/h1Tfu1Bu6kdWMdZkm3qWoTWffMYn6xFDWsrhDQ3
  18. QCXFBXTIoMIacl3i3TZb9JriL34jOjNHWkVi6z1ghcO2Oc3M2LfvILMY+Qd2mMgm
  19. 6HC4dj5nDEQ4biOOZgvPhC9ocraNdZbx37EmjTddAoGAIuhcr3RgDxR5NUq1R7jF
  20. mPH2FWS8k+MO/PAH5Gu8T60/7ivib/JVQqjNhrhvXLoN6Qx4zR6QgZLTjZpzV61l
  21. hoNzeYIoztdDjscVcpGOgRdUFjKZ1VHn/2NkZBsufM1dl7TrO849lXq0PFS2O9/F
  22. bLZEyJNPMjNp/9wGR5dZvTECgYEAh+y3fcT1PSRQ2c8Xg6ZVZQdnSd3vR52sB/Z+
  23. DEIvCVBssrQIfmHCKJFfkkPMfxJ1whlotb4des7vtIXJ9BH5zUmZ3F3gkiE21naD
  24. 8L7tgNTRMY3ivJKyXousFMpr5UYu3xKIK7T/1vOdNprDUCrv9u9mhh3B4jZYwKHl
  25. 3hQ4b10CgYADSt/7N9GEWwVGljcxqBLH/NVZBKMW4gv5pLL+IY5Rqzk97a3Zc0TE
  26. 46HDQORgPTmRquBHOO46sJaoF/lH/E2yK+7ggLsWpQg36L/QCAZj/4JH24M3W+sH
  27. tz2MlgKckSAxjlwTtP7+dom3uTIo6sih+sRrIWHwzI0CmPmPe/QXjA==
  28. -----END RSA PRIVATE KEY-----
  29. [root@hadoop3 ~]# cat /root/.ssh/id_rsa.pub
  30. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8qqdIX6zERPKf7Ed4XUpRnKCaRxZTl4NMCRtI86NFrwIYroaDlSRrI1V3J/+GIdz49YbkPMt+a+RTgJ3KUvv3ZTzT6WGkUlJneyl3iWCHxzbSaXICDMnmmX+RhoDPlWQXu4wlZfrWDjo0NZII7au5ku5YQyTK1OHZFH0+OPKSnzR7mYMfMYHK5ukve6jWIe1+rtweUyWyJcF/3jF2KcgIm6bRJWD26wuuPVfUBLMJn12AbeEgK4EWWLt3886+7w0I4hSYDa09+CZ5vRFcckYMZKV2l6DTz7Mx3oSaeCfzOa703G/wS+YI9lM7G5D12vjRu6Yd4yG59lQRgXouxpkl root@hadoop3
  31. [root@hadoop3 ~]#

  

  1. [root@hadoop1 ~]# cat /root/.ssh/id_rsa.pub
  2. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/mSwq7gdDusAiW6gaA8ZAlCWOb9aCv4Bz/2L5JjpWwFPkZUQNpOtnfGRxi4X24Wrk4kq4Boj/mQ3U3sXIpeNz+ZyNVe3OPE3qKVDB2jve5pzeeM7qtWh4Ock30NzpznZFLeY9a+Ic8vDeXPPAEkgibgutibEqWyXuomSCIZGpPlh+HveY6Dtc/oRaoTEfxyLJS0FqyvzRdynCHgwiavbKFyzfL5IQVBRAqBYXLVdC6e4+gSUs1v9DAe/uEOtf5aw9AdScePSUY5AJ1fe2DTucXzci3zCIJoQ8bBedKFf0iIzWZZMLnlqLPG49E4tJCnI4qyyP6nyHv/mN6AEWk95t root@hadoop1
  3. [root@hadoop1 ~]# cat /root/.ssh/id_rsa
  4. -----BEGIN RSA PRIVATE KEY-----
  5. MIIEpAIBAAKCAQEAv5ksKu4HQ7rAIluoGgPGQJQljm/Wgr+Ac/9i+SY6VsBT5GVE
  6. DaTrZ3xkcYuF9uFq5OJKuAaI/5kN1N7FyKXjc/mcjVXtzjxN6ilQwdo73uac3njO
  7. 6rVoeDnJN9Dc6c52RS3mPWviHPLw3lzzwBJIIm4LrYmxKlsl7qJkgiGRqT5Yfh73
  8. mOg7XP6EWqExH8ciyUtBasr80Xcpwh4MImr2yhcs3y+SEFQUQKgWFy1XQunuPoEl
  9. LNb/QwHv7hDrX+WsPQHUnHj0lGOQCdX3tg07nF83It8wiCaEPGwXnShX9IiM1mWT
  10. C55aizxuPROLSQpyOKssj+p8h7/5jegBFpPebQIDAQABAoIBAHhJuQIGyIbMIz4u
  11. 3x3eCsSWffGr4zfY9NNenguf5XZ7bu/wZ8ZNKQGihgkHOIbjxNGIBLL+X1phA98G
  12. MZQkGeXA63mMXi1hjOUbJTlfQsFRdWDy5a1TURBR7zNcrKUZWwVZqLgdGCtmlrR0
  13. FRAcKi97eVdtH85gxTLJv2I3oxRmGFft2yKmVrb9+uLCx2PE6ccpOOATClfHr3K3
  14. 5zOXon2eFVaepu4Si2JJypkzrHtH+qiZodEnQqN/UOVMAhJyyjg04bNxWEMUXF7U
  15. 6nvKuA7Yz1pAVqdb1JHhdOv/4e6zkPc6EyLu7api7g6fPlV2GVHocvYaMbx210TB
  16. msAkuP0CgYEA/XBYMSv6yTlKCnaaDLarowWOr5RMBr8tf8Lcg4a6nh9uKLAQeAlN
  17. xee7kH0m6UCkfa0qp4eYhfraghx+XjEZ0igfbwgawLoPFfE371jHDeGRquDH6tu3
  18. +25mc988O/cnJPwj7/QyXbub36moJ8tHgvq/oI5AI3UNZ7JWBQ8ESn8CgYEAwYjY
  19. KJ9gwQf1DDmkr5wU4CXGaZlY/7KlAZ4ZTz7SYsaRbe1UyXlnsc/JNQQDcfdeHCOh
  20. rHKsBiWqk/LnXbm9BVxysH08E8hVFxE2IzBTbQ169qCafIJQD1rpGEND2EEqO2E1
  21. iIFZZ70Wo7usXTvebjMdNf3WhkCa9y12ssSsKRMCgYEAhLobVdUkh9Ga9xPZ5aKd
  22. DMlSSp2tmzLwDyLr/W3HuhvXwzNBzLuCoYyU7Dy+7hVOkArqdcZLmI8hdFabz5SD
  23. Y05j9/AUoq5OTD2B/7VMufZSJV2HFXZwShstSK22i+kJ9RKfd4E6B2DDZ0UgrYaG
  24. MxBC30DgUjFxDceVyRxuMN0CgYBNqNXkZx/yFXlVcIQPG7icwUu+8BPwdwUTgxdw
  25. 3yqZDEkrLUMKnbboeRKqPXQMdVDEReAITPOOe+rY9220BGY/EnvLKlXDMm5ClVt9
  26. /1RavEANWyDiuX/ayYYjgEpnKq8BqN5Mamsv34aIKTTfLLjyy3v7QGKm+KG2cf3h
  27. el4DFQKBgQCJVsTeTDLKNGb3AhT840mt87HI5oVoFNt0GLZ16FBTV7+/qs/2ecdV
  28. oafb7L4Njsx/0zNQHYc7ql86O/AD/JJP8JBPGMfGOJP0IfKh7F9vxLjAshERD6my
  29. xllnVB+M2BzNacO4G0rAjWkpdNh2OhQiRPoQbfJ94/QyyKqdR/eqpw==
  30. -----END RSA PRIVATE KEY-----
  31. [root@hadoop1 ~]#

  

  1. +-----------------+
  2. [root@hadoop2 ~]# cat /root/.ssh/id_rsa
  3. -----BEGIN RSA PRIVATE KEY-----
  4. MIIEpAIBAAKCAQEAyREegcNk148FwkcqpJKlZxiM137ZF40zcu6NL0vtCQw5e9p4
  5. gjkmyxgoc1g8mvzKGdUiIJ2K5AqDc567hEKptP+AN644UsOW+Leq7s096+17tV/a
  6. cOnYwDjJkwHZX1l9o87C59uXD0aydf60Z9lNzXJvsedkThSuJfN5r0qq3cTz32vY
  7. CAajPmEwu4p7l3PNAB5kvOTICoesL+Bxssy2JYB5FWCL+ZKaTwiP6LLnsZ54fhej
  8. j4DZpkuvPj6BUFcLP8jMRwird7jx5lYbd+X5vwVwTbiG0OTE1rZbm9nBdeWAZCRH
  9. M7kvECajwdtTQAeMqv/vN9pCUGV80dJlD8jPPwIDAQABAoIBADVcjr0fjzbKJVwf
  10. KQkORklrMY3Lg3AFsF7TQrMHsnvRO7xMCel9o3cJDUs3YrY7WqOqdek0BnVo+OQJ
  11. f3ilfIalvHCKkzYb5IhTrlC8Na/Ukh1buAx5c2XobE7QkdEFXhvINt/z3k5Wk+xO
  12. 0bAx8r0QnuYXI/647FL0IBpOdbRvJT6a0vgHaJY/XWKLcnZXpN6amp9yepB31An2
  13. AdEDM26+/9YL9YNJ0lE071SvI7LE/ew3pHjCYb41vni4zDysS94tAPDqBXijWpni
  14. BGklooQuU8FIp3qPipeO9LqJtqxhTGNDrlxMgN5dUjR0LkbwbqZfcwZmFD+ZLMx+
  15. V/amGCECgYEA7eO/o1EvSwzEzFP70qiG335HXJY5Op1W4CX7LPeO7DnW+0rSb+45
  16. 0onwVQKDkmw/O0GiwyZ6Bnn6Zpoh/cWTCU0tTMWc1L9ugZlYjb6luW7XECYcc2r7
  17. J0v0/F76lJAiyezGvxNZueHtpgtFO2b2YSc5ALGsPn/7SBevJK3mCXECgYEA2F+6
  18. fTwY9yyQKp7qkADOlvVYI8bso0UIV8tmXkhVUTMdRhSuSnLE7GdsB5iJnHOItLfo
  19. VWb33QpiUVzAduvbLm9ZYWwJJSU5twFXtxhY0ktt6qW940eqZZwoy48kXT8h09nY
  20. pLILZcWsgjpP0ONQwp5QLS7tMxSGsfyp3Froi68CgYEAljT5G0E20HeWh8H7vs8E
  21. onfUzWRZbGZOpae1ynXh+8ylrvRWnbBZOFQ6uSKmOz04S80s3XYdFJvOfRyTm+41
  22. 4mil0tTwKvFY8GIIJTAc6lJPX3YA/uus++odHYFHTakZHlDwSVQJkrJSYUa6h0CD
  23. D2M5vfNx4+DhpGq3/zwChTECgYB9pbFc0g5JUqZxKZFaiC1veg9xzy1Rbl/245WR
  24. gH2SxpTkQlQnxVfXVANmscyPfoPPNdCD72RWBparWqolJLdF0sFbkmoJGQHX5L60
  25. Az5o+AZfMVoAZnhrwu/prTjXsTaKmEF2+jEmK1EO2p/I1IfsTBSQ+GQjunKxXuCg
  26. pmXN3wKBgQCKLyvQT5y7G6YcbGtKQhj2LEgEwiUjurZFWVBSdvvIECfs2JO7uo1t
  27. PmqIQ43RfnOU5YAblnA3hJeAPKz/hlKw5NBwnOMrGFmOgb2xwyq+xlP83/g/C9dK
  28. PUjJn0D6MKOmfAcJJS3M2UbwwH6IF6j2xUBP33F2c5EVeMx/KGiYyA==
  29. -----END RSA PRIVATE KEY-----
  30. [root@hadoop2 ~]# cat /root/.ssh/id_rsa.pub
  31. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJER6Bw2TXjwXCRyqkkqVnGIzXftkXjTNy7o0vS+0JDDl72niCOSbLGChzWDya/MoZ1SIgnYrkCoNznruEQqm0/4A3rjhSw5b4t6ruzT3r7Xu1X9pw6djAOMmTAdlfWX2jzsLn25cPRrJ1/rRn2U3Ncm+x52ROFK4l83mvSqrdxPPfa9gIBqM+YTC7inuXc80AHmS85MgKh6wv4HGyzLYlgHkVYIv5kppPCI/osuexnnh+F6OPgNmmS68+PoFQVws/yMxHCKt3uPHmVht35fm/BXBNuIbQ5MTWtlub2cF15YBkJEczuS8QJqPB21NAB4yq/+832kJQZXzR0mUPyM8/ root@hadoop2
  32. [root@hadoop2 ~]#

  

Now hadoop1 can log in to hadoop2 without a password, but hadoop2 cannot yet log in to hadoop1 without one.

  1. [root@hadoop1 ~]# ssh-copy-id hadoop2;
  2. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  3. The authenticity of host 'hadoop2 (192.168.3.102)' can't be established.
  4. ECDSA key fingerprint is SHA256:UqQuyu+TPvuGuwdiDAkmKSrjPfjqMFBas1OyTT6aRQg.
  5. ECDSA key fingerprint is MD5:ed:b8:30:e2:0f:e5:0c:0f:bb:7c:86:2c:9f:72:e3:d0.
  6. Are you sure you want to continue connecting (yes/no)? yes
  7. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  8. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  9. root@hadoop2's password:
  10.  
  11. Number of key(s) added: 1
  12.  
  13. Now try logging into the machine, with: "ssh 'hadoop2'"
  14. and check to make sure that only the key(s) you wanted were added.
  15.  
  16. [root@hadoop1 ~]# ssh hadoop2
  17. Last login: Thu Jul 12 11:11:20 2018 from 192.168.3.99
  18. [root@hadoop2 ~]# ssh hadoop1
  19. The authenticity of host 'hadoop1 (192.168.3.101)' can't be established.
  20. ECDSA key fingerprint is e0:19:b2:4b:1b:d1:4e:d4:21:73:9b:44:a7:2b:d7:8c.
  21. Are you sure you want to continue connecting (yes/no)? ^C
  22. [root@hadoop2 ~]#

 

File changes on hadoop1

  1. [root@hadoop1 ~]# ll -as /root/.ssh/
  2. total 16
  3. 0 drwx------ 2 root root 57 Jul 12 11:54 .
  4. 4 dr-xr-x---. 16 root root 4096 Jul 12 11:48 ..
  5. 4 -rw------- 1 root root 1679 Jul 12 11:48 id_rsa
  6. 4 -rw-r--r-- 1 root root 394 Jul 12 11:48 id_rsa.pub
  7. 4 -rw-r--r-- 1 root root 183 Jul 12 11:54 known_hosts
  8. [root@hadoop1 ~]# cat /root/.ssh/id_rsa
  9. -----BEGIN RSA PRIVATE KEY-----
  10. MIIEpAIBAAKCAQEAv5ksKu4HQ7rAIluoGgPGQJQljm/Wgr+Ac/9i+SY6VsBT5GVE
  11. DaTrZ3xkcYuF9uFq5OJKuAaI/5kN1N7FyKXjc/mcjVXtzjxN6ilQwdo73uac3njO
  12. 6rVoeDnJN9Dc6c52RS3mPWviHPLw3lzzwBJIIm4LrYmxKlsl7qJkgiGRqT5Yfh73
  13. mOg7XP6EWqExH8ciyUtBasr80Xcpwh4MImr2yhcs3y+SEFQUQKgWFy1XQunuPoEl
  14. LNb/QwHv7hDrX+WsPQHUnHj0lGOQCdX3tg07nF83It8wiCaEPGwXnShX9IiM1mWT
  15. C55aizxuPROLSQpyOKssj+p8h7/5jegBFpPebQIDAQABAoIBAHhJuQIGyIbMIz4u
  16. 3x3eCsSWffGr4zfY9NNenguf5XZ7bu/wZ8ZNKQGihgkHOIbjxNGIBLL+X1phA98G
  17. MZQkGeXA63mMXi1hjOUbJTlfQsFRdWDy5a1TURBR7zNcrKUZWwVZqLgdGCtmlrR0
  18. FRAcKi97eVdtH85gxTLJv2I3oxRmGFft2yKmVrb9+uLCx2PE6ccpOOATClfHr3K3
  19. 5zOXon2eFVaepu4Si2JJypkzrHtH+qiZodEnQqN/UOVMAhJyyjg04bNxWEMUXF7U
  20. 6nvKuA7Yz1pAVqdb1JHhdOv/4e6zkPc6EyLu7api7g6fPlV2GVHocvYaMbx210TB
  21. msAkuP0CgYEA/XBYMSv6yTlKCnaaDLarowWOr5RMBr8tf8Lcg4a6nh9uKLAQeAlN
  22. xee7kH0m6UCkfa0qp4eYhfraghx+XjEZ0igfbwgawLoPFfE371jHDeGRquDH6tu3
  23. +25mc988O/cnJPwj7/QyXbub36moJ8tHgvq/oI5AI3UNZ7JWBQ8ESn8CgYEAwYjY
  24. KJ9gwQf1DDmkr5wU4CXGaZlY/7KlAZ4ZTz7SYsaRbe1UyXlnsc/JNQQDcfdeHCOh
  25. rHKsBiWqk/LnXbm9BVxysH08E8hVFxE2IzBTbQ169qCafIJQD1rpGEND2EEqO2E1
  26. iIFZZ70Wo7usXTvebjMdNf3WhkCa9y12ssSsKRMCgYEAhLobVdUkh9Ga9xPZ5aKd
  27. DMlSSp2tmzLwDyLr/W3HuhvXwzNBzLuCoYyU7Dy+7hVOkArqdcZLmI8hdFabz5SD
  28. Y05j9/AUoq5OTD2B/7VMufZSJV2HFXZwShstSK22i+kJ9RKfd4E6B2DDZ0UgrYaG
  29. MxBC30DgUjFxDceVyRxuMN0CgYBNqNXkZx/yFXlVcIQPG7icwUu+8BPwdwUTgxdw
  30. 3yqZDEkrLUMKnbboeRKqPXQMdVDEReAITPOOe+rY9220BGY/EnvLKlXDMm5ClVt9
  31. /1RavEANWyDiuX/ayYYjgEpnKq8BqN5Mamsv34aIKTTfLLjyy3v7QGKm+KG2cf3h
  32. el4DFQKBgQCJVsTeTDLKNGb3AhT840mt87HI5oVoFNt0GLZ16FBTV7+/qs/2ecdV
  33. oafb7L4Njsx/0zNQHYc7ql86O/AD/JJP8JBPGMfGOJP0IfKh7F9vxLjAshERD6my
  34. xllnVB+M2BzNacO4G0rAjWkpdNh2OhQiRPoQbfJ94/QyyKqdR/eqpw==
  35. -----END RSA PRIVATE KEY-----
  36. [root@hadoop1 ~]# cat /root/.ssh/id_rsa.pub
  37. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/mSwq7gdDusAiW6gaA8ZAlCWOb9aCv4Bz/2L5JjpWwFPkZUQNpOtnfGRxi4X24Wrk4kq4Boj/mQ3U3sXIpeNz+ZyNVe3OPE3qKVDB2jve5pzeeM7qtWh4Ock30NzpznZFLeY9a+Ic8vDeXPPAEkgibgutibEqWyXuomSCIZGpPlh+HveY6Dtc/oRaoTEfxyLJS0FqyvzRdynCHgwiavbKFyzfL5IQVBRAqBYXLVdC6e4+gSUs1v9DAe/uEOtf5aw9AdScePSUY5AJ1fe2DTucXzci3zCIJoQ8bBedKFf0iIzWZZMLnlqLPG49E4tJCnI4qyyP6nyHv/mN6AEWk95t root@hadoop1
  38. [root@hadoop1 ~]# cat /root/.ssh/known_hosts
  39. hadoop2,192.168.3.102 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwk4ldwjl9bHfulRh/Go9dRfR70PK+XYiFAgE8JuCgBzLjfShC3JQpZNq1uDcXTPSwwWGWxfTe5lWLzKnA6jXc=
  40. [root@hadoop1 ~]#

  

File changes on hadoop2

  1. [root@hadoop2 ~]# ssh hadoop1
  2. The authenticity of host 'hadoop1 (192.168.3.101)' can't be established.
  3. ECDSA key fingerprint is e0:19:b2:4b:1b:d1:4e:d4:21:73:9b:44:a7:2b:d7:8c.
  4. Are you sure you want to continue connecting (yes/no)? ^C
  5. [root@hadoop2 ~]# ll -as /root/.ssh/
  6. total 16
  7. 0 drwx------ 2 root root 61 Jul 12 11:52 .
  8. 4 dr-xr-x---. 25 root root 4096 Jul 12 11:47 ..
  9. 4 -rw------- 1 root root 394 Jul 12 11:52 authorized_keys
  10. 4 -rw------- 1 root root 1679 Jul 12 11:47 id_rsa
  11. 4 -rw-r--r-- 1 root root 394 Jul 12 11:47 id_rsa.pub
  12. [root@hadoop2 ~]# cat /root/.ssh/id_rsa
  13. -----BEGIN RSA PRIVATE KEY-----
  14. MIIEpAIBAAKCAQEAyREegcNk148FwkcqpJKlZxiM137ZF40zcu6NL0vtCQw5e9p4
  15. gjkmyxgoc1g8mvzKGdUiIJ2K5AqDc567hEKptP+AN644UsOW+Leq7s096+17tV/a
  16. cOnYwDjJkwHZX1l9o87C59uXD0aydf60Z9lNzXJvsedkThSuJfN5r0qq3cTz32vY
  17. CAajPmEwu4p7l3PNAB5kvOTICoesL+Bxssy2JYB5FWCL+ZKaTwiP6LLnsZ54fhej
  18. j4DZpkuvPj6BUFcLP8jMRwird7jx5lYbd+X5vwVwTbiG0OTE1rZbm9nBdeWAZCRH
  19. M7kvECajwdtTQAeMqv/vN9pCUGV80dJlD8jPPwIDAQABAoIBADVcjr0fjzbKJVwf
  20. KQkORklrMY3Lg3AFsF7TQrMHsnvRO7xMCel9o3cJDUs3YrY7WqOqdek0BnVo+OQJ
  21. f3ilfIalvHCKkzYb5IhTrlC8Na/Ukh1buAx5c2XobE7QkdEFXhvINt/z3k5Wk+xO
  22. 0bAx8r0QnuYXI/647FL0IBpOdbRvJT6a0vgHaJY/XWKLcnZXpN6amp9yepB31An2
  23. AdEDM26+/9YL9YNJ0lE071SvI7LE/ew3pHjCYb41vni4zDysS94tAPDqBXijWpni
  24. BGklooQuU8FIp3qPipeO9LqJtqxhTGNDrlxMgN5dUjR0LkbwbqZfcwZmFD+ZLMx+
  25. V/amGCECgYEA7eO/o1EvSwzEzFP70qiG335HXJY5Op1W4CX7LPeO7DnW+0rSb+45
  26. 0onwVQKDkmw/O0GiwyZ6Bnn6Zpoh/cWTCU0tTMWc1L9ugZlYjb6luW7XECYcc2r7
  27. J0v0/F76lJAiyezGvxNZueHtpgtFO2b2YSc5ALGsPn/7SBevJK3mCXECgYEA2F+6
  28. fTwY9yyQKp7qkADOlvVYI8bso0UIV8tmXkhVUTMdRhSuSnLE7GdsB5iJnHOItLfo
  29. VWb33QpiUVzAduvbLm9ZYWwJJSU5twFXtxhY0ktt6qW940eqZZwoy48kXT8h09nY
  30. pLILZcWsgjpP0ONQwp5QLS7tMxSGsfyp3Froi68CgYEAljT5G0E20HeWh8H7vs8E
  31. onfUzWRZbGZOpae1ynXh+8ylrvRWnbBZOFQ6uSKmOz04S80s3XYdFJvOfRyTm+41
  32. 4mil0tTwKvFY8GIIJTAc6lJPX3YA/uus++odHYFHTakZHlDwSVQJkrJSYUa6h0CD
  33. D2M5vfNx4+DhpGq3/zwChTECgYB9pbFc0g5JUqZxKZFaiC1veg9xzy1Rbl/245WR
  34. gH2SxpTkQlQnxVfXVANmscyPfoPPNdCD72RWBparWqolJLdF0sFbkmoJGQHX5L60
  35. Az5o+AZfMVoAZnhrwu/prTjXsTaKmEF2+jEmK1EO2p/I1IfsTBSQ+GQjunKxXuCg
  36. pmXN3wKBgQCKLyvQT5y7G6YcbGtKQhj2LEgEwiUjurZFWVBSdvvIECfs2JO7uo1t
  37. PmqIQ43RfnOU5YAblnA3hJeAPKz/hlKw5NBwnOMrGFmOgb2xwyq+xlP83/g/C9dK
  38. PUjJn0D6MKOmfAcJJS3M2UbwwH6IF6j2xUBP33F2c5EVeMx/KGiYyA==
  39. -----END RSA PRIVATE KEY-----
  40. [root@hadoop2 ~]# cat /root/.ssh/id_rsa.pub
  41. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJER6Bw2TXjwXCRyqkkqVnGIzXftkXjTNy7o0vS+0JDDl72niCOSbLGChzWDya/MoZ1SIgnYrkCoNznruEQqm0/4A3rjhSw5b4t6ruzT3r7Xu1X9pw6djAOMmTAdlfWX2jzsLn25cPRrJ1/rRn2U3Ncm+x52ROFK4l83mvSqrdxPPfa9gIBqM+YTC7inuXc80AHmS85MgKh6wv4HGyzLYlgHkVYIv5kppPCI/osuexnnh+F6OPgNmmS68+PoFQVws/yMxHCKt3uPHmVht35fm/BXBNuIbQ5MTWtlub2cF15YBkJEczuS8QJqPB21NAB4yq/+832kJQZXzR0mUPyM8/ root@hadoop2
  42. [root@hadoop2 ~]# cat /root/.ssh/authorized_keys
  43. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/mSwq7gdDusAiW6gaA8ZAlCWOb9aCv4Bz/2L5JjpWwFPkZUQNpOtnfGRxi4X24Wrk4kq4Boj/mQ3U3sXIpeNz+ZyNVe3OPE3qKVDB2jve5pzeeM7qtWh4Ock30NzpznZFLeY9a+Ic8vDeXPPAEkgibgutibEqWyXuomSCIZGpPlh+HveY6Dtc/oRaoTEfxyLJS0FqyvzRdynCHgwiavbKFyzfL5IQVBRAqBYXLVdC6e4+gSUs1v9DAe/uEOtf5aw9AdScePSUY5AJ1fe2DTucXzci3zCIJoQ8bBedKFf0iIzWZZMLnlqLPG49E4tJCnI4qyyP6nyHv/mN6AEWk95t root@hadoop1
  44. [root@hadoop2 ~]#

 

hadoop2 gained a new file, authorized_keys, while hadoop1 gained a new file, known_hosts.

hadoop1's /root/.ssh/id_rsa.pub was appended to hadoop2's authorized_keys.

After running ssh-copy-id hadoopN on hadoopM for every ordered pair M ≠ N (A(n,2) = n(n-1) runs),

and then running ssh-copy-id hadoopM on each hadoopM itself,

a total of A(n,2) + n = n² runs have been executed.

In the end, every node holds identical copies of the following files:

  1. .ssh/authorized_keys
  1. .ssh/known_hosts
  1. [root@hadoop2 ~]# cat /root/.ssh/authorized_keys
  2. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/mSwq7gdDusAiW6gaA8ZAlCWOb9aCv4Bz/2L5JjpWwFPkZUQNpOtnfGRxi4X24Wrk4kq4Boj/mQ3U3sXIpeNz+ZyNVe3OPE3qKVDB2jve5pzeeM7qtWh4Ock30NzpznZFLeY9a+Ic8vDeXPPAEkgibgutibEqWyXuomSCIZGpPlh+HveY6Dtc/oRaoTEfxyLJS0FqyvzRdynCHgwiavbKFyzfL5IQVBRAqBYXLVdC6e4+gSUs1v9DAe/uEOtf5aw9AdScePSUY5AJ1fe2DTucXzci3zCIJoQ8bBedKFf0iIzWZZMLnlqLPG49E4tJCnI4qyyP6nyHv/mN6AEWk95t root@hadoop1
  3. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8qqdIX6zERPKf7Ed4XUpRnKCaRxZTl4NMCRtI86NFrwIYroaDlSRrI1V3J/+GIdz49YbkPMt+a+RTgJ3KUvv3ZTzT6WGkUlJneyl3iWCHxzbSaXICDMnmmX+RhoDPlWQXu4wlZfrWDjo0NZII7au5ku5YQyTK1OHZFH0+OPKSnzR7mYMfMYHK5ukve6jWIe1+rtweUyWyJcF/3jF2KcgIm6bRJWD26wuuPVfUBLMJn12AbeEgK4EWWLt3886+7w0I4hSYDa09+CZ5vRFcckYMZKV2l6DTz7Mx3oSaeCfzOa703G/wS+YI9lM7G5D12vjRu6Yd4yG59lQRgXouxpkl root@hadoop3
  4. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJER6Bw2TXjwXCRyqkkqVnGIzXftkXjTNy7o0vS+0JDDl72niCOSbLGChzWDya/MoZ1SIgnYrkCoNznruEQqm0/4A3rjhSw5b4t6ruzT3r7Xu1X9pw6djAOMmTAdlfWX2jzsLn25cPRrJ1/rRn2U3Ncm+x52ROFK4l83mvSqrdxPPfa9gIBqM+YTC7inuXc80AHmS85MgKh6wv4HGyzLYlgHkVYIv5kppPCI/osuexnnh+F6OPgNmmS68+PoFQVws/yMxHCKt3uPHmVht35fm/BXBNuIbQ5MTWtlub2cF15YBkJEczuS8QJqPB21NAB4yq/+832kJQZXzR0mUPyM8/ root@hadoop2
  5. [root@hadoop2 ~]# cat /root/.ssh/known_hosts
  6. hadoop1,192.168.3.101 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDVVW5cw8LXqZq6aJJ9tw4idIUa1qq79AQpRrRB3zsKKtN3I9jJPYfL5IU0KRh1OO4oSvU/RV/B+KEkLayC86dI=
  7. hadoop3,192.168.3.103 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKAfre/2chabtppEpNdzgtyA4M62VXCR6sNfU6z4+MWe0dx+m2tSo67F7JrPNJ/NQfXO3TbQxXRawOEu9AbjhHg=
  8. hadoop2,192.168.3.102 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwk4ldwjl9bHfulRh/Go9dRfR70PK+XYiFAgE8JuCgBzLjfShC3JQpZNq1uDcXTPSwwWGWxfTe5lWLzKnA6jXc=
  9. [root@hadoop2 ~]#

  The contents are the same on every node.
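Those n² ssh-copy-id runs reduce to one small loop executed once on each node after its ssh-keygen (a sketch, assuming the hadoop1..hadoop3 hostnames above; each run still asks for the target's password until its key is installed):

# Run once per node: push this node's public key to every node, itself included
for dst in hadoop1 hadoop2 hadoop3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$dst"
done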

For the configuration details, follow the official documentation, and make sure each node's JAVA_HOME environment variable is correct.

  1. Apache > Hadoop > Apache Hadoop Project Dist POM > Apache Hadoop 2.9.1 > Hadoop Cluster Setup
  2. http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
  3. Apache Hadoop 2.9.1 Hadoop Cluster Setup http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
  4. Specify this node's JAVA_HOME in etc/hadoop/hadoop-env.sh
  5. At the very least, you must specify the JAVA_HOME so that it is correctly defined on each remote node.
  6.  
  7. java -verbose reveals the JDK installation directory:
  8. [Loaded java.lang.Shutdown$Lock from /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/rt.jar]
  9.  
  10. Write it into the environment variable:
  11. export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64
  12.  
  13. etc/hadoop/core-site.xml
  14. Parameter Value Notes
  15. fs.defaultFS NameNode URI hdfs://host:port/
  16. io.file.buffer.size 131072 Size of read/write buffer used in SequenceFiles.
  17.  
  18. <configuration>
  19. <property>
  20. <name>fs.defaultFS</name>
  21. <value>hdfs://hadoop1:9001/</value>
  22. </property>
  23. <property>
  24. <name>io.file.buffer.size</name>
  25. <value>131072</value>
  26. </property>
  27. </configuration>
  28.  
  29. etc/hadoop/hdfs-site.xml
  30. Configurations for NameNode:
  31. Parameter Value Notes
  32. dfs.namenode.name.dir Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
  33. dfs.hosts / dfs.hosts.exclude List of permitted/excluded DataNodes. If necessary, use these files to control the list of allowable datanodes.
  34. dfs.blocksize 268435456 HDFS blocksize of 256MB for large file-systems.
  35. dfs.namenode.handler.count 100 More NameNode server threads to handle RPCs from large number of DataNodes.
  36. Configurations for DataNode:
  37. Parameter Value Notes
  38. dfs.datanode.data.dir Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.
  39.  
  40. Create the directories:
  41. mkdir -p /home/hadoop-2.9.1/mydata/namenode;mkdir -p /home/hadoop-2.9.1/mydata/datanode;
  42.  
  43. <configuration>
  44. <property>
  45. <name>dfs.replication</name>
  46. <value>2</value>
  47. </property>
  48. <property>
  49. <name>dfs.datanode.data.dir</name>
  50. <value>file:///home/hadoop-2.9.1/mydata/datanode</value>
  51. </property>
  52. <property>
  53. <name>dfs.namenode.name.dir</name>
  54. <value>file:///home/hadoop-2.9.1/mydata/namenode</value>
  55. </property>
  56. <property>
  57. <name>dfs.blocksize</name>
  58. <value>268435456</value>
  59. </property>
  60. <property>
  61. <name>dfs.namenode.handler.count</name>
  62. <value>100</value>
  63. </property>
  64. </configuration>
  65.  
  66. etc/hadoop/yarn-site.xml
  67. <configuration>
  68. <property>
  69. <name>yarn.resourcemanager.hostname</name>
  70. <value>hadoop1</value>
  71. </property>
  72. <property>
  73. <name>yarn.nodemanager.hostname</name>
  74. <value>hadoop1</value>
  75. </property>
  76. </configuration>
  77.  
  78. etc/hadoop/mapred-site.xml
  79.  
  80. <configuration>
  81. <property>
  82. <name>mapreduce.framework.name</name>
  83. <value>yarn</value>
  84. </property>
  85. </configuration>
  86.  
  87. Slaves File
  88. List all slave hostnames or IP addresses in your etc/hadoop/slaves file, one per line.
  89.  
  90. vim etc/hadoop/slaves
  91. hadoop1
  92. hadoop2
  93. hadoop3
  94.  
  95. Copy the directory tree to the other nodes:
  96. scp -r /home/hadoop-2.9.1 hadoop2:/home/;
  97. scp -r /home/hadoop-2.9.1 hadoop1:/home/;
  98.  
  99. Efficiency
  100. HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;
  101. HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;

  102. ssh hadoop2 "HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;";ssh hadoop1 "HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;";
  103. This did not take effect: variables exported inside a one-off ssh command are lost when that remote session exits.
  104.  
  105. $HADOOP_PREFIX/bin/hdfs namenode -format my_cluster_name
  106. Check with printenv
  107. HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop;export HADOOP_CONF_DIR;
  108. JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre; export JAVA_HOME;
  109.  
  110. $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
  111.  
  112. HADOOP_YARN_HOME=$HADOOP_PREFIX/;export HADOOP_YARN_HOME;

  

Cluster ID mismatch (Incompatible clusterIDs)

  1. 2018-07-18 09:11:32,098 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/hadoop-2.9.1/mydata/datanode/
  2. java.io.IOException: Incompatible clusterIDs in /hadoop-2.9.1/mydata/datanode: namenode clusterID = CID-a6680204-4513-4ebc-b1eb-88be2c9cf9bc; datanode clusterID = CID-180160f6-f2cf-44c4-83eb-66e8164d99b5
  3. at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
  4. at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
  5. at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
  6. at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
  7. at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
  8. at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
  9. at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
  10. at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
  11. at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
  12. at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
  13. at java.lang.Thread.run(Thread.java:748)
  14. 2018-07-18 09:11:32,101 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid f4b5087b-1763-4d99-88f8-cc934716fc1a) service to hadoop1/192.168.3.101:9001. Exiting.
  15. java.io.IOException: All specified directories have failed to load.
  16. at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
  17. at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
  18. at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
  19. at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
  20. at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
  21. at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
  22. at java.lang.Thread.run(Thread.java:748)
  23. 2018-07-18 09:11:32,101 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid f4b5087b-1763-4d99-88f8-cc934716fc1a) service to hadoop1/192.168.3.101:9001
  24. 2018-07-18 09:11:32,202 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid f4b5087b-1763-4d99-88f8-cc934716fc1a)
  25.  
  26. ssh hadoop1 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";
  27. ssh hadoop2 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";
  28. ssh hadoop3 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";
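The two IDs that the log compares are recorded in the VERSION files of the storage directories, so they can be inspected before (or instead of) wiping everything (a sketch, assuming the mydata layout above):

# clusterID written by `hdfs namenode -format` (on the NameNode host)
grep clusterID /home/hadoop-2.9.1/mydata/namenode/current/VERSION
# clusterID a DataNode registered with on its first start (on each worker)
grep clusterID /home/hadoop-2.9.1/mydata/datanode/current/VERSION
# A mismatch appears when the NameNode is re-formatted while old DataNode dirs are kept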

  

Check the files

.bashrc .bash_profile

[root@d1 ~]# cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

JAVA_HOME=/usr/local/jdk;export JAVA_HOME;

HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop;export HADOOP_CONF_DIR;HADOOP_HOME=/home/hadoop-2.9.1;export HADOOP_HOME;HADOOP_PREFIX=/home/hadoop-2.9.1;export HADOOP_PREFIX;HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop;export HADOOP_CONF_DIR;HADOOP_HOME=/home/hadoop-2.9.1;export HADOOP_HOME;HADOOP_YARN_HOME=$HADOOP_PREFIX;export HADOOP_YARN_HOME;
[root@d1 ~]#

source  ~/.bash_profile;

scp ~/.bash_profile root@d2:~/;

ssh  d2  "source ~/.bash_profile";
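Note that `ssh d2 "source ~/.bash_profile"` only affects that single remote session; the point of copying the file is that every future login shell on d2 reads it by itself. A small loop for distributing and checking it (hostnames d2 and d3 as above):

# Push the profile to the other nodes; future login shells pick it up automatically
for h in d2 d3; do
  scp ~/.bash_profile "root@$h:~/"
done
# Verify in a login shell (a bare `ssh d2 printenv` would skip ~/.bash_profile)
ssh d2 'bash -l -c "printenv HADOOP_PREFIX HADOOP_CONF_DIR JAVA_HOME"'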

  1. [root@d1 ~]# cd $HADOOP_HOME
  2. [root@d1 hadoop-2.9.1]# cat myCleanStart.sh
  3. #【stop】
  4. $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode; $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode;$HADOOP_PREFIX/sbin/stop-dfs.sh;$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager;$HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR stop nodemanager;$HADOOP_PREFIX/sbin/stop-yarn.sh;ssh d3 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d2 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d1 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d3 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";ssh d2 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";ssh d1 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";
  5.  
  6. #【del】
  7. ssh d3 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d2 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d1 "rm -rf /home/hadoop-2.9.1/mydata/namenode/*;rm -rf /home/hadoop-2.9.1/mydata/datanode/*;";ssh d3 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";ssh d2 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";ssh d1 "rm -rf /home/hadoop-2.9.1/logs/*;rm -rf /home/hadoop-2.9.1/logs/*;";
  8.  
  9. #【start】
  10. $HADOOP_PREFIX/bin/hdfs namenode -format mycluster_name;$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode;$HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode;$HADOOP_PREFIX/sbin/start-dfs.sh;$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager;$HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager;
  11.  
  12. [root@d1 hadoop-2.9.1]#

  

  1. 1.1.1.182 cmd
  2. 1.1.1.88 hadoop-name
  3. 1.1.1.89 hadoop-data-a
  4. 1.1.1.90 hadoop-data-b
  5.  
  6. 1.1.1.182
  7. +*uu
  8.  
  9. 1.1.1.88
  10. 1.1.1.89
  11. 1.1.1.90
  12. dd
  13.  
  14. scp /etc/hosts root@hadoop-name:/etc/;scp /etc/hosts root@hadoop-data-a:/etc/;scp /etc/hosts root@hadoop-data-b:/etc/;
  15.  
  16. # cd ~;ssh-keygen -t rsa;
  17.  
  18. ssh cmd "rm -rf ~/.ssh;ls ~/"; ssh hadoop-name "rm -rf ~/.ssh;ls ~/";ssh hadoop-data-a "rm -rf ~/.ssh;ls ~/";ssh hadoop-data-b "rm -rf ~/.ssh;ls ~/";
  19.  
  20. # TODO --> turn the steps below into a script (see the sketch after this block)
  21. ssh hadoop-name;
  22. hostname hadoop-name;cd ~;ssh-keygen -t rsa; scp ~/.ssh/id_rsa.pub root@cmd:~/.ssh/authorized_keys_`hostname`;
  23. ssh hadoop-data-a;
  24. hostname hadoop-data-a;cd ~;ssh-keygen -t rsa; scp ~/.ssh/id_rsa.pub root@cmd:~/.ssh/authorized_keys_`hostname`;
  25.  
  26. ssh hadoop-data-b;
  27. hostname hadoop-data-b;cd ~;ssh-keygen -t rsa; scp ~/.ssh/id_rsa.pub root@cmd:~/.ssh/authorized_keys_`hostname`;
  28.  
  29. ssh cmd;
  30. cd ~/.ssh/;cat authorized_keys_*>>authorized_keys; cat id_rsa.pub>>authorized_keys; chmod 400 authorized_keys;
  31. scp authorized_keys root@hadoop-name:~/.ssh/;scp authorized_keys root@hadoop-data-a:~/.ssh/;scp authorized_keys root@hadoop-data-b:~/.ssh/;
  32. scp known_hosts root@hadoop-name:~/.ssh/;scp known_hosts root@hadoop-data-a:~/.ssh/;scp known_hosts root@hadoop-data-b:~/.ssh/;
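One way to fold that TODO into a single script driven from cmd (a sketch, assuming the hostnames from the /etc/hosts above; it prompts for each node's root password until the keys are in place):

#!/bin/bash
# Generate a key pair on every node, collect the public keys on cmd,
# merge them into one authorized_keys and push it (plus known_hosts) back out.
nodes=(hadoop-name hadoop-data-a hadoop-data-b)

for h in "${nodes[@]}"; do
  ssh "root@$h" 'rm -rf ~/.ssh && mkdir -m 700 ~/.ssh && ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa'
  scp "root@$h:~/.ssh/id_rsa.pub" ~/.ssh/authorized_keys_"$h"
done

# Merge all public keys, including cmd's own
cat ~/.ssh/authorized_keys_* ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

for h in "${nodes[@]}"; do
  scp ~/.ssh/authorized_keys ~/.ssh/known_hosts "root@$h:~/.ssh/"
done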

  

  1. tail hadoop-hdp-datanode-hadoop-name.log
  2. at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
  3. at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
  4. at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2756)
  5. at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2900)
  6. at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
  7. 2019-11-07 13:14:44,477 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
  8. 2019-11-07 13:14:44,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
  9. /************************************************************
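"Too many failed volumes ... volume failures tolerated: 0" usually means the single directory configured in dfs.datanode.data.dir is missing, not writable by the user running the DataNode, or on a broken filesystem; a quick check (the path is an assumption matching the earlier layout):

# The DataNode aborts when its only configured data dir fails the disk check
DATA_DIR=/home/hadoop-2.9.1/mydata/datanode   # value of dfs.datanode.data.dir (assumed)
ls -ld "$DATA_DIR"    # must exist and be writable by the user running the DataNode
df -h "$DATA_DIR"     # and sit on a healthy filesystem with free space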

  

http://www.firefoxbug.com/index.php/archives/2424/
