Hadoop Fully Distributed Cluster Setup
This article walks through a basic Hadoop cluster setup. First, the environment: my laptop runs Windows 10 Pro with VMware Workstation Pro as the hypervisor, and the virtual machines run CentOS 7. The software needed is hadoop-2.6.0 and jdk-1.8.0; other versions should also work, so download whichever you prefer.
Overall plan
1. The cluster consists of four nodes, each a minimal install of CentOS 7. Every node has a user named zgw, and the Hadoop and JDK archives have been placed in zgw's home directory in advance.
2. The four nodes are named namenode, datanode1, datanode2, and SecondNamenode; namenode is the master node.
Preparing the CentOS nodes
1. Prepare four installed CentOS 7 virtual machines (the installation itself is not covered here).
2. Configure a static IP.
sudo cp /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eno16777736.bak.zgw
sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
Contents as follows (the IPADDR differs per node; this is namenode's):
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=32b53370-f40b-4b40-b29a-daef1a58d6dc
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.190.11
NETMASK=255.255.255.0
DNS1=192.168.190.2
DNS2=223.5.5.5
GATEWAY=192.168.190.2
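Only a few lines of this file actually change between the four nodes (IPADDR, plus the UUID/NAME fields each VM supplies itself). As a rough sketch, with a hypothetical helper name and the assumption that every node uses the same eno16777736 interface, the variable portion can be generated instead of retyped:

```shell
# Emit the static-IP portion of an ifcfg file for a given node IP.
# Only IPADDR changes between nodes; gateway and DNS stay the same.
ifcfg_fragment() {
  ip="$1"
  cat <<EOF
BOOTPROTO=static
ONBOOT=yes
IPADDR=$ip
NETMASK=255.255.255.0
GATEWAY=192.168.190.2
DNS1=192.168.190.2
EOF
}

# Example: fragment for datanode1 (append to the node's ifcfg file)
# ifcfg_fragment 192.168.190.12
```

After editing, restart networking with `sudo systemctl restart network` and confirm the address with `ip addr`.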
3. Disable the CentOS 7 firewall.
sudo systemctl stop firewalld.service    # stop firewalld now
sudo systemctl disable firewalld.service # keep firewalld from starting at boot
4. Edit /etc/hostname on each node and set its host name (all four must differ).
sudo vi /etc/hostname
5. Record the IP-to-hostname mapping of every node in /etc/hosts on every node.
5.1 Edit hosts.
sudo vi /etc/hosts
Contents as follows; the last four lines are the additions.
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.190.11 namenode
192.168.190.12 datanode1
192.168.190.13 datanode2
192.168.190.14 SecondNamenode
5.2 Test.
[zgw@namenode ~]$ ping datanode1
PING datanode1 (192.168.190.12) 56(84) bytes of data.
64 bytes from datanode1 (192.168.190.12): icmp_seq=1 ttl=64 time=0.711 ms
64 bytes from datanode1 (192.168.190.12): icmp_seq=2 ttl=64 time=0.377 ms
64 bytes from datanode1 (192.168.190.12): icmp_seq=3 ttl=64 time=0.424 ms
^C
--- datanode1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2016ms
rtt min/avg/max/mdev = 0.377/0.504/0.711/0.147 ms
[zgw@namenode ~]$ ping datanode2
PING datanode2 (192.168.190.13) 56(84) bytes of data.
64 bytes from datanode2 (192.168.190.13): icmp_seq=1 ttl=64 time=2.31 ms
64 bytes from datanode2 (192.168.190.13): icmp_seq=2 ttl=64 time=3.22 ms
64 bytes from datanode2 (192.168.190.13): icmp_seq=3 ttl=64 time=2.62 ms
^C
--- datanode2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 2.316/2.722/3.221/0.375 ms
[zgw@namenode ~]$ ping SecondNamenode
PING SecondNamenode (192.168.190.14) 56(84) bytes of data.
64 bytes from SecondNamenode (192.168.190.14): icmp_seq=1 ttl=64 time=1.23 ms
64 bytes from SecondNamenode (192.168.190.14): icmp_seq=2 ttl=64 time=0.404 ms
^C
--- SecondNamenode ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.404/0.817/1.230/0.413 ms
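The three pings above can be wrapped in a loop that checks every node at once. A minimal sketch (check_nodes is my own helper name, not part of any tool):

```shell
# Ping every host given as an argument once and report which are reachable.
# Returns non-zero if any host failed to answer.
check_nodes() {
  failed=0
  for h in "$@"; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
      echo "$h ok"
    else
      echo "$h UNREACHABLE"
      failed=1
    fi
  done
  return $failed
}

# On the cluster:
# check_nodes namenode datanode1 datanode2 SecondNamenode
```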
6. Set up passwordless SSH login.
6.1 Generate a key pair on namenode with ssh-keygen.
[zgw@namenode ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zgw/.ssh/id_rsa):
Created directory '/home/zgw/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zgw/.ssh/id_rsa.
Your public key has been saved in /home/zgw/.ssh/id_rsa.pub.
The key fingerprint is:
b1:a5:c5:c6:81:9e:8a:68:0c:ba:b6:76:24:3c:5c:33 zgw@namenode
The key's randomart image is:
+--[ RSA 2048]----+
|  ..             |
| .o .            |
|  ...*           |
|. E oB           |
|+o..o. .S        |
|.=+.. .          |
|  o+             |
|.o .             |
|o.o              |
+-----------------+
6.2 Copy ~/.ssh/id_rsa.pub from namenode to the other machines. Note: be sure to copy it to namenode itself as well.
[zgw@namenode ~]$ ssh-copy-id namenode
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@namenode's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'namenode'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id datanode1
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@datanode1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'datanode1'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id datanode2
The authenticity of host 'datanode2 (192.168.190.13)' can't be established.
ECDSA key fingerprint is 63:6b:24:0d:60:93:5c:a0:98:2f:b9:79:85:ca:90:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@datanode2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'datanode2'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id SecondNamenode
The authenticity of host 'secondnamenode (192.168.190.14)' can't be established.
ECDSA key fingerprint is 63:6b:24:0d:60:93:5c:a0:98:2f:b9:79:85:ca:90:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@secondnamenode's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'SecondNamenode'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$
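The four ssh-copy-id invocations above can be collapsed into a loop. A sketch (copy_keys is a hypothetical helper of mine; passing echo instead of ssh-copy-id dry-runs it):

```shell
# Run a command once per cluster node; used here to distribute the SSH key.
# Stops at the first failure so a bad node is noticed immediately.
copy_keys() {
  cmd="$1"; shift
  for h in "$@"; do
    "$cmd" "$h" || return 1
  done
}

# On namenode, as zgw:
# copy_keys ssh-copy-id namenode datanode1 datanode2 SecondNamenode
```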
6.3 Test once the copies are done. (Note that the first SecondNamenode attempt below fails because of a typo in the host name.)
[zgw@namenode ~]$ ssh datanode1
Last login: Tue Dec 27 06:26:37 2016 from 192.168.190.1
[zgw@datanode1 ~]$ exit
logout
Connection to datanode1 closed.
[zgw@namenode ~]$ ssh datanode2
Last login: Tue Dec 27 05:56:22 2016 from 192.168.190.1
[zgw@datanode2 ~]$ exit
logout
Connection to datanode2 closed.
[zgw@namenode ~]$ ssh SecnodNamenode
ssh: Could not resolve hostname secnodnamenode: Name or service not known
[zgw@namenode ~]$ ssh SecondNamenode
Last login: Tue Dec 27 05:56:27 2016 from 192.168.190.1
[zgw@SecondNamenode ~]$ exit
logout
Connection to secondnamenode closed.
[zgw@namenode ~]$
Preparing for the Hadoop installation
1. Installing and configuring the JDK.
1.1 Extract the JDK to /opt. If the archive is not in the current directory, include its path.
tar -zxvf jdk-8u91-linux-x64.tar.gz -C /opt
1.2 Create a symlink to simplify future upgrades.
ln -s /opt/jdk1.8.0_91 /opt/jdk
1.3 Set the environment variables (as root). The source command must be run afterwards.
echo "export JAVA_HOME=/opt/jdk" >> /etc/profile   # set JAVA_HOME
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile  # prepend to PATH
source /etc/profile
1.4 Test the JDK. Output like the following means the installation succeeded.
[zgw@namenode ~]$ java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
[zgw@namenode ~]$
2. Creating the hadoop users.
2.1 Create the hadoop group.
groupadd -g 20000 hadoop   # group id 20000
2.2 Create the users hdfs, yarn, and mr.
useradd -m -d /home/hdfs -u 20001 -s /bin/bash -g hadoop hdfs
useradd -m -d /home/yarn -u 20002 -s /bin/bash -g hadoop yarn
useradd -m -d /home/mr -u 20003 -s /bin/bash -g hadoop mr
2.3 Set a password for each user.
echo hdfs:zgw | chpasswd
echo yarn:zgw | chpasswd
echo mr:zgw | chpasswd
2.4 Grant the users sudo rights. On CentOS 7 there is no "sudo" group; the sudoers group is wheel, and -aG appends it without clobbering other supplementary groups.
usermod -aG wheel hdfs
usermod -aG wheel yarn
usermod -aG wheel mr
2.5 Set up passwordless SSH login for each of these users, exactly as described earlier. Do not skip this step!
3. Creating directories.
3.1 Create the directories Hadoop needs.
mkdir -p /data/hadoop/hdfs/nn
mkdir -p /data/hadoop/hdfs/snn
mkdir -p /data/hadoop/hdfs/dn
mkdir -p /data/hadoop/yarn/nm
3.2 Set directory permissions. /opt is where Hadoop will be installed, so set it here as well.
chown -R 20000:hadoop /data
chown -R hdfs /data/hadoop/hdfs
chown -R yarn /data/hadoop/yarn
chmod -R 777 /opt
chmod -R 777 /data/hadoop/hdfs
chmod -R 777 /data/hadoop/yarn
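The same directory tree is needed on every node. A sketch of a helper (the function name is mine) that builds it under a configurable root, which also makes it easy to try out in a scratch directory first:

```shell
# Create the Hadoop data directories under a given root (default /data).
make_hadoop_dirs() {
  root="${1:-/data}"
  for d in hdfs/nn hdfs/snn hdfs/dn yarn/nm; do
    mkdir -p "$root/hadoop/$d"
  done
}

# On the real nodes, as root, then fix ownership as above:
# make_hadoop_dirs /data
# chown -R hdfs /data/hadoop/hdfs
# chown -R yarn /data/hadoop/yarn
```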
Installing Hadoop
1. Extract Hadoop. If the archive is not in the current directory, include its path.
tar -zxvf hadoop-2.6.0.tar.gz -C /opt
2. Create a symlink.
ln -s /opt/hadoop-2.6.0 /opt/hadoop
3. Set the environment variables.
echo "export HADOOP_HOME=/opt/hadoop" >> /etc/profile
echo 'export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> /etc/profile
source /etc/profile
4. Test the hadoop command.
[zgw@namenode ~]$ hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
[zgw@namenode ~]$
5. Hadoop configuration.
5.1 core-site.xml.
sudo vi /opt/hadoop/etc/hadoop/core-site.xml
Contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.190.11:9000</value>
  </property>
</configuration>
The fs.defaultFS value points at the master node's IP; mine is the namenode node.
5.2 hdfs-site.xml.
sudo vi /opt/hadoop/etc/hadoop/hdfs-site.xml
Contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>32m</value>
    <description>
      The default block size for new files, in bytes.
      You can use the following suffix (case insensitive):
      k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.),
      Or provide complete size in bytes (such as 134217728 for 128 MB).
    </description>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster-zgw</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoop/hdfs/dn</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.190.14:50090</value>
  </property>
</configuration>
The dfs.namenode.secondary.http-address IP is the SecondNamenode node.
5.3 yarn-site.xml.
sudo vi /opt/hadoop/etc/hadoop/yarn-site.xml
Contents:
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.190.11</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/nm</value>
  </property>
</configuration>
The yarn.resourcemanager.hostname IP is the ResourceManager node of the YARN cluster. It can be the same machine as the HDFS namenode or a different one; the two roles are independent.
5.4 mapred-site.xml. The distribution ships only mapred-site.xml.template, so if mapred-site.xml does not exist yet, copy the template first (cp mapred-site.xml.template mapred-site.xml).
sudo vi /opt/hadoop/etc/hadoop/mapred-site.xml
Contents:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
5.5 slaves. List the datanodes in the slaves file; I use SecondNamenode as a datanode as well.
sudo vi /opt/hadoop/etc/hadoop/slaves
Contents:
datanode1
datanode2
SecondNamenode
5.6 Set the JDK path.
sudo vi /opt/hadoop/etc/hadoop/hadoop-env.sh
Find the export JAVA_HOME line (line 25 in my copy) and change it to:
export JAVA_HOME=/opt/jdk
Note: if the line is commented out, be sure to remove the leading #.
5.7 Create the logs directory. Check under /opt/hadoop/; if a logs directory already exists, skip this step, otherwise create it.
sudo mkdir /opt/hadoop/logs
Then fix its ownership and permissions:
sudo chown -R mr:hadoop /opt/hadoop/logs
sudo chmod 777 /opt/hadoop/logs
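All of the configuration files edited in section 5 must be identical on every node, so rather than re-editing them four times, the finished configuration directory can be pushed out from namenode. A sketch (sync_conf is a hypothetical helper; pass echo as the copier for a dry run):

```shell
# Copy the Hadoop configuration directory to every listed node.
# copier is left unquoted on purpose so a multi-word command like
# "scp -r" splits into command plus flag.
sync_conf() {
  copier="$1"; shift
  for h in "$@"; do
    $copier /opt/hadoop/etc/hadoop "$h":/opt/hadoop/etc/ || return 1
  done
}

# On namenode:
# sync_conf "scp -r" datanode1 datanode2 SecondNamenode
```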
Starting the Hadoop cluster
1. Format the HDFS file system. Strictly, formatting only matters on the NameNode, but formatting on the master alone has occasionally not worked for me, so I usually run it on every node.
hdfs namenode -format
2. Starting the HDFS cluster.
2.1 Switch to the hdfs user.
su - hdfs
2.2 On namenode, start the HDFS cluster.
start-dfs.sh
2.3 Check with jps on namenode.
[hdfs@namenode ~]$ start-dfs.sh
Starting namenodes on [namenode]
namenode: starting namenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-namenode-namenode.out
SecondNamenode: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-SecondNamenode.out
datanode2: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-datanode2.out
datanode1: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-datanode1.out
[hdfs@namenode ~]$ jps
10902 Jps
10712 NameNode
2.4 Check with jps on the other three nodes.
[hdfs@datanode1 ~]$ jps
4547 DataNode
5190 Jps
[hdfs@datanode2 ~]$ jps
4416 DataNode
5070 Jps
[hdfs@SecondNamenode ~]$ jps
5110 Jps
4394 DataNode
2.5 Browse the web UI at http://192.168.190.11:50070.
2.6 Stop the cluster from namenode.
stop-dfs.sh
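Besides opening the page in a browser, the web UI can be probed from the command line. A sketch (ui_up is my own helper; it assumes curl is installed on the node you run it from):

```shell
# Return success only if the given URL answers with HTTP 200.
ui_up() {
  url="$1"
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" 2>/dev/null) || code=000
  [ "$code" = "200" ]
}

# On the cluster:
# ui_up http://192.168.190.11:50070 && echo "NameNode UI reachable"
```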
3. Starting the YARN cluster.
3.1 Switch to the yarn user.
su - yarn
3.2 On namenode, start the YARN cluster.
start-yarn.sh
3.3 Check with jps on namenode. (The "running as process ... Stop it first" messages below simply mean the daemons were already running from an earlier start.)
[yarn@namenode ~]$ start-yarn.sh
starting yarn daemons
resourcemanager running as process 9741. Stop it first.
datanode2: nodemanager running as process 4713. Stop it first.
SecondNamenode: nodemanager running as process 4695. Stop it first.
datanode1: nodemanager running as process 4828. Stop it first.
[yarn@namenode ~]$ jps
11004 Jps
9741 ResourceManager
3.4 Check with jps on the other three nodes.
[yarn@datanode1 ~]$ jps
5398 Jps
4828 NodeManager
[yarn@datanode2 ~]$ jps
5281 Jps
4713 NodeManager
[yarn@SecondNamenode ~]$ jps
4695 NodeManager
5308 Jps
3.5 Browse the web UI at http://192.168.190.11:8088.
3.6 Stop the YARN cluster from namenode.
stop-yarn.sh
4. Starting the job history server.
4.1 Switch to the mr user.
su - mr
4.2 Start the job history server.
mr-jobhistory-daemon.sh start historyserver
4.3 Check with jps.
[mr@namenode ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/hadoop-2.6.0/logs/mapred-mr-historyserver-namenode.out
[mr@namenode ~]$ jps
11157 Jps
11126 JobHistoryServer
4.4 Browse the web UI at http://192.168.190.11:19888.
4.5 Stop the job history server from namenode.
mr-jobhistory-daemon.sh stop historyserver
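With HDFS, YARN, and the history server all up, a simple end-to-end check is to run one of the example jobs bundled with the distribution. A sketch (run_pi_example is my own wrapper; the jar path assumes the default hadoop-2.6.0 layout under /opt/hadoop, and passing echo dry-runs the command):

```shell
# End-to-end check: run the bundled pi example through YARN.
# runner defaults to the real yarn launcher; pass echo for a dry run.
run_pi_example() {
  runner="${1:-yarn}"
  jar=/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
  $runner jar "$jar" pi 2 5   # 2 maps, 5 samples each
}

# On the cluster, as one of the hadoop users:
# run_pi_example
```

If the job finishes, it should afterwards be visible in the job history UI on port 19888.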