Hadoop can run in three modes: standalone (local) mode, pseudo-distributed mode, and fully distributed mode.

First, whichever mode you choose, the JDK must be installed. This was covered in the earlier post "Installing JDK 1.8 on Ubuntu 14.04 LTS", so I won't repeat it here.

Next is SSH. Passwordless SSH login is needed so that the master can reach the data-node servers without a password; in a cluster you cannot type a password on every login. This was covered in the earlier post "Configuring passwordless SSH login on Ubuntu 14.04 LTS", so I won't repeat it either.
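As a quick recap, a minimal passwordless-SSH setup looks roughly like this (a sketch, assuming openssh-server is already installed; see the earlier post for the full walkthrough):

```shell
# Generate an RSA key pair with an empty passphrase
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Authorize the public key for logins to this machine -- in pseudo-distributed
# mode, localhost doubles as the data node
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should log in and exit without prompting for a password
ssh localhost exit
```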

Installing in pseudo-distributed mode:

First, download Hadoop 1.2.1 to the local machine and extract it under the home directory.

jerry@ubuntu:~/Downloads$ tar zxf hadoop-1.2.1.tar.gz -C ~/hadoop_1.2.1
jerry@ubuntu:~/Downloads$ cd ~/hadoop_1.2.1/
jerry@ubuntu:~/hadoop_1.2.1$ ls
hadoop-1.2.1
jerry@ubuntu:~/hadoop_1.2.1$ cd hadoop-1.2.1/
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1$ ls
bin hadoop-ant-1.2.1.jar ivy sbin
build.xml hadoop-client-1.2.1.jar ivy.xml share
c++ hadoop-core-1.2.1.jar lib src
CHANGES.txt hadoop-examples-1.2.1.jar libexec webapps
conf hadoop-minicluster-1.2.1.jar LICENSE.txt
contrib hadoop-test-1.2.1.jar NOTICE.txt
docs hadoop-tools-1.2.1.jar README.txt

Then edit Hadoop's configuration files, which are all in XML format.

The first is core-site.xml. It configures the address and port of the Hadoop distributed file system, as well as the Hadoop temporary directory (the default is /tmp/hadoop-${user.name}).

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/hadooptmp</value>
</property>
</configuration>
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Edit the Hadoop environment script, hadoop-env.sh, to point Hadoop at the home directory of the installed JDK:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1$ cd conf/
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ ls
capacity-scheduler.xml hadoop-policy.xml slaves
configuration.xsl hdfs-site.xml ssl-client.xml.example
core-site.xml log4j.properties ssl-server.xml.example
fair-scheduler.xml mapred-queue-acls.xml taskcontroller.cfg
hadoop-env.sh mapred-site.xml task-log4j.properties
hadoop-metrics2.properties masters
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ sudo vim hadoop-env.sh
[sudo] password for jerry:
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ tail -n 1 hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk

Next is hdfs-site.xml. Set the HDFS replication factor to 1, the namenode's metadata directory, and the datanode's data directory.

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoop/hdfs/data</value>
</property>
</configuration>
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Finally, configure the address and port of the MapReduce JobTracker:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Configure the masters and slaves files. Because this is a pseudo-distributed setup, the name node and the data node are in fact the same machine.

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat masters
localhost
192.168.2.100
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat slaves
localhost
192.168.2.100
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Edit /etc/hosts to map host names to IP addresses:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.100 master
192.168.2.100 slave
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Create the directories referenced in the core-site.xml and hdfs-site.xml files above:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hadooptmp
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/name
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/data
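If mkdir fails with "Permission denied" (creating /hadoop at the filesystem root usually needs root), create the tree with sudo and hand ownership to the user that will run Hadoop — a sketch, assuming that user is jerry:

```shell
# Create the directories as root, then give them to the hadoop user
sudo mkdir -p /hadoop/hadooptmp /hadoop/hdfs/name /hadoop/hdfs/data
sudo chown -R jerry:jerry /hadoop
# The directories should now be owned by jerry
ls -ld /hadoop/hdfs/name
```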

Format HDFS:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./hadoop namenode -format

Start all Hadoop services, including the NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-namenode-ubuntu.out
192.168.68.130: starting datanode, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-datanode-ubuntu.out
localhost: starting datanode, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-datanode-ubuntu.out
localhost: ulimit -a for user jerry
localhost: core file size (blocks, -c)
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e)
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i)
localhost: max locked memory (kbytes, -l)
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n)
localhost: pipe size ( bytes, -p)
localhost: starting secondarynamenode, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-secondarynamenode-ubuntu.out
192.168.68.130: secondarynamenode running as process . Stop it first.
starting jobtracker, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-jobtracker-ubuntu.out
192.168.68.130: starting tasktracker, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-tasktracker-ubuntu.out
localhost: starting tasktracker, logging to /home/jerry/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-jerry-tasktracker-ubuntu.out
localhost: ulimit -a for user jerry
localhost: core file size (blocks, -c)
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e)
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i)
localhost: max locked memory (kbytes, -l)
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n)
localhost: pipe size ( bytes, -p)
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$

Use jps to check whether the Hadoop services started successfully:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ jps
3472 JobTracker
3604 TaskTracker
3084 NameNode
5550 Jps
3247 DataNode
3391 SecondaryNameNode
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$
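The five daemons above can also be checked mechanically. The sketch below feeds a sample of the jps output to a small shell function; the function name check_daemons is mine, not part of Hadoop:

```shell
check_daemons() {
  # $1: the output of `jps`; print OK/MISSING for each expected daemon
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    if echo "$1" | grep -qw "$d"; then
      echo "$d OK"
    else
      echo "$d MISSING"
    fi
  done
}

# Sample jps output copied from the transcript above
sample="3472 JobTracker
3604 TaskTracker
3084 NameNode
3247 DataNode
3391 SecondaryNameNode"
check_daemons "$sample"
```

On a live machine, run `check_daemons "$(jps)"` instead of the sample string.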

Check the status of the Hadoop cluster:

jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./hadoop dfsadmin -report
Configured Capacity: (38.26 GB)
Present Capacity: (30.48 GB)
DFS Remaining: (30.48 GB)
DFS Used: ( KB)
DFS Used%: %
Under replicated blocks:
Blocks with corrupt replicas:
Missing blocks:
-------------------------------------------------
Datanodes available: ( total, dead)

Name: 127.0.0.1:
Decommission Status : Normal
Configured Capacity: (38.26 GB)
DFS Used: ( KB)
Non DFS Used: (7.79 GB)
DFS Remaining: (30.48 GB)
DFS Used%: %
DFS Remaining%: 79.65%
Last contact: Sat Dec :: PST
jerry@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$
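With everything up, a quick way to exercise HDFS and MapReduce together is the WordCount example bundled in hadoop-examples-1.2.1.jar. A sketch, run from the hadoop-1.2.1 directory; the input/output HDFS paths are my choice:

```shell
# Copy the XML config files into HDFS as sample input
bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input
# Run the bundled WordCount example against that input
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount input output
# Inspect the first few lines of the result
bin/hadoop fs -cat 'output/part-*' | head
```

If the job completes and `fs -cat` prints word counts, the pseudo-distributed cluster is working end to end.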

I ran into quite a few problems along the way; here are some useful links:

Hadoop pseudo-distributed mode installation

A summary of Hadoop configuration and runtime errors

Solutions to problems you may hit while setting up the Hadoop environment

The Hadoop datanode fails to start

Adding and removing datanodes and tasktrackers in Hadoop

The hadoop datanode won't start
