Source: http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm

Hadoop Installation

  1. Creating a New User

    $ su

    password:

    # useradd hadoop -g root

    # passwd hadoop

    New passwd:

    Retype new passwd:

    Edit /etc/sudoers to grant the hadoop user sudo privileges.
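    A minimal sketch of such an entry, added via visudo (adjust it to your site's policy):

    hadoop ALL=(ALL) ALL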

  2. Setting Up SSH

SSH Setup and Key Generation

SSH setup is required to perform different operations on a cluster such as starting, stopping, and distributed daemon shell operations. To authenticate different users of Hadoop, it is required to provide a public/private key pair for a Hadoop user and share it with different users.

The following commands are used for generating a key pair using SSH. Copy the public key from id_rsa.pub to authorized_keys, and grant the owner read and write permissions on the authorized_keys file.

$ ssh-keygen -t rsa

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ chmod 0600 ~/.ssh/authorized_keys
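To confirm that passwordless SSH works (the Hadoop start scripts depend on it), try logging in to localhost; it should not prompt for a password:

$ ssh localhost
$ exit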

  3. Installing Java

Java is the main prerequisite for Hadoop. First of all, you should verify the existence of Java on your system using the command "java -version". The syntax of the java version command is given below.

$ java -version

If everything is in order, it will give you the following output.

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)

If java is not installed in your system, then follow the steps given below for installing java.

Step 1

Download Java (JDK <latest version> - X64.tar.gz) by visiting the following link: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.

Then jdk-7u71-linux-x64.tar.gz will be downloaded into your system.

Step 2

Generally you will find the downloaded Java file in the Downloads folder. Verify it and extract the jdk-7u71-linux-x64.tar.gz file using the following commands.

$ cd Downloads/

$ ls

jdk-7u71-linux-x64.tar.gz

$ tar zxf jdk-7u71-linux-x64.tar.gz

$ ls

jdk1.7.0_71 jdk-7u71-linux-x64.tar.gz

Step 3

To make Java available to all users, you have to move it to the location "/usr/local/". Switch to the root user and type the following commands.

$ su

password:

# mv jdk1.7.0_71 /usr/local/

# exit

Step 4

For setting up the PATH and JAVA_HOME variables, add the following commands to the ~/.bashrc file.

export JAVA_HOME=/usr/local/jdk1.7.0_71

export PATH=$PATH:$JAVA_HOME/bin

Now apply all the changes to the currently running system.

$ source ~/.bashrc
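As a quick sanity check that the variables took effect (assuming the paths above):

$ echo $JAVA_HOME
/usr/local/jdk1.7.0_71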

Step 5

Use the following commands to configure java alternatives:

# alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_71/bin/java 2

# alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_71/bin/javac 2

# alternatives --install /usr/bin/jar jar /usr/local/jdk1.7.0_71/bin/jar 2

# alternatives --set java /usr/local/jdk1.7.0_71/bin/java

# alternatives --set javac /usr/local/jdk1.7.0_71/bin/javac

# alternatives --set jar /usr/local/jdk1.7.0_71/bin/jar
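On systems where the alternatives tool is available (Red Hat-style distributions), you can inspect the current selection with:

# alternatives --display java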

Now verify the java -version command from the terminal as explained above.

 

  4. Downloading Hadoop

Download and extract Hadoop 2.4.1 from Apache software foundation using the following commands.

$ su

password:

# cd /usr/local

# wget http://apache.claz.org/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz

# tar xzf hadoop-2.4.1.tar.gz

# mkdir hadoop

# mv hadoop-2.4.1/* hadoop/

# exit
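At this point /usr/local/hadoop should contain the usual Hadoop 2.x layout; a quick listing should show something along these lines:

$ ls /usr/local/hadoop
bin etc include lib libexec sbin share LICENSE.txt NOTICE.txt README.txt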

 

  5. Installing Hadoop in Standalone Mode

Here we will discuss the installation of Hadoop 2.4.1 in standalone mode.

There are no daemons running and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.

Setting Up Hadoop

You can set the Hadoop environment variables by appending the following commands to the ~/.bashrc file.

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

Before proceeding further, you need to make sure that Hadoop is working fine. Just issue the following command:

$ hadoop version

If everything is fine with your setup, then you should see the following result:

Hadoop 2.4.1
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4

This means your Hadoop standalone mode setup is working fine. By default, Hadoop is configured to run in a non-distributed mode on a single machine.

Example

Let's check a simple example of Hadoop. The Hadoop installation delivers the following example MapReduce jar file, which provides basic MapReduce functionality and can be used for calculations such as estimating the value of Pi, counting words in a given list of files, and so on.

$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar

Let's have an input directory where we will push a few files; our requirement is to count the total number of words in those files. To calculate the total number of words, we do not need to write our own MapReduce program, since the .jar file contains the implementation for word count. You can try other examples using the same .jar file; just issue the following command to list the MapReduce example programs supported by the hadoop-mapreduce-examples-2.4.1.jar file.

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
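Running the jar with no program name prints the list of bundled examples; the (abridged) output looks something like this:

An example program must be given as the first argument.
Valid program names are:
  grep: A map/reduce program that counts the matches of a regex in the input.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  wordcount: A map/reduce program that counts the words in the input files.
  ...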

Step 1

Create temporary content files in the input directory. You can create this input directory anywhere you would like to work.

$ mkdir input

$ cp $HADOOP_HOME/*.txt input

$ ls -l input

It will give the following files in your input directory:

total 24
-rw-r--r-- 1 root root 15164 Feb 21 10:14 LICENSE.txt
-rw-r--r-- 1 root root   101 Feb 21 10:14 NOTICE.txt
-rw-r--r-- 1 root root  1366 Feb 21 10:14 README.txt

These files have been copied from the Hadoop installation home directory. For your experiment, you can have different and large sets of files.

Step 2

Let's start the Hadoop process to count the total number of words in all the files available in the input directory, as follows:

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount input output

Step 3

Step 2 will do the required processing and save the output in the output/part-r-00000 file, which you can check by using:

$ cat output/*

It will list all the words along with their total counts from all the files in the input directory.

"AS 4

"Contribution" 1

"Contributor" 1

"Derivative
1

"Legal 1

"License" 1

"License"); 1

"Licensor" 1

"NOTICE"
1

"Not 1

"Object" 1

"Source"
1

"Work" 1

"You" 1

"Your") 1

"[]" 1

"control" 1

"printed 1

"submitted"
1

(50%)
1

(BIS),
1

(C)
1

(Don't) 1

(ECCN) 1

(INCLUDING 2

(INCLUDING, 2

.............
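You can exercise another bundled example in the same way; for instance, a rough Pi estimate using 2 map tasks and 5 samples per map (assuming the same jar path as above):

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi 2 5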

 

 

  6. Installing Hadoop in Pseudo Distributed Mode

Follow the steps given below to install Hadoop 2.4.1 in pseudo distributed mode.

Step 1: Setting Up Hadoop

You can set the Hadoop environment variables by appending the following commands to the ~/.bashrc file.

export HADOOP_HOME=/usr/local/hadoop

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

export HADOOP_INSTALL=$HADOOP_HOME

Now apply all the changes to the currently running system.

$ source ~/.bashrc

Step 2: Hadoop Configuration

You can find all the Hadoop configuration files in the location "$HADOOP_HOME/etc/hadoop". It is required to make changes in those configuration files according to your Hadoop infrastructure.

$ cd $HADOOP_HOME/etc/hadoop

In order to develop Hadoop programs in Java, you have to reset the Java environment variables in the hadoop-env.sh file by replacing the JAVA_HOME value with the location of Java on your system.

export JAVA_HOME=/usr/local/jdk1.7.0_71
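If you prefer to make this change non-interactively, a sketch using sed (assuming the stock hadoop-env.sh layout and the JDK path used above):

$ sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.7.0_71|' $HADOOP_HOME/etc/hadoop/hadoop-env.sh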

The following is the list of files that you have to edit to configure Hadoop.

core-site.xml

The core-site.xml file contains information such as the port number used for the Hadoop instance, the memory allocated for the file system, the memory limit for storing data, and the size of the read/write buffers.

Open the core-site.xml file and add the following properties between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>

</configuration>
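Note: in Hadoop 2.x the property name fs.default.name is deprecated in favor of fs.defaultFS; both are still honored, so an equivalent modern form of the same property is:

   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
   </property>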

hdfs-site.xml

The hdfs-site.xml file contains information such as the data replication value, the namenode path, and the datanode paths of your local file systems, that is, the places where you want to store the Hadoop infrastructure.

Let us assume the following data.

dfs.replication (data replication value) = 1

(In the paths below, hadoop is the user name; hadoopinfra/hdfs/namenode and hadoopinfra/hdfs/datanode are the directories created for the HDFS file system.)

namenode path = /home/hadoop/hadoopinfra/hdfs/namenode

datanode path = /home/hadoop/hadoopinfra/hdfs/datanode

Open this file and add the following properties between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
   </property>

   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
   </property>

</configuration>
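HDFS will create the namenode and datanode directories above when it is formatted and started, but you can also create them ahead of time as the hadoop user to rule out permission problems:

$ mkdir -p ~/hadoopinfra/hdfs/namenode
$ mkdir -p ~/hadoopinfra/hdfs/datanode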

Note: In the above file, all the property values are user-defined and you can make changes according to your Hadoop infrastructure.

yarn-site.xml

This file is used to configure YARN in Hadoop. Open the yarn-site.xml file and add the following properties between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>

</configuration>

mapred-site.xml

This file is used to specify which MapReduce framework we are using. By default, Hadoop contains only a template of this file, so first copy mapred-site.xml.template to mapred-site.xml using the following command.

$ cp mapred-site.xml.template mapred-site.xml

Open the mapred-site.xml file and add the following properties between the <configuration> and </configuration> tags.

<configuration>

   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>

</configuration>

Verifying Hadoop Installation

The following steps are used to verify the Hadoop installation.

Step 1: Name Node Setup

Set up the namenode using the command "hdfs namenode -format" as follows.

$ cd ~

$ hdfs namenode -format

The expected result is as follows.

10/24/14 21:30:55 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = localhost/192.168.1.11

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.4.1

...

...

10/24/14 21:30:56 INFO common.Storage: Storage directory /home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.

10/24/14 21:30:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

10/24/14 21:30:56 INFO util.ExitUtil: Exiting with status 0

10/24/14 21:30:56 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11

************************************************************/

Step 2: Verifying Hadoop dfs

The following command is used to start dfs. Executing this command will start your Hadoop file system.

$ start-dfs.sh

The expected output is as follows:

10/24/14 21:37:56
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-namenode-localhost.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]

Step 3: Verifying Yarn Script

The following command is used to start the yarn script. Executing this command will start your yarn daemons.

$ start-yarn.sh

The expected output is as follows:

starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-hadoop-resourcemanager-localhost.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-hadoop-nodemanager-localhost.out
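With both HDFS and YARN running, the JDK's jps tool should list all five daemons; expect output along these lines (PIDs will differ):

$ jps
21030 NameNode
21156 DataNode
21318 SecondaryNameNode
21682 ResourceManager
21789 NodeManager
21902 Jps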

Step 4: Accessing Hadoop on Browser

The default port number to access Hadoop is 50070. Use the following URL to access the Hadoop services in a browser.

http://localhost:50070/

Step 5: Verifying All Applications of the Cluster

The default port number to access all applications of the cluster is 8088. Use the following URL to visit this service.

http://localhost:8088/
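When you are finished, the daemons can be stopped with the matching stock scripts, in reverse order:

$ stop-yarn.sh
$ stop-dfs.sh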

 

 
