Hadoop MapReduce Next Generation - Setting up a Single Node Cluster
Purpose
This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).
Prerequisites
Supported Platforms
- GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
Required Software
Required software for Linux includes:
- Java™ must be installed. Recommended Java versions are described at HadoopJavaVersions.
- ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons.
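A quick way to verify these prerequisites from a shell (exact version output varies by distribution; pgrep prints sshd process IDs only when the daemon is running):
$ java -version
$ ssh -V
$ pgrep sshd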
Installing Software
If your cluster doesn't have the requisite software, you will need to install it.
For example on Ubuntu Linux:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
Download
To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.
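For example, assuming release 2.4.1 (the version used by the example jar later in this document), the archive can be fetched and unpacked as follows; the URL below points at the Apache archive and is illustrative, so substitute the mirror you chose:
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
$ tar -xzf hadoop-2.4.1.tar.gz
$ cd hadoop-2.4.1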
Prepare to Start the Hadoop Cluster
Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest

# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
Try the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script.
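You can also confirm which release you are running:
$ bin/hadoop version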
Now you are ready to start your Hadoop cluster in one of the three supported modes:
- Local (Standalone) Mode
- Pseudo-Distributed Mode
- Fully-Distributed Mode
Standalone Operation
By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging.
The following example copies the XML files from the unpacked configuration directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory.
$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
$ cat output/*
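With a pristine 2.4.1 configuration directory as input, the job typically reports a single match (the count will differ if the config files were edited):
$ cat output/*
1       dfsadmin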
Pseudo-Distributed Operation
Hadoop can also be run on a single-node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process.
Configuration
Use the following:
etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
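Once these files are in place, you can confirm that Hadoop picks them up using the getconf subcommand of the 2.x hdfs CLI:
$ bin/hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000
$ bin/hdfs getconf -confKey dfs.replication
1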
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
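On some systems sshd also insists on strict permissions for the key file; if passphraseless login still fails, tightening them usually helps:
$ chmod 0600 ~/.ssh/authorized_keys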
Execution
The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node.
- Format the filesystem:
$ bin/hdfs namenode -format
- Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
The Hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).
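One way to confirm the daemons started is the JDK's jps tool; start-dfs.sh also launches a SecondaryNameNode, so three daemons appear alongside jps itself (the process IDs below are illustrative):
$ jps
12001 NameNode
12102 DataNode
12203 SecondaryNameNode
12304 Jps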
- Browse the web interface for the NameNode; by default it is available at:
- NameNode - http://localhost:50070/
- Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
- Copy the input files into the distributed filesystem:
$ bin/hdfs dfs -put etc/hadoop input
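A listing should show the copied configuration files in HDFS:
$ bin/hdfs dfs -ls input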
- Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
- Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hdfs dfs -get output output
$ cat output/*

or
View the output files on the distributed filesystem:
$ bin/hdfs dfs -cat output/*
- When you're done, stop the daemons with:
$ sbin/stop-dfs.sh
YARN on Single Node
You can run a MapReduce job on YARN in a pseudo-distributed mode by setting a few parameters and additionally running the ResourceManager daemon and NodeManager daemon.
The following instructions assume that steps 1 through 4 of the above instructions have already been executed.
- Configure parameters as follows:
etc/hadoop/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

etc/hadoop/yarn-site.xml:

<configuration>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
- Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
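Running jps again should now show the YARN daemons alongside the HDFS ones (PIDs illustrative, HDFS entries elided):
$ jps
...
12405 ResourceManager
12506 NodeManager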
- Browse the web interface for the ResourceManager; by default it is available at:
- ResourceManager - http://localhost:8088/
- Run a MapReduce job.
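For example, the grep job from the HDFS walkthrough can be resubmitted; with mapreduce.framework.name set to yarn it now runs as a YARN application. This assumes the input directory from the earlier steps is still in HDFS, and removes any previous output first:
$ bin/hdfs dfs -rm -r output
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'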
- When you're done, stop the daemons with:
$ sbin/stop-yarn.sh