Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.

Purpose

This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).

Prerequisites

Supported Platforms

  • GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.

Required Software

Required software for Linux includes:

  1. Java™ must be installed. Recommended Java versions are described at HadoopJavaVersions.
  2. ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons.
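
If you are not sure whether these prerequisites are already in place, a quick way to check them is shown below (a minimal sketch; the exact way to query sshd may differ across distributions):

  $ java -version    # prints the installed Java version, if any
  $ pgrep -x sshd    # prints sshd process IDs if the daemon is running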

Installing Software

If your cluster doesn't have the requisite software you will need to install it.

For example on Ubuntu Linux:

  $ sudo apt-get install ssh
  $ sudo apt-get install rsync
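
On RPM-based distributions such as CentOS or Fedora, the equivalent packages can typically be installed with yum (the package names below are an assumption and may vary by release):

  $ sudo yum install openssh-server openssh-clients rsync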

Download

To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.

Prepare to Start the Hadoop Cluster

Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:

  # set to the root of your Java installation
  export JAVA_HOME=/usr/java/latest

  # Assuming your installation directory is /usr/local/hadoop
  export HADOOP_PREFIX=/usr/local/hadoop

Try the following command:

  $ bin/hadoop

This will display the usage documentation for the hadoop script.
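
A related quick check, not part of the original steps, is to print the version of the distribution you unpacked:

  $ bin/hadoop version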

Now you are ready to start your Hadoop cluster in one of the three supported modes:

  • Local (Standalone) Mode
  • Pseudo-Distributed Mode
  • Fully-Distributed Mode

Standalone Operation

By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging.

The following example copies the unpacked conf directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory.

$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
$ cat output/*
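
The same examples jar also includes a wordcount job, which is another quick way to exercise standalone mode (a minimal sketch; the directory and file names here are illustrative):

$ mkdir wcinput
$ echo "hello hadoop hello world" > wcinput/words.txt
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount wcinput wcoutput
$ cat wcoutput/*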

Pseudo-Distributed Operation

Hadoop can also be run on a single-node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process.

Configuration

Use the following:

etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
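
To confirm that these values are picked up, you can query them with the getconf subcommand of the hdfs tool; each command should echo the value configured above:

$ bin/hdfs getconf -confKey fs.defaultFS
$ bin/hdfs getconf -confKey dfs.replication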

Set up passphraseless ssh

Now check that you can ssh to the localhost without a passphrase:

 $ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
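
Depending on how sshd is configured, you may also need to tighten the permissions on the authorized_keys file before key-based login is accepted (an additional step not shown above):

$ chmod 0600 ~/.ssh/authorized_keys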

Execution

The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node.

  1. Format the filesystem:

      $ bin/hdfs namenode -format
  2. Start NameNode daemon and DataNode daemon:
      $ sbin/start-dfs.sh

    The Hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs). To confirm the daemons started, see the jps note after this list.

  3. Browse the web interface for the NameNode; by default it is available at:
    • NameNode - http://localhost:50070/
  4. Make the HDFS directories required to execute MapReduce jobs:
      $ bin/hdfs dfs -mkdir /user
    $ bin/hdfs dfs -mkdir /user/<username>
  5. Copy the input files into the distributed filesystem:
      $ bin/hdfs dfs -put etc/hadoop input
  6. Run some of the examples provided:
      $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
  7. Examine the output files:

    Copy the output files from the distributed filesystem to the local filesystem and examine them:

      $ bin/hdfs dfs -get output output
    $ cat output/*

    or

    View the output files on the distributed filesystem:

      $ bin/hdfs dfs -cat output/*
  8. When you're done, stop the daemons with:
      $ sbin/stop-dfs.sh
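
While the daemons are running (any time after step 2), a quick way to see which Hadoop processes are up is the jps tool that ships with the JDK; after start-dfs.sh you would typically see NameNode, DataNode and SecondaryNameNode entries:

$ jps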

YARN on Single Node

You can run a MapReduce job on YARN in a pseudo-distributed mode by setting a few parameters and additionally running the ResourceManager daemon and the NodeManager daemon.

The following instructions assume that steps 1 through 4 of the above instructions have already been executed.

  1. Configure parameters as follows:

    etc/hadoop/mapred-site.xml:

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>

    etc/hadoop/yarn-site.xml:

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>
  2. Start ResourceManager daemon and NodeManager daemon:
      $ sbin/start-yarn.sh
  3. Browse the web interface for the ResourceManager; by default it is available at:
    • ResourceManager - http://localhost:8088/
  4. Run a MapReduce job (for example, the grep job shown after this list).
  5. When you're done, stop the daemons with:
      $ sbin/stop-yarn.sh
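
For step 4, the grep example used earlier works unchanged; with mapreduce.framework.name set to yarn it is submitted through the ResourceManager instead of being run by the local runner. A minimal sketch, assuming the input directory created in steps 4-5 of the previous section still exists in HDFS (a fresh output directory is used because the output path must not already exist):

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output2 'dfs[a-z.]+'
$ bin/hdfs dfs -cat output2/*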
