I Install the JDK

  Download the JDK      jdk-8u112-linux-i586.tar.gz

  Unpack the JDK     hadoop@ubuntu:/soft$ tar -zxvf jdk-8u112-linux-i586.tar.gz

  Configure the environment variables (see the sketch below)
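
  A minimal sketch of the lines typically appended to /etc/profile, assuming the JDK was unpacked to /soft/jdk1.8.0_112 as in the step above:

    # appended to /etc/profile (JDK path taken from the unpack step above)
    export JAVA_HOME=/soft/jdk1.8.0_112
    export PATH=$PATH:$JAVA_HOME/bin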

  Apply the configuration  hadoop@ubuntu:/soft/jdk1.8.0_112$ source /etc/profile

  Verify the configuration: hadoop@ubuntu:/soft/jdk1.8.0_112$ java

Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)
where options include:
    -d32          use a 32-bit data model if available
    -d64          use a 64-bit data model if available
    -client       to select the "client" VM
    -server       to select the "server" VM
    -minimal      to select the "minimal" VM
                  The default VM is client.

    -cp <class search path of directories and zip/jar files>
    -classpath <class search path of directories and zip/jar files>
                  A : separated list of directories, JAR archives,
                  and ZIP archives to search for class files.
    -D<name>=<value>
                  set a system property
    -verbose:[class|gc|jni]
                  enable verbose output
    -version      print product version and exit
    -version:<value>
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  require the specified version to run
    -showversion  print product version and continue
    -jre-restrict-search | -no-jre-restrict-search
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  include/exclude user private JREs in the version search
    -? -help      print this help message
    -X            print help on non-standard options
    -ea[:<packagename>...|:<classname>]
    -enableassertions[:<packagename>...|:<classname>]
                  enable assertions with specified granularity
    -da[:<packagename>...|:<classname>]
    -disableassertions[:<packagename>...|:<classname>]
                  disable assertions with specified granularity
    -esa | -enablesystemassertions
                  enable system assertions
    -dsa | -disablesystemassertions
                  disable system assertions
    -agentlib:<libname>[=<options>]
                  load native agent library <libname>, e.g. -agentlib:hprof
                  see also, -agentlib:jdwp=help and -agentlib:hprof=help
    -agentpath:<pathname>[=<options>]
                  load native agent library by full pathname
    -javaagent:<jarpath>[=<options>]
                  load Java programming language agent, see java.lang.instrument
    -splash:<imagepath>
                  show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.

Verify the configuration: hadoop@ubuntu:/soft/jdk1.8.0_112$ javac

Usage: javac <options> <source files>
where possible options include:
  -g                         Generate all debugging info
  -g:none                    Generate no debugging info
  -g:{lines,vars,source}     Generate only some debugging info
  -nowarn                    Generate no warnings
  -verbose                   Output messages about what the compiler is doing
  -deprecation               Output source locations where deprecated APIs are used
  -classpath <path>          Specify where to find user class files and annotation processors
  -cp <path>                 Specify where to find user class files and annotation processors
  -sourcepath <path>         Specify where to find input source files
  -bootclasspath <path>      Override location of bootstrap class files
  -extdirs <dirs>            Override location of installed extensions
  -endorseddirs <dirs>       Override location of endorsed standards path
  -proc:{none,only}          Control whether annotation processing and/or compilation is done.
  -processor <class1>[,<class2>,<class3>...] Names of the annotation processors to run; bypasses default discovery process
  -processorpath <path>      Specify where to find annotation processors
  -parameters                Generate metadata for reflection on method parameters
  -d <directory>             Specify where to place generated class files
  -s <directory>             Specify where to place generated source files
  -h <directory>             Specify where to place generated native header files
  -implicit:{none,class}     Specify whether or not to generate class files for implicitly referenced files
  -encoding <encoding>       Specify character encoding used by source files
  -source <release>          Provide source compatibility with specified release
  -target <release>          Generate class files for specific VM version
  -profile <profile>         Check that API used is available in the specified profile
  -version                   Version information
  -help                      Print a synopsis of standard options
  -Akey[=value]              Options to pass to annotation processors
  -X                         Print a synopsis of nonstandard options
  -J<flag>                   Pass <flag> directly to the runtime system
  -Werror                    Terminate compilation if warnings occur
  @<filename>                Read options and filenames from file

II Install SSH

  Download and install: hadoop@ubuntu:/soft/jdk1.8.0_112$ sudo apt-get install ssh

  Configure passwordless login to the local machine

    Generate the key                  hadoop@ubuntu:/soft/jdk1.8.0_112$ ssh-keygen -t rsa -P ""  (press Enter when prompted)

    Append the public key to the authorized keys file  hadoop@ubuntu:/soft/jdk1.8.0_112$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
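
    Depending on how sshd is configured, key-based login may also require strict permissions on ~/.ssh and authorized_keys; a minimal sketch:

      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys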

  Verify SSH

    ssh localhost   If you can log in without being asked for a password, the setup succeeded.

III Install and configure Hadoop

  1 Download Hadoop   http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

  2 Unpack Hadoop   hadoop@ubuntu:/soft/hadoop$ tar -zxvf hadoop-2.7.3.tar.gz

  3 Edit hadoop-env.sh and add the JAVA_HOME path

    hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3/etc/hadoop$ vim hadoop-env.sh

    Add: export JAVA_HOME=/soft/jdk1.8.0_112

  4 Edit core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/soft/hadoop/hdfs-dir</value>
  </property>

</configuration>
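
  hadoop.tmp.dir points at /soft/hadoop/hdfs-dir; a sketch of creating that directory up front so the hadoop user owns it:

    mkdir -p /soft/hadoop/hdfs-dir
    chown -R hadoop:hadoop /soft/hadoop/hdfs-dir   # or sudo chown, if the parent directory is root-owned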

  5 Edit hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

  6 Create mapred-site.xml (see the note below)
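
    The 2.7.3 tarball ships only a template for this file; one way to create it is to copy the template and then edit it (a sketch):

      hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3/etc/hadoop$ cp mapred-site.xml.template mapred-site.xml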

  <configuration>

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

  </configuration>

  7 Configure yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

  8 Format the HDFS filesystem

    hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ bin/hdfs namenode -format

  9 Start the Hadoop daemons

    hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ sbin/start-all.sh
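
    start-all.sh still works in 2.7.3 but is deprecated; the equivalent explicit form starts HDFS and YARN separately:

      hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ sbin/start-dfs.sh
      hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ sbin/start-yarn.sh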

  10  Check that the processes started successfully (a NameNode process should normally appear in this list as well; if it is missing, check the NameNode log under logs/ before continuing)

hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ jps
  6373 SecondaryNameNode
  6859 Jps
  6523 ResourceManager
  6766 NodeManager
  6206 DataNode

IV Verify that the installation succeeded

Open a browser

  • Go to http://localhost:8088 for the ResourceManager web UI
  • Go to http://localhost:50070 for the HDFS (NameNode) web UI

V Test and validate

1 Create HDFS directories

$ bin/hdfs dfs -mkdir /user

$ bin/hdfs dfs -mkdir /user/hadoop

$ bin/hdfs dfs -mkdir /user/hadoop/input
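
  The three calls above can also be collapsed into one by letting -p create the missing parent directories:

    $ bin/hdfs dfs -mkdir -p /user/hadoop/input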

2 Load the data into the input directory on HDFS

     $ bin/hdfs dfs -put /etc/protocols /user/hadoop/input

3    Run the Hadoop WordCount example (word frequency count)

# If an output directory is left over from a previous run, Hadoop refuses to overwrite it and the job will fail, so delete the old output directory first (see the command below)
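
  A sketch of removing a leftover output directory (the relative path resolves to /user/hadoop/output for the hadoop user):

    $ bin/hdfs dfs -rm -r output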

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output

4 View the results

hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ bin/hdfs dfs -cat output/part-r-00000
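
  To keep a local copy instead of only printing the result, the output directory can be pulled down with -get (a sketch; ./wordcount-output is an arbitrary local path):

    hadoop@ubuntu:/soft/hadoop/hadoop-2.7.3$ bin/hdfs dfs -get output ./wordcount-output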

                
