Installing Hadoop on Mac OSX Yosemite Tutorial Part 1.

Install HomeBrew
Installing Hadoop
Configuring Hadoop
SSH Localhost
Running Hadoop
Download Examples
Good to know
Errors

Install HomeBrew

Found here: http://brew.sh/, or simply paste this into the terminal:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Installing Hadoop

$ brew install hadoop

Hadoop will be installed in the following directory:
/usr/local/Cellar/hadoop
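
To confirm the installation took, the version subcommand is a quick check (output abbreviated; the exact build details will vary):

$ hadoop version
> Hadoop 2.6.0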

Configuring Hadoop

Edit hadoop-env.sh

The file is located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hadoop-env.sh,
where 2.6.0 is the installed Hadoop version.
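
If you are unsure which version Homebrew installed, listing the Cellar directory shows it (the version below is just an example):

$ ls /usr/local/Cellar/hadoop
> 2.6.0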

Find the line with

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

and change it to

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

Edit core-site.xml

The file is located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/core-site.xml

 <configuration>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
     <description>A base for other temporary directories.</description>
   </property>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:9000</value>
   </property>
 </configuration>
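
Note that hadoop.tmp.dir must point at a directory that actually exists before HDFS is formatted; a quick way to create it, using the same path as the config above:

$ mkdir -p /usr/local/Cellar/hadoop/hdfs/tmp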

Edit mapred-site.xml

The file is located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/mapred-site.xml and is blank by default.

 <configuration>
   <property>
     <name>mapred.job.tracker</name>
     <value>localhost:9010</value>
   </property>
 </configuration>

Edit hdfs-site.xml

The file is located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hdfs-site.xml

 <configuration>
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
 </configuration>

To simplify life, edit your ~/.profile using vim or your favorite editor and add the following two aliases:

alias hstart="/usr/local/Cellar/hadoop/2.6.0/sbin/start-dfs.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/start-yarn.sh"
alias hstop="/usr/local/Cellar/hadoop/2.6.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/stop-dfs.sh"

and execute

$ source ~/.profile

in the terminal to update.
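
If you would rather not hard-code the Hadoop version into the aliases, a variant that resolves the current keg through Homebrew's opt symlink should also work (a sketch, assuming Homebrew's standard opt/Cellar layout; the path is expanded when the profile is sourced):

alias hstart="$(brew --prefix hadoop)/sbin/start-dfs.sh;$(brew --prefix hadoop)/sbin/start-yarn.sh"
alias hstop="$(brew --prefix hadoop)/sbin/stop-yarn.sh;$(brew --prefix hadoop)/sbin/stop-dfs.sh"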

Before we can run Hadoop, we first need to format HDFS using

$ hdfs namenode -format

SSH Localhost

Nothing needs to be done here if you have already generated SSH keys. To verify, just check for the existence of the ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files. If they don't exist, the keys can be generated using

$ ssh-keygen -t rsa
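
A non-interactive variant that accepts the default location and sets an empty passphrase (reasonable for a local development box; the flags are standard OpenSSH):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa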

Enable Remote Login
Open “System Preferences” -> “Sharing” and check “Remote Login”.
Authorize SSH Keys
To allow your system to accept logins, we have to make it aware of the keys that will be used:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
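
sshd is picky about file permissions; if the key login still fails, tightening them usually helps:

$ chmod 0700 ~/.ssh
$ chmod 0600 ~/.ssh/authorized_keys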

Let’s try logging in.

$ ssh localhost
> Last login: Fri Mar  6 20:30:53 2015
$ exit

Running Hadoop

Now we can run Hadoop just by typing

$ hstart

and stopping using

$ hstop

Download Examples

To run examples, Hadoop needs to be started.

Hadoop Examples 1.2.1 (Old)
Hadoop Examples 2.6.0 (Current)

Test them out using:

$ hadoop jar <path to the hadoop-examples file> pi 10 100
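
The same jar ships a number of other examples, wordcount among them. A hedged sketch of running it end to end (the file and directory names here are illustrative, not from the original):

$ echo "hello hadoop hello hdfs" > input.txt
$ hdfs dfs -mkdir -p /user/$(whoami)
$ hdfs dfs -put input.txt input.txt
$ hadoop jar <path to the hadoop-examples file> wordcount input.txt wordcount-out
$ hdfs dfs -cat wordcount-out/part-r-00000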

Good to know

We can access the Hadoop web interfaces by connecting to

NameNode (HDFS): http://localhost:50070
ResourceManager: http://localhost:8088
Specific Node Information: http://localhost:8042

The NameNode interface can be used to browse the HDFS filesystem and look at any resulting output files.

Errors

To resolve ‘WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable’, see the discussion on Stackoverflow.com.

Connection Refused after installing Hadoop

$ hdfs dfs -ls
> 15/03/06 20:13:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> ls: Call From spaceship.local/192.168.1.65 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:   http://wiki.apache.org/hadoop/ConnectionRefused
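
If you hit this, a quick first check is whether anything is listening on the NameNode port at all (lsof is standard on OS X; the port matches core-site.xml above, and no output means nothing is listening):

$ lsof -i :9000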

The start-up scripts, such as start-all.sh, do not give you specifics about why a startup failed; some of the time they won't even notify you that a startup failed at all. To troubleshoot a service that isn't functioning, execute it manually:

$ hdfs namenode
> 15/03/06 20:18:31 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
> 15/03/06 20:18:31 FATAL namenode.NameNode: Failed to start namenode.

and the fix is to format the namenode:

$ hadoop namenode -format

To verify the problem is fixed, run

$ hstart
$ hdfs dfs -ls /

If ‘hdfs dfs -ls’ gives you an error

> ls: `.': No such file or directory

then we need to create the default home directory Hadoop expects, i.e. /user/<output of whoami>/:

$ whoami
> spaceship
$ hdfs dfs -mkdir -p /user/spaceship
> 15/03/06 20:31:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:31:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -put book.txt
> 15/03/06 20:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:32:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   1 marekbejda supergroup      29578 2015-03-06 20:32 book.txt

JPS and Nothing Works…

Certain builds of Java 1.8 (e.g. 1.8.0_40) seem to be missing a class that breaks YARN. Check your logs:

$ jps
> 5935 Jps
$ vim /usr/local/Cellar/hadoop/2.6.0/libexec/logs/yarn-*
> 2015-03-07 16:21:32,934 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.lang.NoClassDefFoundError: sun/management/ExtendedPlatformComponent
..
> 2015-03-07 16:21:32,937 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2015-03-07 16:21:32,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2014-November/029818.html

Either downgrade to Java 1.7 or use an earlier 1.8 build that still ships the class; I'm currently running 1.8.0_20:

$ java -version
> java version "1.8.0_20"
> Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
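
On OS X, the bundled java_home helper can list the installed JDKs:

$ /usr/libexec/java_home -V

and hadoop-env.sh can then pin Hadoop to a specific one (the version argument should match whatever build you actually have installed):

export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)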

Reposted from: http://amodernstory.com/2014/09/23/installing-hadoop-on-mac-osx-yosemite/#hadoop

HBase (reference: http://freddy.cellcore.org/post/52568231952/hadoop-hbase-on-osx-10-8-mountain-lion)

Downloading HBase

Now that you have successfully set up and launched Hadoop, it's time to install HBase. As with Hadoop, you have two options for getting HBase: either go to the HBase distribution site, choose a mirror close to your location and download it (then copy it to $HD_HOME), or execute the following commands:
cd ~/Downloads
curl http://apache.websitebeheerjd.nl/hbase/stable/hbase-0.94.8.tar.gz > hbase-0.94.8.tar.gz
mv hbase-0.94.8.tar.gz $HD_HOME/
cd $HD_HOME
tar xvzf hbase-0.94.8.tar.gz
ln -s hbase-0.94.8 hbase

Note: simply running brew install hbase takes care of all of the above and saves a lot of hassle.

Configuring HBase

Configuring a very basic HBase instance is quite easy: you need to modify only two files, both located under $HBASE_HOME/conf.

hbase-env.sh

The file hbase-env.sh sets the execution environment for HBase, the same way hadoop-env.sh does for Hadoop. Add the following lines to hbase-env.sh:

export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
export HBASE_OPTS="-Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

hbase-site.xml

HBase properties are governed by the file hbase-site.xml. The only configuration parameter you need to specify to make HBase work is hbase.rootdir, the HBase root directory. This directory can live either on the local filesystem (file:///) or on an HDFS instance (hdfs://). In this particular case we are pointing HBase at our newly installed HDFS instance. Other properties that can be set in this file can be found here.
HBase requires ZooKeeper to work. By default HBase comes with an embedded instance of ZooKeeper, which relieves us of the task of setting one up ourselves. If you want to know more about ZooKeeper, its configuration, and its role in the HBase architecture, check out this article.
 <configuration>
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://localhost:9000/hbase</value>
   </property>
 </configuration>
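
As noted above, hbase.rootdir can also point at the local filesystem instead of HDFS; a minimal standalone sketch (the path is illustrative, not from the original):

 <configuration>
   <property>
     <name>hbase.rootdir</name>
     <value>file:///usr/local/var/hbase</value>
   </property>
 </configuration>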

Running HBase

Now you are ready to launch HBase. To start it, just execute the following command:
$HBASE_HOME/bin/start-hbase.sh
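
To check that the daemon actually came up, jps should now list an HMaster process (the PID below is illustrative):

$ jps
> 12345 HMaster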

Test it

To test your HBase installation, launch the HBase shell and play with it (heavily inspired by http://hbase.apache.org/book/quickstart.html). To launch the HBase shell, execute the following command:
$HBASE_HOME/bin/hbase shell
You should be dropped into the HBase interactive interpreter:
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.94.8, r1485407, Wed May 22 20:53:13 UTC 2013
Create a new table and put some values into it:
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
Scan the table values:
hbase(main):007:0> scan 'test'
ROW        COLUMN+CELL
row1       column=cf:a, timestamp=1288380727188, value=value1
row2       column=cf:b, timestamp=1288380738440, value=value2
row3       column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds
Get a value through its key:
hbase(main):008:0> get 'test', 'row1'
COLUMN      CELL
cf:a        timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds
Disable and drop (delete) the table:
hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds 
If those commands executed successfully, your HBase instance is working properly.

HBase web interfaces

HBase Master web UI: http://localhost:60010/
HBase RegionServer web UI: http://localhost:60030/

Stopping HBase

$HBASE_HOME/bin/stop-hbase.sh
