Hadoop Ecosystem
There are many open-source projects in the Hadoop ecosystem that are widely used by many companies. This article walks through the installation of the following components:
- Install JDK
- Install Hadoop
- Install HBase
- Install Hive
- Install Spark
- Install Impala
- Install Sqoop
- Install Alluxio
Install JDK
Step 1: download the package from the official site and choose the appropriate version.
Step 2: unzip the package and copy it to the destination folder
tar zxf jdk-8u111-linux-x64.tar.gz
cp -R jdk1.8.0_111 /usr/share
Step 3: set PATH and JAVA_HOME
vi ~/.bashrc
export JAVA_HOME=/usr/share/jdk1.8.0_111
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source ~/.bashrc
Step 4: restart the shell (or reboot the server) so the changes take effect in every session
Step 5: check java version
java -version
javac -version
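A quick sanity check of the environment set above (the paths assume the /usr/share/jdk1.8.0_111 location used in this article):
echo $JAVA_HOME
which java
Both java -version and javac -version should report the release that was installed, e.g. 1.8.0_111.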
Install Hadoop
Follow the steps below to install Hadoop in single-node (pseudo-distributed) mode.
Step 1: download the package from the Apache site
Step 2: unzip the package and copy it to the destination folder
tar zxf hadoop-2.7.3.tar.gz
cp -R hadoop-2.7.3/* /usr/share/hadoop
Step 3: create a 'hadoop' folder under '/home'
mkdir /home/hadoop
Step 4: set PATH and HADOOP_HOME
vi ~/.bashrc
export HADOOP_HOME=/usr/share/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
source ~/.bashrc
Step 5: check hadoop version
hadoop version
Step 6: configure Hadoop HDFS, core-site, YARN and MapReduce
cd $HADOOP_HOME/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/share/jdk1.8.0_111
vi core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
vi hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
vi yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Step 7: initialize hadoop namenode
hdfs namenode -format
Step 8: start hadoop
start-dfs.sh
start-yarn.sh
Step 9: open the Hadoop web UIs to check that it works
http://localhost:50070/
http://localhost:8088/
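Besides the web UIs, a quick sanity check is to list the running daemons with jps and perform a trivial HDFS operation; the user directory below is only an example:
jps
jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager.
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -ls /user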
Install HBase
Follow the steps below to install HBase in standalone mode.
Step 1: check that Hadoop is installed
hadoop version
Step 2: download version 1.2.4 of HBase from the Apache site
Step 3: unzip the package and copy it to the destination folder
tar zxf hbase-1.2.4-bin.tar.gz
cp -R hbase-1.2.4/* /usr/share/hbase
Step 4: configure hbase env
cd /usr/share/hbase/conf
vi hbase-env.sh
export JAVA_HOME=/usr/share/jdk1.8.0_111
Step 5: modify hbase-site.xml
vi hbase-site.xml
<configuration>
<!-- Set the path where you want HBase to store its files. -->
<property>
<name>hbase.rootdir</name>
<value>file:/home/hadoop/HBase/HFiles</value>
</property>
<!-- Set the path where you want HBase to store its built-in ZooKeeper files. -->
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/zookeeper</value>
</property>
</configuration>
Step 6: start HBase and check the HBase root directory
cd /usr/share/hbase/bin
./start-hbase.sh
ls /home/hadoop/HBase/HFiles
With the file: rootdir configured above, HBase keeps its data on the local filesystem; use hadoop fs -ls /hbase only if hbase.rootdir points at HDFS.
Step 7: check HBase via the web interface
http://localhost:16010
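As an optional smoke test, the HBase shell can create and scan a small table; the table name 'test' and column family 'cf' below are arbitrary examples:
cd /usr/share/hbase/bin
./hbase shell
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> disable 'test'
hbase(main):005:0> drop 'test'
hbase(main):006:0> exit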
Install Hive
Step 1: download version 1.2.1 of Hive from the Apache site
Step 2: unzip the package and copy it to the destination folder
tar zxf apache-hive-1.2.1-bin.tar.gz
cp -R apache-hive-1.2.1-bin/* /usr/share/hive
Step 3: set HIVE_HOME
vi ~/.bashrc
export HIVE_HOME=/usr/share/hive
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/usr/share/hadoop/lib/*:.
export CLASSPATH=$CLASSPATH:/usr/share/hive/lib/*:.source ~/.bashrc
Step 4: configure env for hive
cd $HIVE_HOME/conf
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
export HADOOP_HOME=/usr/share/hadoop
Step 5: download version 10.12.1.1 of Apache Derby from the Apache site
Step 6: unzip the Derby package and copy it to the destination folder
tar zxf db-derby-10.12.1.1-bin.tar.gz
cp -R db-derby-10.12.1.1-bin/* /usr/share/derby
Step 7: set up DERBY_HOME
vi ~/.bashrc
export DERBY_HOME=/usr/share/derby
export PATH=$PATH:$DERBY_HOME/bin
export CLASSPATH=$CLASSPATH:$DERBY_HOME/lib/derby.jar:$DERBY_HOME/lib/derbytools.jar
source ~/.bashrc
Step 8: create a directory for the metastore database
mkdir $DERBY_HOME/data
Step 9: configure the Hive metastore
cd $HIVE_HOME/conf
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby://localhost:1527/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore </description>
</property>
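Note that the connection URL above points at a Derby network server on port 1527, so the server has to be running before Hive is started. One way to launch it in the background, assuming the DERBY_HOME settings from step 7 and using the data directory created in step 8 as the working directory:
cd $DERBY_HOME/data
nohup startNetworkServer -h 0.0.0.0 &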
Step 10: create a file named jpox.properties and add the following content into it
touch jpox.properties
vi jpox.properties
javax.jdo.PersistenceManagerFactoryClass = org.jpox.PersistenceManagerFactoryImpl
org.jpox.autoCreateSchema = false
org.jpox.validateTables = false
org.jpox.validateColumns = false
org.jpox.validateConstraints = false
org.jpox.storeManagerType = rdbms
org.jpox.autoCreateSchema = true
org.jpox.autoStartMechanismMode = checked
org.jpox.transactionIsolation = read_committed
javax.jdo.option.DetachAllOnCommit = true
javax.jdo.option.NontransactionalRead = true
javax.jdo.option.ConnectionDriverName = org.apache.derby.jdbc.ClientDriver
javax.jdo.option.ConnectionURL = jdbc:derby://hadoop1:1527/metastore_db;create = true
javax.jdo.option.ConnectionUserName = APP
javax.jdo.option.ConnectionPassword = mine
Step 11: enter the Hive shell and execute the command 'show tables'
cd $HIVE_HOME/bin
hive
hive> show tables;
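On a fresh install, 'show tables' returns an empty list. As an optional check that the Derby-backed metastore works, create and drop a throwaway table; the table name and columns below are arbitrary examples:
hive> CREATE TABLE demo (id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> SHOW TABLES;
hive> DESCRIBE demo;
hive> DROP TABLE demo;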
Install Spark
Step 1: download version 2.12.0 of Scala from the Scala site
Step 2: unzip the package and copy it to the destination folder
tar zxf scala-2.12.0.tgz
cp -R scala-2.12.0/* /usr/share/scala
Step 3: set PATH for scala
vi ~/.bashrc
export PATH=$PATH:/usr/share/scala/bin
source ~/.bashrc
Step 4: check scala version
scala -version
Step 5: download version 2.0.2 of Spark from the Apache site
Step 6: unzip the package and copy it to the destination folder
tar zxf spark-2.0.2-bin-hadoop2.7.tgz
cp -R spark-2.0.2-bin-hadoop2.7/* /usr/share/spark
Step 7: set up PATH
vi ~/.bashrc
export PATH=$PATH:/usr/share/spark/bin
source ~/.bashrc
Step 8: start spark-shell to see if Spark is installed successfully
spark-shell
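Inside spark-shell, a one-line job confirms that the Scala REPL and the Spark context are wired up; sc is the SparkContext that spark-shell creates automatically:
scala> sc.parallelize(1 to 100).reduce(_ + _)
scala> :quit
The first expression should return res0: Int = 5050.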
Install Impala
Step 1: download version 2.7.0 of Impala from the Impala site
Step 2: unzip the package and copy it to the destination folder
tar zxf apache-impala-incubating-2.7.0.tar.gz
cp -R apache-impala-incubating-2.7.0/* /usr/share/impala
Step 3: set PATH and IMPALA_HOME
vi ~/.bashrc
export IMPALA_HOME=/usr/share/impala
export PATH=$PATH:/usr/share/impala
source ~/.bashrc
Step 4: to be continued...
Install Sqoop
Precondition: Hadoop (HDFS and MapReduce) must already be installed
Step 1: download version 1.4.6 of Sqoop from the Apache site
Step 2: unzip the package and copy it to the destination folder
tar zxf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
cp -R sqoop-1.4.6.bin__hadoop-2.0.4-alpha/* /usr/share/sqoop
Step 3: set SQOOP_HOME and PATH
vi ~/.bashrc
export SQOOP_HOME=/usr/share/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
source ~/.bashrc
Step 4: configure sqoop
cd $SQOOP_HOME/conf
mv sqoop-env-template.sh sqoop-env.sh
vi sqoop-env.sh
export HADOOP_COMMON_HOME=/usr/share/hadoop
export HADOOP_MAPRED_HOME=/usr/share/hadoop
Step 5: download version 5.1.40 of mysql-connector-java from the MySQL site
Step 6: unzip the package and move the connector jar into the Sqoop lib folder
tar -zxf mysql-connector-java-5.1.40.tar.gz
cd mysql-connector-java-5.1.40
mv mysql-connector-java-5.1.40-bin.jar /usr/share/sqoop/lib
Step 7: verify that Sqoop is installed successfully
cd $SQOOP_HOME/bin
sqoop-version
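As a first end-to-end check, Sqoop can list the databases on a MySQL server through the connector installed above; the host, port and credentials below are placeholders for your own MySQL instance:
sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root -P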
Install Alluxio
Step 1: download version 1.3.0 of Alluxio from the Alluxio site
Step 2: unzip the package and copy it to the destination folder
tar zxf alluxio-1.3.0-hadoop2.7-bin.tar.gz
cp -R alluxio-1.3.0-hadoop2.7-bin/* /usr/share/alluxio
Step 3: create and edit conf/alluxio-env.sh
cd /usr/share/alluxio
bin/alluxio bootstrapConf localhost local
vi conf/alluxio-env.sh
export ALLUXIO_UNDERFS_ADDRESS=/tmp
Step 4: format alluxio file system and start alluxio
cd /usr/share/alluxio
bin/alluxio format
bin/alluxio-start.sh local
Step 5: verify that Alluxio is running by visiting http://localhost:19999
Step 6: run predefined tests
cd /usr/share/alluxio
bin/alluxio runTests
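Besides runTests, the Alluxio command-line client allows a quick manual check; the LICENSE file shipped with Alluxio is used here only as a convenient sample file:
cd /usr/share/alluxio
bin/alluxio fs copyFromLocal LICENSE /LICENSE
bin/alluxio fs ls /
bin/alluxio fs cat /LICENSE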