Environment: Tencent Cloud, CentOS 7

1. Download Hadoop

http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

2. Extract the archive

tar -xvf hadoop-2.7.7.tar.gz -C /usr/java

3. Edit hadoop-2.7.7/etc/hadoop/hadoop-env.sh

Point it at the JDK installation:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8

4. Add the Hadoop environment variables

Append to /etc/profile:

HADOOP_HOME=/usr/java/hadoop-2.7.7
MAVEN_HOME=/usr/java/maven3.6
RABBITMQ_HOME=/usr/java/rabbitmq_server
TOMCAT_HOME=/usr/java/tomcat8.5
JAVA_HOME=/usr/java/jdk1.8
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin:$TOMCAT_HOME/bin:$RABBITMQ_HOME/sbin:$MAVEN_HOME/bin:$HADOOP_HOME/bin
export PATH JAVA_HOME CLASSPATH TOMCAT_HOME RABBITMQ_HOME MAVEN_HOME HADOOP_HOME

Reload so the variables take effect: source /etc/profile
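The exports can be rehearsed by collecting them in one file and sourcing it; a minimal sketch using a temp file (paths are the ones assumed throughout this guide; note that also adding $HADOOP_HOME/sbin to PATH makes the start/stop scripts callable from anywhere, which the listing above does not do):

```shell
# Sketch: gather the Hadoop-related exports in one file and source it.
cat > /tmp/hadoop-env-demo.sh <<'EOF'
export HADOOP_HOME=/usr/java/hadoop-2.7.7
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
EOF
. /tmp/hadoop-env-demo.sh
echo "$HADOOP_HOME"   # prints /usr/java/hadoop-2.7.7
```

In the real setup the same lines simply live at the end of /etc/profile.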

5. Edit hadoop-2.7.7/etc/hadoop/core-site.xml

Add inside the <configuration> element:

<!-- Address the HDFS master (NameNode) listens on -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<!-- Where Hadoop stores its runtime files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/java/hadoop-2.7.7/tmp</value>
</property>

6. Edit hadoop-2.7.7/etc/hadoop/hdfs-site.xml

<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/java/hadoop-2.7.7/hdfs/name</value>
<description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/java/hadoop-2.7.7/hdfs/data</value>
<description>Where DataNodes physically store their blocks</description>
</property>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

7. Passwordless SSH login

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
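The step can be rehearsed safely in a scratch directory before touching ~/.ssh; note that `-t rsa` produces `id_rsa.pub`, which is the file to append:

```shell
# Rehearsal in a throwaway directory; the real commands target ~/.ssh.
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q      # non-interactive: empty passphrase
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"  # authorize the public key
chmod 600 "$tmp/authorized_keys"                 # sshd rejects overly permissive files
grep -c '^ssh-rsa' "$tmp/authorized_keys"        # prints 1
rm -rf "$tmp"
```

Afterwards `ssh localhost` should log in without a password prompt.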

8. Starting and stopping HDFS

./bin/hdfs namenode -format  # one-time initialization; the NameNode must be formatted first

A line like the following indicates the format succeeded:

19/08/13 09:46:05 INFO common.Storage: Storage directory /usr/java/hadoop-2.7.7/hdfs/name has been successfully formatted.

./sbin/start-dfs.sh  # start Hadoop

(base) [root@medecineit hadoop-2.7.7]# ./sbin/start-dfs.sh
Starting namenodes on [localhost]
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: starting namenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-namenode-medecineit.out
localhost: starting datanode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-datanode-medecineit.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-medecineit.out

./sbin/stop-dfs.sh  # stop Hadoop

9. Check that the expected daemons are running

Use the jps command:
(base) [root@medecineit hadoop-2.7.7]# jps
4416 NameNode
4916 Jps
4740 SecondaryNameNode
4553 DataNode
975 Bootstrap

NameNode, SecondaryNameNode, and DataNode all started successfully. (Bootstrap is the Tomcat process from this machine, unrelated to Hadoop.)
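The jps check can be scripted so a startup script fails fast when a daemon is missing. A small sketch (the helper function and the canned sample output below are illustrative, not part of the original setup):

```shell
# check_daemons: verify that each required daemon name appears in jps output.
check_daemons() {
  out=$1; shift                      # $1 = captured `jps` output
  for d in "$@"; do
    printf '%s\n' "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# Real use would be: check_daemons "$(jps)" NameNode DataNode SecondaryNameNode
sample='4416 NameNode
4916 Jps
4740 SecondaryNameNode
4553 DataNode'
check_daemons "$sample" NameNode DataNode SecondaryNameNode   # prints: all daemons running
```

`grep -w` keeps "NameNode" from accidentally matching inside "SecondaryNameNode".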

10. Check the web UI

http://ip:50070

(On Tencent Cloud, if the page is unreachable, make sure the port is open in the instance's security group.)

11. Configure YARN: mapred-site.xml

Create the file from the template: cp mapred-site.xml.template mapred-site.xml

Add inside the <configuration> element:

<!-- Tell the framework that MapReduce runs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

12. Configure yarn-site.xml

Add inside the <configuration> element:

<!-- Reducers fetch data via mapreduce_shuffle -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

13. Starting and stopping YARN

./sbin/start-yarn.sh  # start

(base) [root@medecineit hadoop-2.7.7]# ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-resourcemanager-medecineit.out
localhost: starting nodemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-nodemanager-medecineit.out

(base) [root@medecineit hadoop-2.7.7]# jps
8469 ResourceManager
8585 NodeManager
8812 Jps
975 Bootstrap

Then start HDFS as well: ./sbin/start-dfs.sh

(base) [root@medecineit hadoop-2.7.7]# jps
8469 ResourceManager
9208 DataNode
9401 SecondaryNameNode
9065 NameNode
8585 NodeManager
9550 Jps
975 Bootstrap

./sbin/stop-yarn.sh  # stop

14. Check the YARN web UI

http://ip:8088

Single-node Hadoop and YARN configuration is complete!

######## ZooKeeper installation ###########

1. Download URL

https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

2. Extract the archive

tar -xvf zookeeper-3.4.14.tar.gz -C /usr/java/

3. Edit the configuration file

cp zoo_sample.cfg zoo.cfg

Point ZooKeeper's data directory at the data folder:
dataDir=/usr/java/zookeeper-3.4.14/data
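The dataDir edit can be done with sed; the sketch below rehearses it on a temp copy (the sample file contents are abbreviated, and the real file is assumed to be zookeeper-3.4.14/conf/zoo.cfg):

```shell
# Rehearsal on a temp copy; in the real setup, run the sed line in conf/.
tmp=$(mktemp -d)
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > "$tmp/zoo_sample.cfg"
cp "$tmp/zoo_sample.cfg" "$tmp/zoo.cfg"
sed -i 's|^dataDir=.*|dataDir=/usr/java/zookeeper-3.4.14/data|' "$tmp/zoo.cfg"
grep '^dataDir=' "$tmp/zoo.cfg"   # prints dataDir=/usr/java/zookeeper-3.4.14/data
rm -rf "$tmp"
```

On the real machine, also create the directory: mkdir -p /usr/java/zookeeper-3.4.14/data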

4. Start ZooKeeper

./bin/zkServer.sh start   # start

./bin/zkServer.sh status  # check status

ZooKeeper done!

####### HBase installation ##########

1. Download URL

https://www.apache.org/dyn/closer.lua/hbase/2.0.5/hbase-2.0.5-bin.tar.gz

2. Extract the archive

tar -xvf hbase-2.0.5-bin.tar.gz -C /usr/java/

3. Edit hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8/

4. Edit hbase-site.xml

<configuration>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://medecineit:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>medecineit</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
<name>hbase.master.dns.nameserver</name>
<value>medecineit</value>
<description>DNS</description>
</property>
<property>
<name>hbase.regionserver.dns.nameserver</name>
<value>medecineit</value>
<description>DNS</description>
</property>
<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>false</value>
</property>
<property>
<name>hbase.regionserver.hostname</name>
<value>medecineit</value>
</property>

</configuration>

## Note: the hostname/DNS-related properties above (highlighted in red in the original post) must be included, otherwise remote clients get errors when connecting to HBase. Also make sure hbase.rootdir points at the same NameNode address as fs.defaultFS in core-site.xml (here the hostname medecineit resolves to this machine).

5. Edit regionservers

Replace its contents with the hostname: medecineit

6. Start HBase

./bin/start-hbase.sh  # start

(base) [root@medecineit hbase-2.0.5]# jps
8469 ResourceManager
16902 Jps
16823 HRegionServer
9208 DataNode
16152 QuorumPeerMain
9401 SecondaryNameNode
9065 NameNode
16681 HMaster
8585 NodeManager
975 Bootstrap

HRegionServer and HMaster are now running.

7. Web access

http://ip:16010/master-status

8. Start the HBase shell to work with tables

./bin/hbase shell  # start the HBase shell
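A quick smoke test inside the shell could look like this (illustrative session; the table and column family names are made up):

```
create 't1', 'cf'                    # create table t1 with one column family
put 't1', 'row1', 'cf:a', 'value1'   # write one cell
scan 't1'                            # read it back
disable 't1'                         # a table must be disabled before dropping
drop 't1'                            # clean up
```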

Done!

##### Shutdown order ####

Stop the cluster services in this order:

Stop the Spark cluster:
master> spark/sbin/stop-slaves.sh
master> spark/sbin/stop-master.sh
Stop the HBase cluster:
master> stop-hbase.sh
Stop the YARN cluster:
master> stop-yarn.sh
Stop the Hadoop (HDFS) cluster:
master> stop-dfs.sh
Stop the ZooKeeper cluster:
master> runRemoteCmd.sh "zkServer.sh stop" zookeeper
(runRemoteCmd.sh here is a custom helper script that runs a command on every node of a group.)
All cluster services stopped!

##### Hive installation ######

1. Download the package

https://www-eu.apache.org/dist/hive/hive-2.3.5/apache-hive-2.3.5-bin.tar.gz

2. Extract the archive

tar -xzvf apache-hive-2.3.5-bin.tar.gz

3. Configure hive-env.sh

export HADOOP_HOME=/usr/java/hadoop-2.7.7
export HIVE_CONF_DIR=/usr/java/hive-2.3.5/conf
export HIVE_AUX_JARS_PATH=/usr/java/hive-2.3.5/lib

4. Configure hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://medecineit:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>yang156122</value>
<description>password to use against metastore database</description>
</property>
</configuration>

5. Create the log configuration files from the templates

cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

cp hive-log4j2.properties.template hive-log4j2.properties

6. Start Hive

Before the first start on Hive 2.x, put the MySQL JDBC driver jar into hive-2.3.5/lib and initialize the metastore schema once with ./bin/schematool -dbType mysql -initSchema; otherwise HiveServer2 will fail to start.

./hive --service hiveserver2  # start HiveServer2

./beeline -u jdbc:hive2://localhost:10000  # test: the beeline client connects over JDBC

http://ip:10002/  # web UI

Done!
