《OD大数据实战》Hive Environment Setup
I. Set up the Hadoop environment
II. Hive environment setup
1. Prepare the installation files
Download address:
http://archive.cloudera.com/cdh5/cdh/5/
2. Extract the archive
- tar -zxvf hive-0.13.1-cdh5.3.6.tar.gz -C /opt/modules/cdh/
3. Modify the configuration
- cd /opt/modules/cdh/hive-0.13.1-cdh5.3.6/conf
- mv hive-env.sh.template hive-env.sh
- mv hive-default.xml.template hive-site.xml
- mv hive-exec-log4j.properties.template hive-exec-log4j.properties
- mv hive-log4j.properties.template hive-log4j.properties
1) Modify hive-env.sh
- # add the following lines
- export JAVA_HOME=/opt/modules/jdk1.7.0_67
- HADOOP_HOME=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6
- export HIVE_CONF_DIR=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/conf
2) Modify hive-log4j.properties
- hive.log.dir=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/logs
3) Modify hive-exec-log4j.properties
- hive.log.dir=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/logs
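Both log4j files above point at a logs directory under the Hive install; it may not exist yet, so creating it ahead of time avoids log-append failures. A minimal sketch, assuming the same install path as above:
- mkdir -p /opt/modules/cdh/hive-0.13.1-cdh5.3.6/logs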
4) Modify hive-site.xml
- <property>
- <name>hive.lazysimple.extended_boolean_literal</name>
- <value>false</value>
- <description>
- LazySimpleSerDe uses this property to determine if it treats 'T', 't', 'F', 'f',
- '1', and '0' as extended, legal boolean literals, in addition to 'TRUE' and 'FALSE'.
- The default is false, which means only 'TRUE' and 'FALSE' are treated as legal
- boolean literals.
- </description>
- </property>
- <property>
- <name>hive.mapjoin.optimized.hashtable</name>
- <value>true</value>
- <description>Whether Hive should use memory-optimized hash table for MapJoin. Only works on Tez, because memory-optimized hashtable cannot be serialized.</description>
- </property>
4. Verify the Hive environment
- bin/hive
- dfs -ls /;
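Beyond listing HDFS from the Hive prompt, a minimal smoke test could look like the following (the table name test_tbl is purely illustrative):
- hive> show databases;
- hive> create table test_tbl (id int, name string);
- hive> show tables;
- hive> drop table test_tbl;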
III. MySQL environment setup
1. The goal is to install MySQL 5.6 (the repository configuration below enables the 5.6 series)
2. Download the yum repository package from the official site
http://dev.mysql.com/downloads/repo/yum/
http://repo.mysql.com//mysql57-community-release-el6-8.noarch.rpm
3. Install the yum repository into the /etc/yum.repos.d/ directory
sudo rpm -Uvh mysql57-community-release-el6-8.noarch.rpm
cd /etc/yum.repos.d/
4. Modify the yum repository configuration
Files to edit: mysql-community.repo and mysql-community-source.repo
In the 5.6 section, set enabled = 1
In the 5.7 section, set enabled = 0 (see the sketch below)
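For illustration, the relevant switches in /etc/yum.repos.d/mysql-community.repo would look roughly like this (other keys in each section stay unchanged):
- [mysql56-community]
- enabled=1
- [mysql57-community]
- enabled=0
After the edit, yum repolist enabled | grep mysql should list the 5.6 repository only.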
5. Install MySQL
sudo yum -y install mysql-community-server
6. MySQL security setup
sudo mysql_secure_installation
grant all privileges on *.* to 'root'@'%' identified by 'beifeng' with grant option;
7. Verify the MySQL installation
Enter the command line: mysql -uroot -p
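As a quick sanity check (assuming the root password set above), the server version can also be queried non-interactively:
mysql -uroot -p -e "SELECT VERSION();"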
IV. Local MySQL as the metastore
1. Copy the MySQL driver into ${HIVE_HOME}/lib
- cp mysql-connector-java-5.1.27-bin.jar /opt/modules/cdh/hive-0.13.1-cdh5.3.6/lib/
2. Modify hive-site.xml
- <property>
- <name>javax.jdo.option.ConnectionURL</name>
- <value>jdbc:mysql://localhost:3306/cdh_hive_local_hive?createDatabaseIfNotExist=true</value>
- <description>JDBC connect string for a JDBC metastore</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionDriverName</name>
- <value>com.mysql.jdbc.Driver</value>
- <description>Driver class name for a JDBC metastore</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionUserName</name>
- <value>root</value>
- <description>username to use against metastore database</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionPassword</name>
- <value>beifeng</value>
- <description>password to use against metastore database</description>
- </property>
3. Run the bin/hive command
4. Check the MySQL databases; a new database named cdh_hive_local_hive should appear
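To confirm that the metastore schema was initialized, the tables of the new database can be listed (a sketch, assuming the root credentials above; expect metastore tables such as DBS and TBLS):
- mysql -uroot -p -e "USE cdh_hive_local_hive; SHOW TABLES;"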
V. Remote MySQL as the metastore
1. Copy the MySQL driver into ${HIVE_HOME}/lib
- cp mysql-connector-java-5.1.27-bin.jar /opt/modules/cdh/hive-0.13.1-cdh5.3.6/lib/
2. Start the metastore server
- nohup hive --service metastore > /home/beifeng/hive_metastore.run.log 2>&1 &
In the redirection, file descriptor 2 is errors (stderr) and 1 is normal output (stdout)
Check the process: ps -ef | grep HiveMetaStore
Stop the metastore process:
kill -9 <pid>
kill -9 `ps -ef | grep HiveMetaStore | grep -v grep | awk '{print $2}' | head -n 1`
3. Modify hive-site.xml
- <property>
- <name>hive.metastore.uris</name>
- <value>thrift://beifeng-hadoop-02:9083</value>
- <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionURL</name>
- <value>jdbc:mysql://localhost:3306/cdh_hive_remote_hive?createDatabaseIfNotExist=true</value>
- <description>JDBC connect string for a JDBC metastore</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionDriverName</name>
- <value>com.mysql.jdbc.Driver</value>
- <description>Driver class name for a JDBC metastore</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionUserName</name>
- <value>root</value>
- <description>username to use against metastore database</description>
- </property>
- <property>
- <name>javax.jdo.option.ConnectionPassword</name>
- <value>beifeng</value>
- <description>password to use against metastore database</description>
- </property>
4. Run the bin/hive command
5. Check the MySQL databases; a new database named cdh_hive_remote_hive should appear
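The metastore service itself can also be verified by checking that its Thrift port (9083, as configured above) is listening; a sketch:
- netstat -tlnp | grep 9083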
VI. Connecting to Hive via JDBC
1. Modify hive-site.xml
- <property>
- <name>hive.server2.thrift.port</name>
- <value>10000</value>
- <description>Port number of HiveServer2 Thrift interface.
- Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
- </property>
- <property>
- <name>hive.server2.thrift.bind.host</name>
- <value>0.0.0.0</value>
- <description>Bind host on which to run the HiveServer2 Thrift interface.
- Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
- </property>
2. Start the HiveServer2 server
- nohup hive --service hiveserver2 > /home/beifeng/hiveserver2.run.log 2>&1 &
- ps -ef | grep HiveServer2
- netstat -tlnup | grep 10000
3. Enter the beeline client
- beeline
4. Connect to Hive
- beeline> !connect jdbc:hive2://beifeng-hadoop-02:10000
- scan complete in 5ms
- Connecting to jdbc:hive2://beifeng-hadoop-02:10000
- Enter username for jdbc:hive2://beifeng-hadoop-02:10000: beifeng
- Enter password for jdbc:hive2://beifeng-hadoop-02:10000: *******
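Once connected, HiveQL statements can be issued exactly as in the Hive CLI. For a non-interactive check, beeline can also be invoked directly (a sketch, using the host and user shown above):
- beeline -u jdbc:hive2://beifeng-hadoop-02:10000 -n beifeng -e "show databases;"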
5. Modify the configuration
- <property>
- <name>hive.server2.long.polling.timeout</name>
- <value></value>
- <description>Time in milliseconds that HiveServer2 will wait, before responding to asynchronous calls that use long polling</description>
- </property>