Environment: CentOS 6.6 x64 (3 nodes, for learning)

Software: JDK 1.7 + Hadoop 2.7.3 + Hive 2.1.1

Environment preparation:

1. Install the necessary tools

yum -y install openssh wget curl tree screen nano lftp htop mysql mysql-server   # on CentOS 6 the MySQL client package is named mysql

2. Use the 163 yum mirror:

cd /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo   # for CentOS 6.x, CentOS6-Base-163.repo is the matching repo file
# back up the original repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
mv CentOS7-Base-163.repo CentOS-Base.repo
# rebuild the yum cache
yum clean all
yum makecache

3. Disable the graphical interface (boot to runlevel 3):

vim /etc/inittab  # change the default runlevel from 5 to 3 (boot to the text console)
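The same change can be scripted instead of editing the file by hand; a minimal sketch, assuming the stock CentOS 6 inittab entry id:5:initdefault::

# switch the default runlevel from 5 (graphical) to 3 (text console)
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab
# switch the running system immediately, without a reboot
init 3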

4. Set a static IP, the hostname, and /etc/hosts

(1) Plan

192.168.235.138 node1
192.168.235.139 node2
192.168.235.140 node3

On each node, set the IP address, hostname, and hosts entries according to the plan above.

(2) Static IP (on every node)

# Option 1: use setup to configure it interactively
# setup
# Option 2: edit the network configuration file; a complete example follows
# cat /etc/sysconfig/network-scripts/ifcfg-Auto_eth1
HWADDR=:0C::2C:9F:4A
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.235.139
PREFIX=
GATEWAY=192.168.235.1
DNS1=192.168.235.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="Auto eth1"
UUID=2753c781--47bd-85e7-44877cde27dd
ONBOOT=yes
LAST_CONNECT=

(3) Hostname (on every node)

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node1   # change this value on each node

(4) hosts (on every node)

# cat /etc/hosts
# append the following at the end of the file
192.168.235.138 node1
192.168.235.139 node2
192.168.235.140 node3
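Once the IP, hostname and hosts entries are in place (restart the network service or reboot for them to take effect), a quick loop confirms that each node can resolve and reach the others; a minimal check using the node names defined above:

# run on any node; each host should answer a single ping
for h in node1 node2 node3; do
    ping -c 1 $h > /dev/null && echo "$h OK" || echo "$h unreachable"
done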

5. Turn off the firewall

# service iptables stop
# service iptables status
# chkconfig iptables off

6. Create a regular user

# useradd hadoop
# passwd hadoop
# visudo
Below the line "root ALL=(ALL) ALL", add:
hadoop ALL=(ALL) ALL

7. Set up passwordless SSH login

Option 1: automated script

# cat ssh.sh
#!/bin/bash
SERVERS="node1 node2 node3"
PASSWORD=                          # the login password of the target account
BASE_SERVER=192.168.235.138

# expect answers the interactive ssh-copy-id prompts
yum -y install expect

auto_ssh_copy_id() {
    expect -c "set timeout -1;
        spawn ssh-copy-id $1;
        expect {
            *(yes/no)* {send -- yes\r;exp_continue;}
            *assword:* {send -- $2\r;exp_continue;}
            eof {exit 0;}
        }"
}

ssh_copy_id_to_all() {
    for SERVER in $SERVERS
    do
        auto_ssh_copy_id $SERVER $PASSWORD
    done
}

ssh_copy_id_to_all

Option 2: manual setup

ssh-keygen -t rsa   # generate the key pair
scp ~/.ssh/id_rsa.pub hadoop@192.168.235.139:~/   # distribute the public key to the other nodes with scp or ssh-copy-id
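When the key is copied with plain scp (rather than ssh-copy-id), it still has to be appended to authorized_keys on the receiving node; a minimal sketch of that remaining step, using the hadoop user and file name from the commands above:

# on the receiving node, as the hadoop user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# back on the sending node, this should now log in without a password
ssh hadoop@192.168.235.139 hostname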

Cluster planning and installation

1. Node roles

Plan:
node1: NameNode, DataNode, NodeManager
node2: ResourceManager, DataNode, NodeManager, JobHistory
node3: SecondaryNameNode, DataNode, NodeManager

Note the role placement: DataNodes store the data and NodeManagers process it, so the two should run on the same nodes; otherwise data has to cross the network and consumes a lot of bandwidth.

This layout is only for a personal machine and learning. A typical production layout looks like the following.

Reference configuration of a 7-node Hadoop 2.x cluster (HA: high availability)

Hostname   IP address      Processes
cloud01    192.168.2.31    namenode, zkfc
cloud02    192.168.2.32    namenode, zkfc
cloud03    192.168.2.33    resourcemanager
cloud04    192.168.2.34    resourcemanager
cloud05    192.168.2.35    journalnode, datanode, nodemanager, QuorumPeerMain
cloud06    192.168.2.36    journalnode, datanode, nodemanager, QuorumPeerMain
cloud07    192.168.2.37    journalnode, datanode, nodemanager, QuorumPeerMain

Notes:
namenode: manages the filesystem metadata
resourcemanager: cluster resource control
datanode: stores data
nodemanager: runs the computation on the data
journalnode: shared storage for NameNode metadata (edit log)
zkfc: ZooKeeper failover controller, switches over when a NameNode fails
QuorumPeerMain: the ZooKeeper server process

HA with ZooKeeper avoids a single point of failure and provides automatic failover.

Source: http://blog.csdn.net/shenfuli/article/details/44889757

2. Install the JDK and Hadoop

(1) Install the JDK and Hadoop

Upload the packages to the server, then create and run the following script in the directory that contains them:

#!/bin/bash

tar -zxvf jdk-7u79-linux-x64.tar.gz -C /opt/
tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local/

# quote EOF so the variables are written literally instead of being expanded now
cat >> /etc/profile << 'EOF'
export JAVA_HOME=/opt/jdk1.7.0_79/
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

source /etc/profile

(2) Create the Hadoop working directories

mkdir -p /usr/hadoop/tmp
mkdir -p /usr/hadoop/dfs/data
mkdir -p /usr/hadoop/dfs/name
mkdir -p /usr/hadoop/namesecondary
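These directories (they match the paths referenced by core-site.xml and hdfs-site.xml below) must exist on every node, not just the one being configured; a minimal sketch that creates them remotely, assuming the passwordless SSH from step 7:

for h in node1 node2 node3; do
    ssh $h "mkdir -p /usr/hadoop/tmp /usr/hadoop/dfs/data /usr/hadoop/dfs/name /usr/hadoop/namesecondary"
done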

3. Configure Hadoop

The basic configuration files are:

# cd /usr/local/hadoop-2.7.3/etc/hadoop/
# ls -l | awk '{print $9}'
core-site.xml
hadoop-env.sh
hdfs-site.xml
mapred-site.xml
slaves
yarn-site.xml

The contents are as follows:

(1)core-site.xml

    <property>
<name>fs.defaultFS</name>
<value>hdfs://node1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value></value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.trash.interval</name>
<value></value>
</property>

(2)hadoop-env.sh

export JAVA_HOME=/opt/jdk1.7.0_79/

(3)hdfs-site.xml

    <property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/hadoop/dfs/name</value>
<description></description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/hadoop/dfs/data</value>
<description></description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node3:50090</value>
<description></description>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///usr/hadoop/namesecondary</value>
<description></description>
</property> <property>
<name>dfs.replication</name>
<value></value>
<description>replication</description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
<description></description>
</property> <property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.max.transfer.threads</name>
<value></value>
</property>

(4)mapred-site.xml

    <property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>node2:10020</value>
<description>MapReduce JobHistory Server host:port. Default port is 10020.</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>node2:19888</value>
<description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/history</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
</property>
<property>
<name>mapreduce.map.log.level</name>
<value>DEBUG</value>
</property>
<property>
<name>mapreduce.reduce.log.level</name>
<value>DEBUG</value>
</property>

(5)slaves

node1
node2
node3

(6)yarn-site.xml

    <property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property> <!--Configurations for ResourceManager and NodeManager:-->
<!--
<property>
<name>yarn.acl.enable</name>
<value>false</value>
<description>Enable ACLs? Defaults to false.</description>
</property>
<property>
<name>yarn.admin.acl</name>
<value>Admin ACL</value>
<description>ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>false</value>
<description>Configuration to enable or disable log aggregation</description>
</property>
--> <!--Configurations for ResourceManager:-->
<property>
<name>yarn.resourcemanager.address</name>
<value>node2:8032</value>
<description>ResourceManager host:port for clients to submit jobs. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>node2:8030</value>
<description>ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>node2:8031</value>
<description>ResourceManager host:port for NodeManagers. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>node2:8033</value>
<description>ResourceManager host:port for administrative commands. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>node2:8088</value>
<description>ResourceManager web UI host:port. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>node2</value>
<description>Single hostname that can be set in place of setting all yarn.resourcemanager.*.address properties. Results in default ports for ResourceManager components.</description>
</property>
<!--
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>ResourceManager Scheduler class.</value>
<description>CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler</description>
</property> <property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>Minimum limit of memory to allocate to each container request at the Resource Manager.</value>
<description>In MBs</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>Maximum limit of memory to allocate to each container request at the Resource Manager.</value>
<description>In MBs</description>
</property>
<property>
<name>yarn.resourcemanager.nodes.include-path/ yarn.resourcemanager.nodes.exclude-path</name>
<value>List of permitted/excluded NodeManagers.</value>
<description>If necessary, use these files to control the list of allowable NodeManagers.</description>
</property>
--> <!--Configurations for NodeManager:-->
<!--
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>Resource i.e. available physical memory, in MB, for given NodeManager</value>
<description>Defines total available resources on the NodeManager to be made available to running containers</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>Maximum ratio by which virtual memory usage of tasks may exceed physical memory</value>
<description>The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.</description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>Comma-separated list of paths on the local filesystem where intermediate data is written.</value>
<description>Multiple paths help spread disk i/o.</description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>Comma-separated list of paths on the local filesystem where logs are written.</value>
<description>Multiple paths help spread disk i/o.</description>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value></value>
<description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/logs</value>
<description>HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled.</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
<description>Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam} Only applicable if log-aggregation is enabled.</description>
</property>
-->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Shuffle service that needs to be set for Map Reduce applications. </description>
</property> <property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
<description></description>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://node2:19888/jobhistory/logs</value>
<description></description>
</property>

Note: avoid Chinese text in the actual configuration files.

This configuration is for reference only; when using it, remove the comment blocks and add or drop properties to suit your environment.
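The JDK, the Hadoop directory and these configuration files have to be identical on all three nodes, and the guide only installs them on one. A minimal distribution sketch is assumed here, using scp over the passwordless SSH set up earlier (adjust the paths if yours differ):

# assumes node2/node3 accept SSH as the current user and use the same paths as node1
for h in node2 node3; do
    scp -r /opt/jdk1.7.0_79 $h:/opt/
    scp -r /usr/local/hadoop-2.7.3 $h:/usr/local/
    scp /etc/profile $h:/etc/profile   # the environment variables from step 2
done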

4. Start the cluster

(1) Format the NameNode

hadoop namenode -format  # script lives in ${HADOOP_HOME}/bin; 'hdfs namenode -format' is the non-deprecated form

(2) Start and stop the cluster

Commonly used start/stop scripts (in ${HADOOP_HOME}/sbin):

# ls -l | awk '{print $9}'
start-all.sh/stop-all.sh             # start/stop all daemons
start-dfs.sh/stop-dfs.sh             # start/stop HDFS
start-yarn.sh/stop-yarn.sh           # start/stop YARN
mr-jobhistory-daemon.sh              # job history server
hadoop-daemon.sh / hadoop-daemons.sh
yarn-daemon.sh / yarn-daemons.sh
start-balancer.sh/stop-balancer.sh   # rebalance block distribution across DataNodes

Three ways to start the cluster:

Option 1: start the daemons one by one (how it is usually done in production)
hadoop-daemon.sh start|stop namenode|datanode|journalnode
yarn-daemon.sh start|stop resourcemanager|nodemanager
Option 2: start HDFS and YARN separately
start-dfs.sh
start-yarn.sh
Option 3: start everything at once
start-all.sh
Job history service:
mr-jobhistory-daemon.sh start historyserver
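After starting, a quick pass with jps (shipped with the JDK) confirms that each node runs the daemons assigned to it in the role plan above; a minimal check over SSH:

for h in node1 node2 node3; do
    echo "== $h =="
    ssh $h "jps | grep -v Jps"   # expect NameNode/DataNode/NodeManager etc. per the plan
done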

Deploy Hive

Start the Hadoop cluster first. Note that Hive only needs to be deployed on a single node; there is no such thing as a Hive cluster.

1. Start and initialize MySQL

# start the mysql service
service mysqld start
# enable it at boot
chkconfig mysqld on
# run the initial secure setup
/usr/bin/mysql_secure_installation

Note:

Problem: Host '192.168.235.138' is not allowed to connect to this MySQL server
Fix:
mysql> grant all privileges on *.* to 'root'@'%' identified by 'root';
mysql> flush privileges;

2. Install and configure Hive

(1) Install

tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /usr/local/
cd /usr/local/
mv apache-hive-2.1.1-bin/ hive-2.1.1
find ./hive-2.1.1 -name "*.cmd" -exec rm -rf {} \;   # remove the Windows scripts

(2) Add the MySQL JDBC driver

cp /usr/share/java/mysql-connector-java-commercial-5.1.-bin.jar /usr/local/hive-2.1.1/lib/

Note: if the driver is not present, install it first: yum -y install mysql-connector-java

(3) Create the HDFS storage directories

hdfs dfs -mkdir -p /usr/hive/warehouse
hdfs dfs -mkdir -p /usr/hive/tmp
hdfs dfs -mkdir -p /usr/hive/log
hdfs dfs -chmod g+w /usr/hive/warehouse
hdfs dfs -chmod g+w /usr/hive/tmp
hdfs dfs -chmod g+w /usr/hive/log

Create the additional scratch directories (referenced by the extra properties added below):

 hadoop dfs -ls /usr/tmp
hadoop dfs -mkdir -p /usr/tmp/hive/local
hadoop dfs -mkdir -p /usr/tmp/hive/resources

(4) Configure

Copy the template files

# pwd
/usr/local/hive-2.1.1/conf
# cp hive-env.sh.template hive-env.sh
# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
# cp hive-log4j2.properties.template hive-log4j2.properties
# cp hive-default.xml.template hive-site.xml

Edit hive-site.xml:

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/hive/warehouse</value>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/usr/hive/tmp</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/hive/log</value>
</property>

Add the following properties as well:

  <property>
<name>hive.exec.scratchdir</name>
<value>/usr/tmp/hive</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/tmp/hive/local</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/tmp/hive/resources</value>
</property>

Otherwise, Hive reports this error:

Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D

Edit hive-env.sh:

HADOOP_HOME=/usr/local/hadoop-2.7.3
export HIVE_CONF_DIR=/usr/local/hive-2.1.1/conf

(5) Initialize the metastore

# bin/schematool --help
# bin/schematool -dbType mysql -initSchema   # initialize the metastore schema

Note: Hive 2 requires the metastore to be initialized; otherwise startup fails with:

# hive-2.1.1/bin/hive
which: no hbase in
...
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in file:/usr/local/hive-2.1.1/conf/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
...
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate
...
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
...
Caused by: java.lang.reflect.InvocationTargetException
...
Caused by: MetaException(message:Version information not found in metastore. )
...
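After a successful schematool run the metastore tables exist in MySQL; a quick sanity check, assuming the root/root credentials and the hive database name used in hive-site.xml above:

mysql -uroot -proot -e "use hive; show tables;" | head
# VERSION, DBS, TBLS and the other metastore tables should be listed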

(6) Start Hive

Option 1: the CLI

# /usr/local/hive-2.1.1/bin/hive
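A short smoke test through the CLI confirms that the metastore and the HDFS warehouse directory work together (the table name t_test is only an illustration):

/usr/local/hive-2.1.1/bin/hive -e "
show databases;
create table if not exists t_test (id int, name string);
show tables;
drop table t_test;"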

Option 2: the web UI (HWI)

i. Build the war file and copy it into Hive's lib directory

# download and unpack the source package
wget http://mirror.bit.edu.cn/apache/hive/stable-2/apache-hive-2.1.1-src.tar.gz
tar -zxvf apache-hive-2.1.1-src.tar.gz
# package the JSP files into a war file
cd apache-hive-2.1.1-src/hwi/
jar cfM hive-hwi-2.1.1.war -C web .
# copy the war into Hive
cp hive-hwi-2.1.1.war /usr/local/hive-2.1.1/lib/
cp /opt/jdk1.7.0_79/lib/tools.jar /usr/local/hive-2.1.1/lib/

Note: when building the war, if jar is run without the -C option to specify the directory, it fails with:
adding: session_kill.jspjava.util.zip.ZipException: duplicate entry: session_kill.jsp

ii. Edit hive-site.xml

  <property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>lib/hive-hwi-2.1.1.war</value>
<description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
</property>

iii. Replace the ant jars

wget http://124.205.69.164/files/823800000544EA17/mirror.bit.edu.cn/apache//ant/binaries/apache-ant-1.9.9-bin.tar.gz
tar -zxvf apache-ant-1.9.9-bin.tar.gz -C /opt/
# replace the ant jars that ship with Hive
cp /opt/apache-ant-1.9.9/lib/ant.jar /usr/local/hive-2.1.1/lib/
cp /opt/apache-ant-1.9.9/lib/ant-launcher.jar /usr/local/hive-2.1.1/lib/

If the ant jars are not replaced, HWI returns a 500 error, because Hive ships with ant 1.9.1 and the locally installed ant version has to be used instead:

The following error occurred while executing this line:
jar:file:/usr/local/hive-2.1.1/lib/ant-1.9.1.jar!/org/apache/tools/ant/antlib.xml:: Could not create task or type of type: componentdef.

Reference: http://www.open-open.com/lib/view/open1433318284791.html
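The guide stops short of launching HWI; the usual way to start it is through the hive --service entry point, after which the UI answers on the host and port configured in hive-site.xml above (a hedged note, since HWI support varies between Hive releases):

/usr/local/hive-2.1.1/bin/hive --service hwi
# then browse to http://<hive-host>:9999/hwi/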

Deploy HUE (to be continued)

1. Install dependencies

#!/bin/bash

yum -y install asciidoc
yum -y install cyrus-sasl-devel
yum -y install cyrus-sasl-gssapi
yum -y install cyrus-sasl-plain
yum -y install gcc
yum -y install gcc-c++
yum -y install krb5-devel
yum -y install libffi-devel
yum -y install libtidy          # (for unit tests only)
yum -y install libxml2-devel
yum -y install libxslt-devel
yum -y install make
# mysql                         # already installed
yum -y install mysql-devel
yum -y install openldap-devel
yum -y install python-devel
yum -y install sqlite-devel
yum -y install openssl-devel    # (for version +)
yum -y install gmp-devel

# install ant
wget http://mirror.bit.edu.cn/apache//ant/binaries/apache-ant-1.9.9-bin.tar.bz2
bzip2 -d apache-ant-1.9.9-bin.tar.bz2
tar xf apache-ant-1.9.9-bin.tar -C /opt/
cd /opt
mv apache-ant-1.9.9/ ant-1.9.9
vim /etc/profile                # add ant to PATH, see the sketch after this block
source /etc/profile

# install maven
wget http://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.5.0/binaries/apache-maven-3.5.0-bin.tar.gz
tar -zxvf apache-maven-3.5.0-bin.tar.gz -C /opt/
cd /opt/
mv apache-maven-3.5.0/ maven-3.5.0/
vim /etc/profile                # add maven to PATH, see the sketch after this block
source /etc/profile
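The script above leaves the actual /etc/profile edits to vim; the lines to append would look roughly like this (the paths follow the mv commands above, adjust if you keep the default directory names):

# appended to /etc/profile, then applied with: source /etc/profile
export ANT_HOME=/opt/ant-1.9.9
export MAVEN_HOME=/opt/maven-3.5.0
export PATH=$PATH:$ANT_HOME/bin:$MAVEN_HOME/bin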

2. Download and build Hue

Download it from http://gethue.com/; do not use the sources on GitHub, which fail to build with errors like:

error: can't copy 'lib/Crypto/SelfTest/Random/OSRNG/test_posix.py': doesn't exist or not a regular file
make[]: *** [/usr/local/hue/desktop/core/build/pycrypto-2.6./egg.stamp] Error
make[]: Leaving directory `/usr/local/hue/desktop/core'
make[]: *** [.recursive-env-install/core] Error
make[]: Leaving directory `/usr/local/hue/desktop'
make: *** [desktop] Error

Build:

wget https://dl.dropboxusercontent.com/u/730827/hue/releases/3.12.0/hue-3.12.0.tgz  
tar -zxvf hue-3.12.0.tgz -C /opt
cd /opt/hue-3.12.0/
make apps

After a successful build, two new entries appear in the Hue directory: app.reg and build.

3. Start the test server

./build/env/bin/hue runserver

Open 127.0.0.1:8000 in a browser; if the page loads, the build succeeded. The configuration still needs to be adjusted before real use.

4. Adjust the configuration

(1) Global settings

# vim /opt/hue-3.12.0/desktop/conf/hue.ini
Change the following entries:
secret_key=c!@#$%^&*yy{}[]<>?un`~:.   # any random string will do; if it is left empty Hue shows a warning. The secret_key is used for secure hashing of data kept in the session store.
http_host=192.168.235.140
time_zone=Asia/Shanghai               # set the time zone to Asia/Shanghai

(2) Use MySQL as the metadata database

By default Hue stores its metadata in SQLite, which is not recommended for production: it frequently runs into "database is locked" errors.

i. Edit hue.ini and configure the MySQL connection:

[[database]]

name=hue
engine=mysql
host=192.168.235.140
port=3306
user=root
password=root

ii. Create and initialize the MySQL metadata database

Connect to MySQL and create the hue database, for example as sketched below.
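A minimal sketch of the database creation step, assuming the root/root MySQL credentials used elsewhere in this guide (the utf8 charset is an assumption, adjust as needed):

mysql -uroot -proot -e "create database hue default character set utf8 collate utf8_general_ci;"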

Initialize:

#  ./build/env/bin/hue help
# ./build/env/bin/hue syncdb
# ./build/env/bin/hue migrate

Once initialization finishes, the created tables are visible in the hue database. Start the service and it should be reachable from a browser.

5. Start the service

# ./build/env/bin/hue runserver

Access URL: <hostname>:8888

Common errors and fixes

1. After starting the service, the browser cannot access it: OperationalError: (1045, "Access denied for user 'root'@'node3' (using password: YES)")

Fix: connect to MySQL and run:

mysql> grant all privileges on *.* to 'root'@'%' identified by 'root';
mysql> flush privileges;

2. After starting the service, the browser cannot access it: OperationalError: (1049, "Unknown database '/opt/hue-3.12.0/desktop/desktop.db'")

Fix: the MySQL (or other backend) settings are probably wrong. For this guide, create the hue database in MySQL and configure hue.ini as:

[[database]]

engine=mysql
host=node3
port=3306
user=root
password=root
name=hue

Restart the service and the problem is resolved.

3. ProgrammingError: (1146, "Table 'hue.desktop_settings' doesn't exist")

Likely cause: after switching to MySQL (or another database backend) the initialization step was skipped. See the "Use MySQL as the metadata database" section above for the fix.

Hue references:

http://cloudera.github.io/hue/docs-3.12.0/manual.html

http://cloudera.github.io/hue/docs-3.12.0/sdk/sdk.html

A summary of problems encountered while installing, configuring and using Hue - https://my.oschina.net/aibati2008/blog/647493

https://github.com/cloudera/hue

https://github.com/cloudera/hue/wiki

http://ju.outofmemory.cn/entry/105162
