These past few days I have been stumped by this issue and cannot explain it. I am posting it here for now and will come back to it later.

The details of the problem are as follows:

  Under the Hive installation directory (here /home/hadoop/app/hive-1.2.1), the lib subdirectory (/home/hadoop/app/hive-1.2.1/lib) contains mysql-connector-java-5.1.21.jar.

  My MySQL was installed by the root user under /home/hadoop/app, so it also has to be started from that directory.

  On djt002, MySQL was likewise installed by root, under /usr/java, so it also has to be started from there: service mysqld start
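
Regardless of where MySQL happens to be installed, a quick way to confirm that the daemon is actually up and listening on its default port is the following small sketch (the service name and port are assumptions and may differ on your distribution):

# Sketch: confirm mysqld is running and listening on the default port 3306.
service mysqld status
netstat -nltp | grep 3306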

  On the single-node cluster, the hostname is weekend110. The metastore-related properties in hive-site.xml are:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://weekend110:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
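
To rule out the metastore database itself, a quick sanity check (just a sketch, reusing the hostname, user, and password from the properties above; adjust paths and versions to your own install) is to connect to MySQL from the Hive host with the exact credentials Hive will use, and to confirm the connector JAR is really in Hive's lib directory:

# Sketch: verify the JDBC credentials from hive-site.xml work from the Hive host.
mysql -h weekend110 -P 3306 -u hive -phive -e "SHOW DATABASES;"
# And confirm the MySQL connector JAR is on Hive's classpath.
ls /home/hadoop/app/hive-1.2.1/lib/mysql-connector-java-*.jar

If either of these fails, Hive will not be able to reach the metastore database no matter which node it runs on.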

[root@weekend110 app]# service mysqld start
Starting mysqld: [ OK ]
[root@weekend110 app]# su hadoop
[hadoop@weekend110 app]$ cd hive-0.12.0/
[hadoop@weekend110 hive-0.12.0]$ bin/hive
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J:

  As you can see, this is Hive on the single-node cluster: it simply starts up like this, nothing more is needed.
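
Incidentally, the SLF4J "multiple bindings" lines in that startup are only a warning: both Hadoop and Hive ship their own slf4j-log4j12 JAR. If you want to silence it, the usual fix (a sketch, using the path taken from the log above) is to remove or rename the copy in Hive's lib directory:

# Sketch: keep only one SLF4J binding on the classpath (path taken from the log above).
mv /home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar /home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar.bak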

However, on the 3-node cluster I have tested installing Hive on any one of HadoopMaster, HadoopSlave1, or HadoopSlave2, with the Hadoop daemons running, and every attempt fails with errors like the following.

[root@HadoopMaster app]# service mysqld status
mysqld (pid 3041) is running...
[root@HadoopMaster app]# su hadoop
[hadoop@HadoopMaster app]$ cd hive-1.2.1/
[hadoop@HadoopMaster hive-1.2.1]$ bin/hive

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Got exception: java.io.IOException No FileSystem for scheme: hfds)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1213)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:106)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more
[hadoop@HadoopMaster hive-1.2.1]$
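
One detail in the HadoopMaster trace deserves a closer look: the root cause is "No FileSystem for scheme: hfds", and "hfds" looks like a simple misspelling of "hdfs" in some URI, most likely fs.defaultFS in core-site.xml or hive.metastore.warehouse.dir in hive-site.xml. A quick way to hunt it down (a sketch; the config directories below follow the install layout mentioned earlier and may differ on your machines):

# Sketch: search the Hadoop and Hive config directories for the misspelled scheme.
grep -rn "hfds" /home/hadoop/app/hadoop-*/etc/hadoop /home/hadoop/app/hive-1.2.1/conf

If such a typo turns up, correcting it to hdfs:// and rerunning bin/hive should make this particular error go away.

On HadoopSlave1, the same bin/hive attempt fails differently: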

[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)

  I am genuinely puzzled by all of this!
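
Two things to check when I come back to this. First, the HadoopMaster failure points at the "hfds" scheme typo noted above. Second, the HadoopSlave1 failure ("Could not connect to meta store using any of the URIs provided ... Connection refused") usually means hive.metastore.uris is set to a remote metastore, by default thrift://<host>:9083, that is not actually listening. A sketch of how one might check and, if needed, start it on the node that is supposed to host the metastore (the port and the idea that the metastore runs on HadoopMaster are assumptions based on the default setup):

# Sketch: check whether anything is listening on the default metastore port 9083,
# then start the metastore service in the background if it is not.
netstat -nltp | grep 9083
nohup bin/hive --service metastore > metastore.log 2>&1 &

Alternatively, if the intent is an embedded/local metastore talking to MySQL over JDBC, as on the single-node weekend110 setup, then hive.metastore.uris should simply not be set in hive-site.xml.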
