Offline Installation of Cloudera Manager 5 and CDH5 (Latest Version 5.9.3), Complete Tutorial (Part 7): Web UI Installation
1. Installation Process
1.1 Log In
1.2 Accept the License Agreement
1.3 Select the Free Edition
1.4 Click Next
1.5 Select the Currently Managed Hosts
1.6 Choose Parcel-based installation, select the CDH version, and click Continue
1.7 Wait for the Installation
This step takes a while, so be patient; it can take around 30 minutes depending on disk I/O speed and overall machine performance. If the process is interrupted, repeat the previous steps and try again. The screenshot below shows a successful installation.
1.8 Cluster Inspection
All checks pass.
1.9 Choose Custom Services and select the components to install
1.10 Assign Roles
1.11 Database Setup
Select the corresponding database for each service, click Test Connection, and continue once all tests pass (a sketch of the expected MySQL setup is shown below).
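The databases and accounts entered on this page must already exist in MySQL (they were created in the database-installation part of this tutorial). As a reference, a minimal sketch of the grant needed for the Hive metastore is shown below; the database name hivedb matches the one that appears in the Hive error log later in this post, while the hive user name and the password are illustrative assumptions, so substitute the values you actually used:
# Run on the MySQL host; creates the metastore database and grants access (placeholder credentials)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS hivedb DEFAULT CHARACTER SET utf8; GRANT ALL PRIVILEGES ON hivedb.* TO 'hive'@'%' IDENTIFIED BY 'hive_password'; FLUSH PRIVILEGES;"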
1.12 Cluster Setup
The default settings are fine.
1.13 First Run of the Components
1.14 Spark Installation Error
Check stderr for the error details; JAVA_HOME cannot be found.
Fix (must be applied on every node):
Manually add JAVA_HOME to the following files, as sketched below:
[root@master soft]# cd /opt/cloudera-manager/cm-5.9./lib64/cmf/service/client/
[root@master client]# vi deploy-cc.sh
After saving, check /etc/environment as well:
[root@master client]# cat /etc/environment
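The lines below are a minimal sketch of what to add; the JDK path /usr/java/jdk1.7.0_79 is only an assumed example, so replace it with the JDK location actually used on your nodes:
# In deploy-cc.sh, point the script at the JDK (assumed path, adjust as needed)
export JAVA_HOME=/usr/java/jdk1.7.0_79
# Also make JAVA_HOME visible system-wide by appending it to /etc/environment
echo 'JAVA_HOME=/usr/java/jdk1.7.0_79' >> /etc/environment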
Click Retry.
1.15 Hive Installation Error
Check stderr for the error details; the Hive initialization failed.
Troubleshooting steps:
(1) Copy the JDBC driver jar
[root@master ~]# cp /root/soft/mysql-connector-java-5.1.-bin.jar /opt/cloudera/parcels/CDH-5.9.-.cdh5.9.3.p0./lib/hive/lib/
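Before retrying, it may help to confirm that the connector jar is really in Hive's lib directory and that the metastore database is reachable from this host. A quick check, assuming the metastore database is hivedb (as seen in the error log below) and a hive MySQL user (the user name and password are assumptions):
# Confirm the connector jar is in place
ls -l /opt/cloudera/parcels/CDH-*/lib/hive/lib/mysql-connector-java-*.jar
# Confirm the metastore database is reachable (placeholder credentials)
mysql -h master -u hive -p -e "USE hivedb; SHOW TABLES;"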
Click Retry; the error persists.
Click to view the full log.
Click the link.
In the search box, search for hive.metastore.schema.verification, uncheck it, save the changes, then return to the installation page and click Retry.
It still fails; check the full log:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
NestedThrowablesStackTrace:
Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Exception in thread "main" javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
NestedThrowablesStackTrace:
Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Root cause:
The MySQL binlog_format parameter is set incorrectly: it was STATEMENT and must be changed to MIXED. To fix it, add binlog_format=MIXED to /usr/my.cnf.
Then restart MySQL (see the sketch below) and click Retry again; everything passes this time. Click Continue.
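The following is a sketch of that change, assuming MySQL was installed from the official RPMs so that its configuration lives in /usr/my.cnf and the service is named mysql (adjust to your layout):
# Append binlog_format=MIXED right after the [mysqld] section header
sed -i '/^\[mysqld\]/a binlog_format=MIXED' /usr/my.cnf
# Restart MySQL and verify that the new value took effect
service mysql restart
mysql -u root -p -e "SHOW VARIABLES LIKE 'binlog_format';"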
1.16 Finish the Installation
2. Tuning
2.1 Installation Complete
2.2 HDFS Configuration Warning
Click the yellow wrench icon; the warning concerns the NameNode Java heap size.
Change it to 4 GiB and click Save; set every heap-size field in the dialog to 4 GiB.
Restart the stale services (an optional check on the NameNode host is sketched below).
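As an optional check, you can confirm on the NameNode host that the new heap setting took effect after the restart by inspecting the running process arguments:
# The -Xmx value of the NameNode process should now reflect the 4 GiB setting
ps -ef | grep org.apache.hadoop.hdfs.server.namenode.NameNode | grep -o -- '-Xmx[^ ]*'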
2.3 Enable HDFS High Availability