5. Problems encountered during installation

5.1 Running ambari-server start fails with ERROR: Exiting with exit code -1.

5.1.1 REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information

Solution:

Because this was a reinstall, this error appears when the database is initialized with /etc/init.d/postgresql initdb. To fix it:

First uninstall PostgreSQL with yum -y remove postgresql*

Then delete all files under /var/lib/pgsql/data

Then reconfigure the PostgreSQL database (follow section 1.6)

Then install Ambari again (follow section 3)
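To keep the cleanup in one place, here is a minimal sketch of the steps above; the paths assume the stock PostgreSQL packages on RHEL/CentOS 6 that Ambari uses by default.

yum -y remove postgresql*
rm -rf /var/lib/pgsql/data/*
# re-run the PostgreSQL configuration from section 1.6, e.g. starting with:
/etc/init.d/postgresql initdb
/etc/init.d/postgresql start
# then reinstall Ambari as described in section 3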

5.1.2 The log contains the following error (the same stack trace is analyzed, together with the fix, in section 5.15): ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

com.google.inject.ProvisionException: Guice provision errors:

1) Error injecting method, java.lang.NullPointerException

at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

while locating org.apache.ambari.server.api.services.AmbariMetaInfo

for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

while locating org.apache.ambari.server.controller.AmbariServer

1 error

at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

at org.apache.ambari.server.stack.StackManager.(StackManager.java:119)

at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

at com.sun.proxy.$Proxy26.create(Unknown Source)

at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

5.2 Installing HDFS and HBase fails with /usr/hdp/current/hadoop-client/conf doesn't exist

5.2.1 The /etc/hadoop/conf symlink exists

This is because /etc/hadoop/conf and /usr/hdp/current/hadoop-client/conf are symlinked to each other, creating a circular link, so one of the two links has to be changed:

cd /etc/hadoop

rm -rf conf

ln -s /etc/hadoop/conf.backup /etc/hadoop/conf

HBase runs into the same problem; the fix is the same:

cd /etc/hbase

rm -rf conf

ln -s /etc/hbase/conf.backup /etc/hbase/conf

ZooKeeper runs into the same problem; the fix is the same:

cd /etc/zookeeper

rm -rf conf

ln -s /etc/zookeeper/conf.backup /etc/zookeeper/conf

5.2.2 The /etc/hadoop/conf symlink does not exist

Comparing against a correct installation shows that two directories, conf.backup and 2.4.0.0-169, are missing; copy them into /etc/hadoop (a sketch follows).
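One possible way to restore the missing directories is to copy them from a node whose /etc/hadoop layout is still intact; the host name good-node below is a placeholder for such a node.

scp -r root@good-node:/etc/hadoop/conf.backup /etc/hadoop/
scp -r root@good-node:/etc/hadoop/2.4.0.0-169 /etc/hadoop/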

Then recreate the conf symlink in /etc/hadoop:

cd /etc/hadoop

rm -rf conf

ln -s /usr/hdp/current/hadoop-client/conf conf

Problem solved.

5.3 During host confirmation (Confirm Hosts), the error Ambari agent machine hostname (localhost) does not match expected ambari server hostname appears

During the Confirm Hosts step of the Ambari setup, a puzzling error kept coming up:

Ambari agent machine hostname (localhost.localdomain) does not match expected ambari server hostname (xxx).

The fix was to modify /etc/hosts.

Before:

127.0.0.1   localhost dsj-kj1
::1         localhost dsj-kj1

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

After:

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

At first it looked like an IPv6 issue, but the real cause is that the hostname dsj-kj1 was mapped to the loopback addresses, so the agent resolved its hostname as localhost.localdomain; once dsj-kj1 was removed from the 127.0.0.1 and ::1 lines, host confirmation succeeded.

5.4 Reinstalling ambari-server

Remove the old installation with the removal script (a rough manual equivalent is sketched below).
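If the removal script is not at hand, this is a rough manual equivalent; these are the standard Ambari server/agent paths, but double-check them against your installation before deleting anything.

ambari-server stop
ambari-agent stop
yum -y erase ambari-server ambari-agent
rm -rf /var/lib/ambari-server /var/lib/ambari-agent /etc/ambari-server /etc/ambari-agent
rm -rf /var/log/ambari-server /var/log/ambari-agent /var/run/ambari-server /var/run/ambari-agent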

Note that after the removal, the following system packages need to be reinstalled:

yum -y install ruby*

yum -y install redhat-lsb*

yum -y install snappy*

For the installation itself, refer to section 3.

5.5 Configuring Ambari to connect to MySQL

On the master node, copy the MySQL JDBC driver jar into /var/lib/ambari-server/resources and rename it to mysql-jdbc-driver.jar:

cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar

Then start Hive from the Ambari web UI.
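Alternatively, Ambari can register the driver itself through ambari-server setup, which avoids the manual rename; this is a sketch assuming the connector jar sits at the path used in the cp command above.

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.17.jar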

5.6 During host registration (Confirm Hosts), the error Failed to start ping port listener of: [Errno 98] Address already in use appears

The agent's ping port is already occupied by another process.

Solution:
It turned out a df command had been running without completing and was holding port 8670:

[root@testserver1 ~]# netstat -lanp|grep 8670
tcp        0      0 0.0.0.0:8670                0.0.0.0:*                   LISTEN      2587/df

[root@testserver1 ~]# kill -9 2587
After killing it, restart ambari-agent and the problem is resolved:

[root@testserver1 ~]# service ambari-agent restart
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
Checking for previously running Ambari Agent...
Starting ambari-agent
Verifying ambari-agent process status...
Ambari Agent successfully started
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log

5.7 During host registration (Confirm Hosts), the warning The following hosts have Transparent HugePages (THP) enabled appears; THP should be disabled to avoid potential Hadoop performance issues

Solution:
Run the following on the affected Linux hosts:

echo never >/sys/kernel/mm/redhat_transparent_hugepage/defrag

echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/defrag
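These echo commands only last until the next reboot. One common way to make the change persistent, assuming /etc/rc.local is executed at boot on your distribution, is to append the same commands there:

cat >> /etc/rc.local <<'EOF'
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
chmod +x /etc/rc.local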

5.8 Starting Hive fails with a UnicodeDecodeError (ambari ... in position 117)

Checking the /etc/sysconfig/i18n file shows the following content:

LANG="zh_CN.UTF8"

The system locale had been set to Chinese; change it to the following and the problem is solved:

LANG="en_US.UTF-8"

5.9 Installing Ambari Metrics reports the following errors because the packages cannot be found

1.failure: Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

On the server hosting the local repository, run:

cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6

mkdir Updates-ambari-2.2.1.0

cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0

Then regenerate the repodata:

cd /var/www/html/ambari

rm -rf repodata

createrepo ./

2.failure: HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

In the /etc/yum.repos.d directory, delete mnt.repo and clear the yum cache with yum clean all:

cd /etc/yum.repos.d

rm -rf mnt.repo

yum clean all
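After cleaning the cache it is worth confirming that the repositories resolve before retrying the install; a quick check (the repo names will differ per setup):

yum makecache
yum repolist enabled    # the HDP, HDP-UTILS and Ambari repos should all show a non-zero package count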

5.11 jps reports process information unavailable

4791 -- process information unavailable

Solution:

Go into /tmp and delete the directories named hsperfdata_{username}; run jps again and the stale entries are gone.

Script:

cd /tmp

ls | grep hsperf | xargs rm -rf

ls | grep hsperf

5.12 The NameNode fails to start; the log contains ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode

The log also contains java.net.BindException: Port in use: gmaster:50070

Caused by: java.net.BindException: Address already in use

The cause is that port 50070 was not released after the previous run and is still occupied (a quick check is sketched after the notes below).

About TCP connections in the TIME_WAIT state (as shown by netstat):
1. This is the state a connection passes through just before it is fully closed;
2. On Windows Server it normally takes about 4 minutes for such a connection to close completely;
3. Connections in this state still hold handles, ports and other resources, and the server also spends resources maintaining them;
4. The only real remedy is to let the server recycle and reuse TIME_WAIT resources faster. On Windows, edit the registry key [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters] and add the DWORD values TcpTimedWaitDelay=30 (30 is the value Microsoft recommends; the default is 2 minutes) and MaxUserPort=65534 (valid range 5000 - 65534);
5. Further TCP/IP tuning parameters are described at http://technet.microsoft.com/zh-tw/library/cc776295%28v=ws.10%29.aspx
6. On Linux:
vi /etc/sysctl.conf
and add the following:
net.ipv4.tcp_tw_reuse = 1 
net.ipv4.tcp_tw_recycle = 1 
net.ipv4.tcp_syncookies=1

net.ipv4.tcp_fin_timeout=30

net.ipv4.tcp_keepalive_time=1800

net.ipv4.tcp_max_syn_backlog=8192

Apply the kernel parameters:
[root@web02 ~]# sysctl -p
Notes:
net.ipv4.tcp_syncookies=1 enables SYN cookies, which protect against SYN floods when the SYN queue overflows.
net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1 enable fast reuse/recycling of TIME-WAIT sockets, which is very effective on servers with large numbers of connections.
net.ipv4.tcp_fin_timeout=30 reduces the time a connection stays in FIN-WAIT-2, so the system can handle more connections.
net.ipv4.tcp_keepalive_time=1800 shortens the TCP keepalive probe interval so dead connections are detected and cleaned up sooner.
net.ipv4.tcp_max_syn_backlog=8192 increases the TCP SYN queue length so the system can handle more concurrent connection attempts.
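Before (or instead of) tuning the kernel, it helps to confirm what is actually holding port 50070; a minimal check:

netstat -anp | grep 50070    # look for LISTEN/TIME_WAIT entries and the owning PID
# if a stale process still owns the port, stop it and restart the NameNode from Ambari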

5.13 Startup fails with resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh

The log contains the following:

2016-03-31 13:55:28,090 INFO  security.ShellBasedIdMapping (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:(92)) - Stream timeout is 600000ms.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:(100)) - Maximum open streams is 256

2016-03-31 13:55:28,096 INFO  nfs3.OpenFileCtxCache (OpenFileCtxCache.java:(54)) - Maximum open streams is 256

2016-03-31 13:55:28,259 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:(205)) - Configured HDFS superuser is

2016-03-31 13:55:28,261 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory /tmp/.hdfs-nfs

2016-03-31 13:55:28,269 WARN  fs.FileUtil (FileUtil.java:deleteImpl(187)) - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.

This shows that the hdfs user does not have the necessary permissions on /tmp.

Grant ownership to the hdfs user:

chown  hdfs:hadoop /tmp

Start the service again and the problem is solved.
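Note that changing the owner of /tmp can affect other services on the host; a less intrusive alternative that is usually sufficient is to restore the standard world-writable sticky-bit permissions on /tmp:

chmod 1777 /tmp
ls -ld /tmp    # should show drwxrwxrwt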

5.14 Installing the Ranger component fails because the rangeradmin MySQL user cannot connect and cannot be granted privileges

First drop all existing rangeradmin users in the database, using the DROP USER command:

drop user 'rangeradmin'@'%';

drop user 'rangeradmin'@'localhost';

drop user 'rangeradmin'@'gmaster';

drop user 'rangeradmin'@'gslave1';

drop user 'rangeradmin'@'gslave2';

FLUSH PRIVILEGES;

Then recreate the users (note that gmaster is the hostname of the server where Ranger is installed):

CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%'  with grant option;

CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost'  with grant option;

CREATE USER 'rangeradmin'@'gmaster' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'gmaster'  with grant option;

FLUSH PRIVILEGES;

Then verify the privileges:

SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;

select * from mysql.user where user='rangeradmin' \G;

Problem solved.

5.15 Ambari fails to start with the error AmbariServer:820 - Failed to run the Ambari Server

This problem took a long time to track down; the cause was finally found by reading the source code.

The file /var/log/ambari-server/ambari-server.log contains the error:

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:458 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain an upgrade directory

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:468 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain config upgrade pack file

13 Apr 2016 14:16:01,744  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS/role_command_order.json

13 Apr 2016 14:16:01,840  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.4/role_command_order.json

13 Apr 2016 14:16:01,927 ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

com.google.inject.ProvisionException: Guice provision errors:

1) Error injecting method, java.lang.NullPointerException

at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

while locating org.apache.ambari.server.api.services.AmbariMetaInfo

for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

while locating org.apache.ambari.server.controller.AmbariServer

1 error

at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

at org.apache.ambari.server.stack.StackManager.(StackManager.java:119)

at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

at com.sun.proxy.$Proxy26.create(Unknown Source)

at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke()

at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)

at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)

at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)

at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)

at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)

at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)

at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)

at com.google.inject.Scopes$1$1.get(Scopes.java:65)

at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)

at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)

at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)

at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)

at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)

at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)

at com.google.inject.Scopes$1$1.get(Scopes.java:65)

at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)

at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)

at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

... 2 more

Solution:

The problem was eventually traced to the <os> entry in /var/lib/ambari-server/resources/stacks/HDP/2.4/repos/repoinfo.xml: the NullPointerException in StackModule.processRepositories appears when the repository definition for the host OS in this file is missing or malformed. After correcting that <os> entry and restarting ambari-server, the problem was solved.

5.16 Starting Hive fails with Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)

On the machine where Hive is installed, the logs under /var/lib/ambari-agent/data report the following error:

Traceback (most recent call last):

File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in

HiveMetastore().execute()

File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute

method(env)

File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start

self.configure(env)

File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure

hive(name = 'metastore')

File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk

return fn(*args, **kwargs)

File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive

user = params.hive_user

File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__

self.env.run()

File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run

self.run_action(resource, action)

File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action

provider_action()

File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run

tries=self.resource.tries, try_sleep=self.resource.try_sleep)

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner

result = function(command, **kwargs)

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call

tries=tries, try_sleep=try_sleep)

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper

result = _call(command, **kwargs_copy)

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call

raise Fail(err_msg)

resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.

Metastore connection URL:        jdbc:mysql://a2slave1/hive?createDatabaseIfNotExist=true

Metastore Connection Driver :    com.mysql.jdbc.Driver

Metastore connection User:       hive

Starting metastore schema initialization to 1.2.1000

Initialization script hive-schema-1.2.1000.mysql.sql

Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)

org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!

*** schemaTool failed ***

Solution:

Copy the script hive-schema-1.2.1000.mysql.sql from the Hive host to the machine running the metastore MySQL database:

[root@a2master /]# scp /usr/hdp/2.4.0.0-169/hive/scripts/metastore/upgrade/mysql/hive-schema-1.2.1000.mysql.sql root@a2slave1:/usr/local/mysql

hive-schema-1.2.1000.mysql.sql                                                                   100%   34KB  34.4KB/s   00:00

Log in to MySQL as the hive user, switch to the hive database, and run the script:

[root@a2slave1 conf.server]# mysql -uhive -p

Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 505

Server version: 5.6.26-log MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use hive;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

mysql> source /usr/local/mysql/hive-schema-1.2.1000.mysql.sql;

Problem solved.
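After sourcing the schema, the metastore version can be checked with schematool; the path, -dbType and credentials below mirror the command Ambari ran in the error above, with the password left as a placeholder.

export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server
/usr/hdp/current/hive-metastore/bin/schematool -dbType mysql -info -userName hive -passWord <hive-password>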

5.17 Starting Hive fails with Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

The log file hiveserver.log under /var/log/hive records:

2016-04-15 10:45:20,446 INFO  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(405)) - Starting HiveServer2

2016-04-15 10:45:20,573 INFO  [main]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called

2016-04-15 10:45:20,585 INFO  [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:(140)) - Using direct SQL, underlying DB is MYSQL

2016-04-15 10:45:20,585 INFO  [main]: metastore.ObjectStore (ObjectStore.java:setConf(277)) - Initialized ObjectStore

2016-04-15 10:45:20,590 WARN  [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException

2016-04-15 10:45:20,591 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000

2016-04-15 10:45:20,600 WARN  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultDB(623)) - Retrying creating default database after error: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)

at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)

at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

NestedThrowablesStackTrace:

Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)

at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)

at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)

at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)

at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)

at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)

at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)

at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)

at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)

at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)

at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)

at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)

at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)

at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)

at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)

at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)

at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

2016-04-15 10:45:20,607 WARN  [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException

2016-04-15 10:45:20,609 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000

2016-04-15 10:45:20,617 INFO  [main]: server.HiveServer2 (HiveServer2.java:stop(371)) - Shutting down HiveServer2

2016-04-15 10:45:20,618 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(442)) - Error starting HiveServer2 on attempt 29, will retry in 60 seconds

java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)

at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:494)

at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

... 12 more

Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

... 14 more

Caused by: java.lang.reflect.InvocationTargetException

at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

... 20 more

Caused by: javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

NestedThrowables:

org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)

at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)

at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:625)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

... 24 more

Caused by: org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)

at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)

at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)

at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)

at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)

at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)

at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)

at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)

at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)

at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)

at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)

at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)

at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)

at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)

at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)

at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)

... 39 more

This eventually turned out to be an incorrect binlog_format setting in the MySQL database.

It had been set to STATEMENT; change it to MIXED.

To change it, add binlog_format=MIXED to /etc/my.cnf,

then restart MySQL.

Problem solved.
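To confirm the setting, and to change it on a running server before the config-file change takes effect, the standard MySQL statements can be used; a sketch:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'binlog_format';"
# change it at runtime as well (still keep binlog_format=MIXED in /etc/my.cnf so it survives a restart)
mysql -uroot -p -e "SET GLOBAL binlog_format='MIXED';"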

5.18 Entering Hive with the hive command fails with Permission denied: user=root, access=WRITE, inode="/user/root":hdfs:hdfs:drwxr-xr-x

Solution:

1. Use the HDFS command-line interface to change the permissions on the relevant directory: hadoop fs -chmod 777 /user. The trailing /user is the path the file will be written to and may differ in your case; for example, if the upload path is hdfs://namenode/user/xxx.doc this command works as-is, whereas for hdfs://namenode/java/xxx.doc you would run hadoop fs -chmod 777 /java (creating the /java directory in HDFS first) or hadoop fs -chmod 777 / to open up the root directory.

Script:

su - hdfs

hadoop fs -chmod 777 /user

2. Add export HADOOP_USER_NAME=hdfs to /etc/profile as a system environment variable (or set it as a Java JVM variable). Ambari runs Hadoop as the hdfs user; set the value to whichever Linux user will be running jobs on Hadoop.

export HADOOP_USER_NAME=hdfs
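Opening /user with mode 777 works but is broad; a narrower alternative, assuming the failing user is root as in the error message above, is to create a home directory for that user in HDFS and hand it over:

su - hdfs -c "hdfs dfs -mkdir -p /user/root && hdfs dfs -chown root:root /user/root"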

5.19 The Spark Thrift Server component shuts down by itself after starting

The Spark Thrift Server shuts down right after it starts. The log file spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-a2master.out under /var/log/spark contains:

16/04/18 10:26:10 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.

16/04/18 10:26:10 INFO Client: Requesting a new application from cluster with 3 NodeManagers

16/04/18 10:26:10 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (512 MB per container)

16/04/18 10:26:10 ERROR SparkContext: Error initializing SparkContext.

java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (512 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)

at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)

at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)

at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)

at org.apache.spark.SparkContext.(SparkContext.scala:530)

at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:56)

at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:76)

at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)

at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)

at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

16/04/18 10:26:11 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}

Analysis: the memory configured for YARN is too small — YARN's maximum container allocation (512 MB) is below the executor memory Spark requests (1024+384 MB).

Solution: in the Ambari web UI, raise the memory setting to 1536 MB (the properties named in the error are yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb), push the updated configuration to the other machines, and restart the Spark service.
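The two properties to raise are exactly the ones named in the error message; a sketch of what to check, assuming the Ambari-managed yarn-site.xml is in the default location:

# In the Ambari UI: YARN -> Configs, raise both values to at least 1536 MB, save, then restart YARN and Spark
grep -A1 -E 'yarn.scheduler.maximum-allocation-mb|yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml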

5.20 HBase fails to start with Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461130860883":hdfs:hdfs:drwxr-xr-x

The log contains the following:

2016-04-20 15:42:11,640 INFO  [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: Allocating LruBlockCache size=401.60 MB, blockSize=64 KB

2016-04-20 15:42:11,648 INFO  [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=433080, freeSize=420675048, maxSize=421108128, heapSize=433080, minSize=400052704, minFactor=0.95, multiSize=200026352, multiFactor=0.5, singleSize=100013176, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false

2016-04-20 15:42:11,704 INFO  [regionserver/gslave2/192.168.1.253:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider

2016-04-20 15:42:11,729 INFO  [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: STOPPED: Failed initialization

2016-04-20 15:42:11,729 ERROR [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Failed init

org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)

at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)

at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)

at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)

at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)

at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)

at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)

at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)

at org.apache.hadoop.hbase.regionserver.wal.FSHLog.(FSHLog.java:488)

at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)

at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)

at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:179)

at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)

at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)

at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at org.apache.hadoop.ipc.Client.call(Client.java:1411)

at org.apache.hadoop.ipc.Client.call(Client.java:1364)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)

at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)

at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)

... 15 more

2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: ABORTING region server gslave2,16020,1461138130424: Unhandled: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)

at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)

at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)

at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)

at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)

at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)

at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)

at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)

at org.apache.hadoop.hbase.regionserver.wal.FSHLog.(FSHLog.java:488)

at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)

at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)

at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:179)

at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)

at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)

at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at org.apache.hadoop.ipc.Client.call(Client.java:1411)

at org.apache.hadoop.ipc.Client.call(Client.java:1364)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)

at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)

at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)

... 15 more

2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []

2016-04-20 15:42:11,744 INFO  [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Dump of metrics as JSON on abort: {

Solution:

su - hdfs

hdfs dfs -chown -R hbase:hbase /apps/hbase
