Installing and Configuring Kerberos for YARN & HDFS2
Today I tried configuring Kerberos on a Hadoop 2.x development cluster and ran into a few problems along the way; these are my notes.
Setting up Hadoop security
core-site.xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
hadoop.security.authentication defaults to simple, which just trusts the client's local OS account; here we switch it to kerberos.
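To confirm the new values are actually being picked up, you can query the effective configuration from a shell (a quick sanity check; it assumes the HADOOP_CONF_DIR you edited is the one on the client's path):

hdfs getconf -confKey hadoop.security.authentication   # expect: kerberos
hdfs getconf -confKey hadoop.security.authorization    # expect: true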
hdfs-site.xml
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.https.enable</name>
  <value>false</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>dev80.hadoop:50470</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>dev80.hadoop:50090</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.secondary.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1003</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1007</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:1005</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop.keytab</value>
  <description>
    The Kerberos keytab file with the credentials for the
    HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
  </description>
</property>
dfs.datanode.address is the hostname or IP address and port that the DataNode's data transceiver server binds to. With security enabled, the port number must be below 1024, otherwise the DataNode fails on startup with a "Cannot start secure cluster without privileged resources" error.
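All of the settings above reference a single shared keytab, /etc/hadoop.keytab, holding the hadoop/, host/ and HTTP/ principals for each node. As a sketch, with an MIT Kerberos KDC such a keytab could be produced like this (principal and host names follow this post; adapt them to your realm):

# On the KDC: create the per-host principals with random keys
kadmin.local -q "addprinc -randkey hadoop/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey host/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey HTTP/dev80.hadoop@DIANPING.COM"
# Export all three into one keytab, then distribute it to the node
kadmin.local -q "ktadd -k /etc/hadoop.keytab hadoop/dev80.hadoop host/dev80.hadoop HTTP/dev80.hadoop"

The keytab should then be readable only by the account running the daemons, e.g. chown hadoop /etc/hadoop.keytab && chmod 400 /etc/hadoop.keytab.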
Secure DataNodes are started through jsvc, which binds the privileged ports as root before dropping privileges; the binary looks like this:
jsvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
The jsvc-related settings go into hadoop-env.sh:
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
export JSVC_HOME=/usr/local/hadoop/hadoop-2.1.0-beta/libexec
# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hadoop
# The directory where pid files are stored. /tmp by default
export HADOOP_SECURE_DN_PID_DIR=/usr/local/hadoop
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/data/logs
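With these variables in place, the secure DataNode has to be launched as root; jsvc binds the privileged ports (1003/1007 above) and then drops to HADOOP_SECURE_DN_USER. A minimal sketch, assuming this post's install path:

# Must be run as root; jsvc drops privileges to the hadoop user afterwards
cd /usr/local/hadoop/hadoop-2.1.0-beta
sbin/hadoop-daemon.sh start datanode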
yarn-site.xml
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
When the NodeManager starts up with LinuxContainerExecutor, container-executor checks the ownership of its configuration directory, and may fail with:
Caused by: org.apache.hadoop.util.Shell$ExitCodeException: File /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop must be owned by root, but is owned by 500
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
    at org.apache.hadoop.util.Shell.run(Shell.java:373)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:147)
Another check is the minimum uid allowed to launch containers, set in container-executor.cfg; since the hadoop account here has uid 500, min.user.id has to be lowered below it:
/container-executor.cfg
min.user.id=499
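Beyond min.user.id, LinuxContainerExecutor insists that the container-executor binary and its configuration are root-owned, with the group matching yarn.nodemanager.linux-container-executor.group and the setuid/setgid bits set. A sketch of the usual fix, assuming this post's layout:

# container-executor must be root-owned, group hadoop, mode 6050 (---Sr-s---)
cd /usr/local/hadoop/hadoop-2.1.0-beta
chown root:hadoop bin/container-executor
chmod 6050 bin/container-executor
# The conf dir and container-executor.cfg must be root-owned too (this is
# exactly what the "must be owned by root" error above complains about)
chown root:hadoop etc/hadoop etc/hadoop/container-executor.cfg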
mapred-site.xml
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
Before running any job, the client needs a Kerberos ticket:
[hadoop@dev80 hadoop]$ kinit -r 24h -k -t /home/hadoop/.keytab hadoop
[hadoop@dev80 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: hadoop@DIANPING.COM
Valid starting Expires Service principal
09/11/13 15:25:34 09/12/13 15:25:34 krbtgt/DIANPING.COM@DIANPING.COM
renew until 09/12/13 15:25:34
Here /tmp/krb5cc_500 is the ticket cache file, where 500 is the uid of the hadoop account; Kerberos clients read this per-uid cache by default.
Without a valid ticket cache, Hadoop commands fail like this:
13/09/11 16:21:35 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
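Since the ticket obtained above is renewable for 24 hours, long-running client sessions can avoid this error by re-running kinit from the keytab periodically, e.g. from the hadoop user's crontab (a sketch; the keytab path follows this post):

# Re-obtain a fresh TGT every 8 hours, well inside the 24h renewable window
0 */8 * * * kinit -r 24h -k -t /home/hadoop/.keytab hadoop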
For reference, the principals in the keytab (each principal appears once per encryption type):
[hadoop@dev80 hadoop]$ klist -k -t /etc/hadoop.keytab
Keytab name: WRFILE:/etc/hadoop.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
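To check that an entry in the keytab actually authenticates against the KDC, you can kinit as one of the service principals directly (no password is needed with -k):

kinit -k -t /etc/hadoop.keytab hadoop/dev80.hadoop@DIANPING.COM
klist    # should now show a TGT for hadoop/dev80.hadoop@DIANPING.COM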