Other articles in the big data security series

https://www.cnblogs.com/bainianminguo/p/12548076.html-----------Installing Kerberos

https://www.cnblogs.com/bainianminguo/p/12548334.html-----------Kerberos authentication for Hadoop

https://www.cnblogs.com/bainianminguo/p/12548175.html-----------Kerberos authentication for ZooKeeper

https://www.cnblogs.com/bainianminguo/p/12584732.html-----------Kerberos authentication for Hive

https://www.cnblogs.com/bainianminguo/p/12584880.html-----------search-guard authentication for Elasticsearch

https://www.cnblogs.com/bainianminguo/p/12639821.html-----------Kerberos authentication for Flink

https://www.cnblogs.com/bainianminguo/p/12639887.html-----------Kerberos authentication for Spark

This post walks through configuring Kerberos authentication for ZooKeeper.

I. ZooKeeper installation

1. Extract the package, rename the directory, and create the data and log directories

tar -zxvf /data/apache-zookeeper-3.5.5-bin.tar.gz -C /usr/local/
cd /usr/local/
mv apache-zookeeper-3.5.5-bin/ zookeeper/
mkdir -p zookeeper/data zookeeper/log

  

2. Check the extracted directory

[root@localhost zookeeper]# ll
total 36
drwxr-xr-x. 2 2002 2002 4096 Apr 9 2019 bin
drwxr-xr-x. 2 2002 2002 88 Feb 27 22:09 conf
drwxr-xr-x. 2 root root 6 Feb 27 21:48 data
drwxr-xr-x. 5 2002 2002 4096 May 3 2019 docs
drwxr-xr-x. 2 root root 4096 Feb 27 21:25 lib
-rw-r--r--. 1 2002 2002 11358 Feb 15 2019 LICENSE.txt
drwxr-xr-x. 2 root root 6 Feb 27 21:48 log
-rw-r--r--. 1 2002 2002 432 Apr 9 2019 NOTICE.txt
-rw-r--r--. 1 2002 2002 1560 May 3 2019 README.md
-rw-r--r--. 1 2002 2002 1347 Apr 2 2019 README_packaging.txt

 

3. Edit the configuration file

[root@localhost conf]# ll
total 16
-rw-r--r--. 1 2002 2002 535 Feb 15 2019 configuration.xsl
-rw-r--r--. 1 2002 2002 2712 Apr 2 2019 log4j.properties
-rw-r--r--. 1 root root 922 Feb 27 21:36 zoo.cfg
-rw-r--r--. 1 2002 2002 922 Feb 15 2019 zoo_sample.cfg
[root@localhost conf]#
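The listing already shows a zoo.cfg. On a fresh extract it will not exist yet; the usual approach (an assumption, since the original output does not show this step) is to copy the bundled sample and then edit it:

cd /usr/local/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg

The zoo.cfg used in this setup ends up as follows.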

  

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=cluster1_host1:2888:3888
server.2=cluster1_host2:2888:3888
server.3=cluster1_host3:2888:3888
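
The server.N entries refer to the nodes by hostname, so every node must be able to resolve cluster1_host1 through cluster1_host3. If DNS does not cover them, /etc/hosts entries along these lines will do (the IP addresses here are placeholders, not taken from the original post):

192.168.1.11 cluster1_host1
192.168.1.12 cluster1_host2
192.168.1.13 cluster1_host3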

  

4. Create the myid file

[root@localhost data]# pwd
/usr/local/zookeeper/data
[root@localhost data]#
[root@localhost data]#
[root@localhost data]# ll
total 4
-rw-r--r--. 1 root root 2 Feb 27 22:10 myid
[root@localhost data]# cat myid
1
[root@localhost data]#
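
For reference, the file can be created with a single command; the value must match this node's server.N id in zoo.cfg (server.1 here):

echo 1 > /usr/local/zookeeper/data/myid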

  

5. Copy the installation directory to the other nodes

scp -r zookeeper/ root@10.8.8.33:/usr/local/

  

Then edit the myid file on each of the other nodes so that its value matches that node's server.N entry in zoo.cfg (see the sketch below).
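
A minimal sketch, assuming the other two nodes are reachable as cluster1_host2 and cluster1_host3 (substitute IP addresses if the hostnames do not resolve) and that root SSH access is available:

ssh root@cluster1_host2 "echo 2 > /usr/local/zookeeper/data/myid"
ssh root@cluster1_host3 "echo 3 > /usr/local/zookeeper/data/myid"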

6. Start ZooKeeper

[root@localhost bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost bin]# jps
28350 Jps
25135 QuorumPeerMain
[root@localhost bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@localhost bin]#

  

II. ZooKeeper Kerberos configuration

1. Create the Kerberos principals and keytab for ZooKeeper

kadmin.local:  addprinc zookeeper/cluster2-host1
WARNING: no policy specified for zookeeper/cluster2-host1@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host1@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host1@HADOOP.COM":
Principal "zookeeper/cluster2-host1@HADOOP.COM" created.
kadmin.local: addprinc zookeeper/cluster2-host2
WARNING: no policy specified for zookeeper/cluster2-host2@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host2@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host2@HADOOP.COM":
Principal "zookeeper/cluster2-host2@HADOOP.COM" created.
kadmin.local: addprinc zookeeper/cluster2-host3
WARNING: no policy specified for zookeeper/cluster2-host3@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host3@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host3@HADOOP.COM":
Principal "zookeeper/cluster2-host3@HADOOP.COM" created.
[root@cluster2-host1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local: addprinc zkcli/hadoop
kadmin.local: ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host1
kadmin.local: ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host2
kadmin.local: ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host3
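
Before copying the keytab around, it is worth confirming that all three zookeeper/<host> principals actually landed in it; a quick check with klist:

klist -kt /etc/security/keytab/zk-server.keytab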

  

Copy the keytab to all of the nodes

[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host2:/usr/local/zookeeper/conf/
zk-server.keytab 100% 1664 1.6KB/s 00:00
[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host1:/usr/local/zookeeper/conf/
zk-server.keytab 100% 1664 1.6KB/s 00:00
[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host3:/usr/local/zookeeper/conf/
zk-server.keytab
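
The keytab holds long-term keys, so tightening its permissions on every node is a sensible extra step (not part of the original walkthrough; shown here for one node, repeat on the others):

chmod 400 /usr/local/zookeeper/conf/zk-server.keytab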

  

2. Edit the ZooKeeper configuration file (zoo.cfg) and append the following lines

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true

  

Sync zoo.cfg to the other nodes

[root@cluster2-host1 keytab]# scp /usr/local/zookeeper/conf/zoo.cfg root@cluster2-host2:/usr/local/zookeeper/conf/
zoo.cfg 100% 1207 1.2KB/s 00:00
[root@cluster2-host1 keytab]# scp /usr/local/zookeeper/conf/zoo.cfg root@cluster2-host3:/usr/local/zookeeper/conf/
zoo.cfg

  

3. Create the jaas.conf file under /usr/local/zookeeper/conf/

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-server.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/cluster2-host1@HADOOP.COM";
};

  

Sync it to the other nodes and change the principal in each copy to that node's own hostname (one way to do this is sketched after the scp output below).

[root@cluster2-host1 conf]# scp jaas.conf root@cluster2-host2:/usr/local/zookeeper/conf/
jaas.conf 100% 229 0.2KB/s 00:00
[root@cluster2-host1 conf]# scp jaas.conf root@cluster2-host3:/usr/local/zookeeper/conf/
jaas.conf
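
One way to fix up the principal on each node after the copy, assuming the file is otherwise identical (editing by hand works just as well):

ssh root@cluster2-host2 "sed -i 's/cluster2-host1/cluster2-host2/' /usr/local/zookeeper/conf/jaas.conf"
ssh root@cluster2-host3 "sed -i 's/cluster2-host1/cluster2-host3/' /usr/local/zookeeper/conf/jaas.conf"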

  

4. Create the client principals

kadmin.local:  addprinc zkcli/cluster2-host1
kadmin.local: addprinc zkcli/cluster2-host2
kadmin.local: addprinc zkcli/cluster2-host3

  

kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host1
kadmin.local: ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host2
kadmin.local: ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host3
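
Optionally, the client keytab can be sanity-checked with kinit before it is distributed (this assumes the default realm in /etc/krb5.conf is HADOOP.COM):

kinit -kt /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host1
klist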

  

Distribute the client keytab to the other nodes

[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host1:/usr/local/zookeeper/conf/
zk-clie.keytab 100% 1580 1.5KB/s 00:00
[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host2:/usr/local/zookeeper/conf/
zk-clie.keytab 100% 1580 1.5KB/s 00:00
[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host3:/usr/local/zookeeper/conf/
zk-clie.keytab

  

5. Configure the client-jaas.conf file

[root@cluster2-host1 conf]# cat client-jaas.conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-clie.keytab"
  storeKey=true
  useTicketCache=false
  principal="zkcli/cluster2-host1@HADOOP.COM";
};

  

Distribute it to the other nodes and change the principal in each copy to that node's hostname (the expected contents for cluster2-host2 are shown after the scp output below).

[root@cluster2-host1 conf]# scp client-jaas.conf root@cluster2-host2:/usr/local/zookeeper/conf/
client-jaas.conf 100% 222 0.2KB/s 00:00
[root@cluster2-host1 conf]# scp client-jaas.conf root@cluster2-host3:/usr/local/zookeeper/conf/
client-jaas.conf
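
After editing, the copy on cluster2-host2 should read as follows (only the principal differs; cluster2-host3 is analogous):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-clie.keytab"
  storeKey=true
  useTicketCache=false
  principal="zkcli/cluster2-host2@HADOOP.COM";
};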

  

6. Verify the ZooKeeper Kerberos setup

Verify strictly in the order shown below: point JVMFLAGS at the server JAAS file and start the server first, then switch JVMFLAGS to the client JAAS file before starting the client.

[root@cluster2-host1 bin]# export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/jaas.conf"
[root@cluster2-host1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@cluster2-host1 bin]# export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/client-jaas.conf"
[root@cluster2-host1 bin]#
[root@cluster2-host1 bin]#
[root@cluster2-host1 bin]# echo $JVMFLAGS
-Djava.security.auth.login.config=/usr/local/zookeeper/conf/client-jaas.conf
[root@cluster2-host1 bin]# ./zkCli.sh -server cluster2-host1:2181
[zk: cluster2-host1:2181(CONNECTED) 2] create /abcd "abcdata"
Created /abcd
[zk: cluster2-host1:2181(CONNECTED) 3] ls /
[abc, abcd, zookeeper]
[zk: cluster2-host1:2181(CONNECTED) 4] getAcl /abcd
'world,'anyone
: cdrwa
[zk: cluster2-host1:2181(CONNECTED) 5]

  

When the ZooKeeper client starts, its log also prints a successful-login message from the Kerberos login module; it is worth watching for it to confirm that SASL authentication is actually in use.
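
Exporting JVMFLAGS by hand works for a quick test, but it is lost when the shell exits. A common alternative (an assumption, not part of the original post) is to put the settings into conf/java.env, which zkEnv.sh sources automatically; zkServer.sh then picks up SERVER_JVMFLAGS and zkCli.sh picks up CLIENT_JVMFLAGS:

cat > /usr/local/zookeeper/conf/java.env <<'EOF'
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/jaas.conf"
CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/client-jaas.conf"
EOF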
