IBM Developer: Kafka ACLs
Overview
Apache Kafka supports security features starting with version 0.9. Once Kerberos is enabled, authorization is required to access Kafka resources. In this blog, you will learn how to authorize access to Kafka resources using the Kafka console ACL scripts. ACLs (access control lists) can also be used to authorize access to Kafka resources when SSL is enabled.
Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resource R”.
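That sentence form can be sketched as a small data structure. The class below is purely illustrative (it is not part of Kafka's API) and simply renders an ACL binding in the general format quoted above:

```python
from dataclasses import dataclass

@dataclass
class Acl:
    principal: str   # e.g. "User:kafkatest"
    permission: str  # "Allowed" or "Denied"
    operation: str   # e.g. "Write"
    host: str        # e.g. "9.30.150.22" or "*"
    resource: str    # e.g. "Topic:kafka-testtopic"

    def sentence(self):
        # Render the ACL in Kafka's documented sentence form
        return (f"Principal {self.principal} is {self.permission} "
                f"Operation {self.operation} From Host {self.host} "
                f"On Resource {self.resource}")

acl = Acl("User:kafkatest", "Allowed", "Write", "9.30.150.22", "Topic:kafka-testtopic")
print(acl.sentence())
```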
Kafka resources that can be protected with ACLs are:
- Topic
- Consumer group
- Cluster
Operations on the Kafka resources are as below:
Kafka resource | Operations |
---|---|
Topic | CREATE/READ/WRITE/DESCRIBE |
Consumer group | READ |
Cluster | CLUSTER_ACTION |
Cluster operations (CLUSTER_ACTION) refer to operations necessary for the management of the cluster, like updating broker and partition metadata, changing the leader and the set of in-sync replicas of a partition, and triggering a controlled shutdown.
Kafka Kerberos with ACLs
To enable Kerberos in an IOP 4.2 cluster, follow the steps in Enable Kerberos on IOP 4.2.
After Kerberos is enabled, the following properties are automatically added to the custom Kafka broker configuration.
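The exact list depends on the IOP and Kafka versions; as an illustration only (the property names are standard Kafka broker settings, but treat the values here as assumptions rather than the configuration Ambari actually generates), a Kerberized broker configuration typically gains entries such as:

```properties
# Illustrative only -- verify against your broker's generated configuration
listeners=SASL_PLAINTEXT://hostname.abc.com:6667
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
```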
Kafka console commands running as super user kafka
By default, only the users listed in super.users have permission to access all Kafka resources. The default value for super.users is kafka.
The Kafka home directory in IOP is /usr/iop/current/kafka-broker, and the console scripts referenced in this article are run from that directory.
List Kafka service keytab
[kafka@hostname kafka]# klist -k -t /etc/security/keytabs/kafka.service.keytab
Keytab name: FILE:/etc/security/keytabs/kafka.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
Perform kinit to obtain and cache the Kerberos ticket
[kafka@hostname kafka]# kinit -f -k -t /etc/security/keytabs/kafka.service.keytab kafka/hostname.abc.com@IBM.COM
Create a topic
[kafka@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --replication-factor 1 --partitions 1 --topic mytopic
Created topic "mytopic".
Run Kafka producer
[kafka@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic mytopic --producer.config producer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^C
[kafka@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic mytopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[root@hostname kafka]# cat consumer.properties
security.protocol=SASL_PLAINTEXT
Because we ran the commands as the super user kafka, we could access the Kafka resources without adding any ACLs.
How to add a new user as a super user?
- Update the super.users property in the “Custom kafka-broker” configuration to add additional users as super users. The value is a semicolon-separated list of entries in the format User:<username>; for example, super.users=User:kafka;User:kafkatest configures the users kafka and kafkatest as super users.
- Super users can access all resources without any ACLs being added.
- Restart Kafka
How to add ACLs for new users?
The following example shows how to add ACLs for a new user “kafkatest”.
Create a user kafkatest
[root@hostname kafka]# useradd kafkatest
Note: In the example shown here, the KDC server, Kafka broker, and producer/consumer all run on the same machine. If the KDC server is set up on a different node in your environment, copy the keytab files to /etc/security/keytabs on the nodes where the Kafka producer and consumer run.
Create a principal for kafkatest user
[root@hostname kafka]# kadmin.local
Authenticating as principal kafka/admin@IBM.COM with password.
kadmin.local: addprinc "kafkatest"
Create a Kerberos keytab file
kadmin.local: xst -norandkey -k /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Quit from kadmin
kadmin.local: quit
List and cache the kafkatest Kerberos ticket
[kafkatest@hostname kafka]$ klist -k -t /etc/security/keytabs/kafkatest.keytab
Keytab name: FILE:/etc/security/keytabs/kafkatest.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
[kafkatest@hostname kafka]$ kinit -f -k -t /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Create a topic
[kafkatest@hostname kafka]$ bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --partitions 1 --replication-factor 1 --topic kafka-testtopic
Created topic "kafka-testtopic".
Add write permission for user kafkatest for topic kafka-testtopic:
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Write --authorizer-properties zookeeper.connect=hostname.abc.com:2181
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Run Kafka producer
[kafkatest@hostname kafka]$ bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic kafka-testtopic --producer.config producer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^C
[kafkatest@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Add read permission for user kafkatest for topic kafka-testtopic and consumer group kafkatestgroup
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Read --authorizer-properties zookeeper.connect=hostname.abc.com:2181 --group kafkatestgroup
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Adding ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Run Kafka consumer
[kafkatest@hostname kafka]$ bin/kafka-console-consumer.sh --new-consumer --topic kafka-testtopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[kafkatest@hostname kafka]$ cat consumer.properties
security.protocol=SASL_PLAINTEXT
group.id=kafkatestgroup
Information about the kafka_jaas.conf file:
When Kerberos is enabled in Kafka, this configuration file is passed as a security parameter (-Djava.security.auth.login.config="/usr/iop/current/kafka-broker/conf/kafka_jaas.conf") to the Kafka console scripts.
[root@hostname kafka]# cat /usr/iop/current/kafka-broker/conf/kafka_jaas.conf
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/hostname.abc.com@IBM.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="kafka/hostname.abc.com@IBM.COM";
};
- The KafkaServer section is used by the Kafka broker and for inter-broker communication, for example during the creation of a topic.
- The KafkaClient section is used when running Kafka producers or consumers. Because KafkaClient uses the ticket cache in this example, we have to run kinit to cache the Kerberos ticket before running the producer or consumer.
- The Client section is used for the ZooKeeper connection; Kafka ACLs are stored in ZooKeeper.
What to do when the SASL username (operating system user name) is different from the principal name
Generally, the SASL user name is the same as the primary of the Kerberos principal. When it is not, add the property sasl.kerberos.principal.to.local.rules to the Kafka broker configuration to map principal names to user names. In the following example, a mapping is added from the principal name ambari-qa-bh to the operating system user name ambari-qa.
When Kerberos is enabled from Ambari, the principal name generated for the user ambari-qa has the form ambari-qa-[cluster name]. In this example, the cluster name is bh, so the principal for ambari-qa is generated as ambari-qa-bh.
[root@hostname kafka]# klist -k -t /etc/security/keytabs/smokeuser.headless.keytab
Keytab name: FILE:/etc/security/keytabs/smokeuser.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
For the user ambari-qa, we need to add the following rule:
RULE:[1:$1@$0](ambari-qa-bh@IBM.COM)s/.*/ambari-qa/
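To see what the rule above does, the sketch below mimics its behavior in Python. This is a simplified illustration, not Kafka's actual rule parser: it handles only a single-component principal (primary@REALM), rewrites it with the [1:$1@$0] format, and applies the sed-style s/pattern/replacement/ substitution when the match expression fits:

```python
import re

def apply_rule(principal, match_regex, pattern, replacement):
    """Simplified sketch of a rule of the form
    RULE:[1:$1@$0](<match_regex>)s/<pattern>/<replacement>/
    for a single-component Kerberos principal primary@REALM."""
    primary, realm = principal.split("@")
    candidate = f"{primary}@{realm}"  # [1:$1@$0] rebuilds primary@REALM
    if re.fullmatch(match_regex, candidate):
        # apply the sed-style substitution s/<pattern>/<replacement>/
        return re.sub(pattern, replacement, candidate, count=1)
    return principal  # rule does not match; principal falls through unchanged

print(apply_rule("ambari-qa-bh@IBM.COM", r"ambari-qa-bh@IBM\.COM", r".*", "ambari-qa"))
print(apply_rule("kafka@IBM.COM", r"ambari-qa-bh@IBM\.COM", r".*", "ambari-qa"))
```

The first call maps the principal to ambari-qa; the second shows that a principal the rule does not match is left untouched.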
- Add the rule to sasl.kerberos.principal.to.local.rules in the “Custom kafka-broker” configuration.
- Restart Kafka.
More information about the mapping between principals and user names can be found in the auth_to_local section of the following article: auth to local.
Kafka SSL with ACLs
In this section, we will see how to work with ACLs when SSL is enabled. For information on how to enable SSL in Kafka, follow the steps in the Setup SSL and Enable SSL sections of the Kafka Security Blog.
There is an issue in IOP 4.2 when SSL is enabled in Kafka together with ACLs. Follow the steps in the technote to resolve it.
Add the following properties in the “Custom kafka-broker” configuration to enable authorization with SSL.
- authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
- super.users=User:CN=hostname.ibm.com,OU=iop,O=ibm,L=san jose,ST=california,C=US
Restart the Kafka service from Ambari UI for the changes to take effect.
Note: Add the distinguished name (DN) produced by the command below, which generates the key and certificate for the broker, to the list of super users in Kafka. This allows the Kafka broker itself to access all Kafka resources; as mentioned above, by default only super users have that access. The DN is the SSL user name used as the value for super.users.
[root@hostname security]# keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: iop
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=iop, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
By default, the SSL user name has the form “CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown”. This can be changed by adding the property principal.builder.class to the Kafka broker configuration in the Ambari UI, set to a class that implements the PrincipalBuilder interface (org.apache.kafka.common.security.auth.PrincipalBuilder).
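PrincipalBuilder itself is a Java interface, so a custom builder must be written in Java. Purely as a language-neutral illustration of the kind of mapping such a builder might perform (the function below is hypothetical, not part of any Kafka API), this sketch reduces a full DN to just its CN attribute:

```python
def cn_from_dn(dn):
    """Extract the CN attribute from an X.500 distinguished name string.
    Simplified sketch: assumes no escaped commas inside attribute values."""
    for part in dn.split(","):
        key, _, value = part.strip().partition("=")
        if key.strip().upper() == "CN":
            return value
    return dn  # no CN found; fall back to the full DN

print(cn_from_dn("CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"))
# prints "writeuser"
```

With a mapping like this, ACLs and super.users could be written against the short name instead of the full DN.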
How to add ACLs for a new SSL user?
Create a topic
[root@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.ibm.com:2181 --replication-factor 1 --partitions 1 --topic ssltopic
Created topic "ssltopic".
Add write permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Write --authorizer-properties zookeeper.connect=hostname.ibm.com:2181
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
The user name used above is the DN produced when running the command below, which generates the key and certificate for the Kafka client (producer/consumer).
[root@hostname security]# keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: biginsights
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=biginsights, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
Run Kafka producer
[root@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.ibm.com:6667 --topic ssltopic --producer.config client-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^C
[root@hostname kafka]# cat client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
Add read permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic and consumer group ssl-group
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Read --authorizer-properties zookeeper.connect=hostname.ibm.com:2181 --group ssl-group
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Adding ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic ssltopic --from-beginning --bootstrap-server hostname.ibm.com:6667 --consumer.config client-consumer-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^CProcessed a total of 3 messages
[root@hostname kafka]# cat client-consumer-ssl.properties
group.id=ssl-group
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
How to give everyone permission to access a resource if no ACLs are set for the resource?
- Add allow.everyone.if.no.acl.found=true in the “Custom kafka-broker” configuration.
- Restart Kafka
Conclusion:
This blog described how to configure ACLs in Kafka when Kerberos and SSL are enabled in IOP 4.2. For more information, see the Kafka documentation.