IBM developer:Kafka ACLs
Overview
Apache Kafka supports security features starting with version 0.9. When Kerberos is enabled, clients must be authorized before they can access Kafka resources. In this blog, you will learn how to grant authorization on Kafka resources using the Kafka console ACL scripts. ACLs (access control lists) can likewise be used to authorize access to Kafka resources when SSL is enabled.
Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resource R”.
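To make that format concrete, here is a small shell sketch that maps the "Principal P is Allowed Operation O from Host H on Resource R" sentence onto a kafka-acls.sh invocation. The helper name and the host, topic, and ZooKeeper values are illustrative assumptions, not from the original article.

```shell
# Illustrative helper: turn "Principal P is Allowed Operation O from
# Host H on Resource R" into a kafka-acls.sh command line.
make_acl_cmd() {
  principal=$1; operation=$2; host=$3; topic=$4
  echo "bin/kafka-acls.sh --add --allow-principal User:${principal}" \
       "--operation ${operation} --allow-host ${host} --topic ${topic}" \
       "--authorizer-properties zookeeper.connect=localhost:2181"
}

# "kafkatest is Allowed Write from 9.30.150.22 on topic mytopic"
make_acl_cmd kafkatest Write 9.30.150.22 mytopic
```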
Kafka resources that can be protected with ACLs are:
- Topic
- Consumer group
- Cluster
Operations on the Kafka resources are as follows:
Kafka resource | Operations |
---|---|
Topic | CREATE/READ/WRITE/DESCRIBE |
Consumer group | READ |
Cluster | CLUSTER_ACTION |
Cluster operations (CLUSTER_ACTION) refer to operations necessary for the management of the cluster, like updating broker and partition metadata, changing the leader and the set of in-sync replicas of a partition, and triggering a controlled shutdown.
Kafka Kerberos with ACLs
To enable Kerberos in an IOP 4.2 cluster, follow the steps in Enable Kerberos on IOP 4.2.
After Kerberos is enabled, several security-related properties are automatically added to the custom Kafka broker configuration.
Kafka console commands running as super user kafka
By default, only the users listed in super.users have permission to access the Kafka resources. The default value of super.users is User:kafka.
The Kafka home directory in IOP is /usr/iop/current/kafka-broker; the console scripts referenced in this article are located under it.
List Kafka service keytab
[kafka@hostname kafka]# klist -k -t /etc/security/keytabs/kafka.service.keytab
Keytab name: FILE:/etc/security/keytabs/kafka.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
Perform kinit to obtain and cache the Kerberos ticket
[kafka@hostname kafka]# kinit -f -k -t /etc/security/keytabs/kafka.service.keytab kafka/hostname.abc.com@IBM.COM
Create a topic
[kafka@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --replication-factor 1 --partitions 1 --topic mytopic
Created topic "mytopic".
Run Kafka producer
[kafka@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic mytopic --producer.config producer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^C
[kafka@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic mytopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[root@hostname kafka]# cat consumer.properties
security.protocol=SASL_PLAINTEXT
Because we ran these commands as the super user kafka, we had access to the Kafka resources without adding any ACLs.
How to add a new user as a super user?
- Update the super.users property in the “Custom kafka-broker” configuration to add additional users as super users. The value is a semicolon-separated list of entries in the format “User:<username>”. For example, to configure the users kafka and kafkatest as super users, set super.users=User:kafka;User:kafkatest.
- This will allow the user to access resources without adding any ACLs.
- Restart Kafka
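Putting the steps above together, the custom kafka-broker configuration would contain a line like the following sketch (the user name kafkatest matches the example later in this article; adapt the value to your environment):

```properties
# Semicolon-separated list of super users; each entry has the form User:<name>
super.users=User:kafka;User:kafkatest
```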
How to add ACLs for new users?
The following example shows how to add ACLs for a new user “kafkatest”.
Create a user kafkatest
[root@hostname kafka]# useradd kafkatest
Note: In the example shown here, the KDC server, Kafka broker, and producer/consumer all run on the same machine. If the KDC server is set up on a different node in your environment, copy the keytab files to /etc/security/keytabs on the nodes where the Kafka producer and consumer run.
Create a principal for kafkatest user
[root@hostname kafka]# kadmin.local
Authenticating as principal kafka/admin@IBM.COM with password.
kadmin.local: addprinc "kafkatest"
Create a Kerberos keytab file
kadmin.local: xst -norandkey -k /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Quit from kadmin
kadmin.local: quit
List and cache the kafkatest Kerberos ticket
[kafkatest@hostname kafka]$ klist -k -t /etc/security/keytabs/kafkatest.keytab
Keytab name: FILE:/etc/security/keytabs/kafkatest.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
[kafkatest@hostname kafka]$ kinit -f -k -t /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Create a topic
[kafkatest@hostname kafka]$ bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --partitions 1 --replication-factor 1 --topic kafka-testtopic
Created topic "kafka-testtopic".
Add write permission for user kafkatest for topic kafka-testtopic:
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Write --authorizer-properties zookeeper.connect=hostname.abc.com:2181
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Run Kafka producer
[kafkatest@hostname kafka]$ bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic kafka-testtopic --producer.config producer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^C
[kafkatest@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Add read permission for user kafkatest for topic kafka-testtopic and consumer group kafkatestgroup
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Read --authorizer-properties zookeeper.connect=hostname.abc.com:2181 --group kafkatestgroup
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Adding ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Run Kafka consumer
[kafkatest@hostname kafka]$ bin/kafka-console-consumer.sh --new-consumer --topic kafka-testtopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[kafkatest@hostname kafka]$ cat consumer.properties
security.protocol=SASL_PLAINTEXT
group.id=kafkatestgroup
Information about the kafka_jaas.conf file:
When Kerberos is enabled in Kafka, this configuration file is passed as a security parameter (-Djava.security.auth.login.config=/usr/iop/current/kafka-broker/conf/kafka_jaas.conf) to the Kafka console scripts.
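When running the console scripts by hand, the same parameter can be supplied through the KAFKA_OPTS environment variable. A minimal sketch, assuming the IOP path shown above:

```shell
# Point the JVM security property at the Kafka JAAS file before
# invoking any of the console scripts in the same shell session.
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/iop/current/kafka-broker/conf/kafka_jaas.conf"
echo "$KAFKA_OPTS"
```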
[root@hostname kafka]# cat /usr/iop/current/kafka-broker/conf/kafka_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/hostname.abc.com@IBM.COM";
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="kafka/hostname.abc.com@IBM.COM";
};
- The KafkaServer section is used by the Kafka broker and for inter-broker communication, for example during topic creation.
- The KafkaClient section is used when running Kafka producers or consumers. Because KafkaClient uses the ticket cache in this example, we have to run kinit to cache the Kerberos ticket before running the producer or consumer.
- The Client section is used for the ZooKeeper connection; Kafka ACLs are stored in ZooKeeper.
What to do when the SASL username (operating system user name) is different from the principal name
Generally, the SASL user name is the same as the primary of the Kerberos principal. If it is not, add the property sasl.kerberos.principal.to.local.rules to the Kafka broker configuration to map the principal name to the user name. The following example adds a mapping from the principal name ambari-qa-bh to the operating system user name ambari-qa.
When Kerberos is enabled from Ambari, the principal generated for the user “ambari-qa” has the form ambari-qa-[cluster name]. In the example shown here the cluster name is “bh”, so the principal is generated as ambari-qa-bh.
[root@hostname kafka]# klist -k -t /etc/security/keytabs/smokeuser.headless.keytab
Keytab name: FILE:/etc/security/keytabs/smokeuser.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
For the user ambari-qa, we need to add the following rule:
RULE:[1:$1@$0](ambari-qa-bh@IBM.COM)s/.*/ambari-qa/
- Add sasl.kerberos.principal.to.local.rules in custom Kafka-broker configuration.
- Restart Kafka.
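The s/.*/ambari-qa/ portion of the rule above is a sed-style substitution; a quick way to sanity-check what it does to the matched principal:

```shell
# The RULE's s/.*/ambari-qa/ rewrites whatever matched to the fixed
# user name, just as this equivalent sed call does:
echo "ambari-qa-bh" | sed 's/.*/ambari-qa/'
# prints: ambari-qa
```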
More information about the mapping between principal names and user names can be found in the auth_to_local documentation.
Kafka SSL with ACLs
In this section, we will see how to work with ACLs when SSL is enabled. For information on how to enable SSL in Kafka, follow the steps in the Setup SSL and Enable SSL sections of the Kafka security blog.
There is a known issue in IOP 4.2 when SSL is enabled in Kafka together with ACLs; follow the steps in the technote to resolve it.
Add the following properties in the custom kafka-broker section to enable authorization with SSL.
- authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
- super.users=User:CN=hostname.ibm.com,OU=iop,O=ibm,L=san jose,ST=california,C=US
Restart the Kafka service from Ambari UI for the changes to take effect.
Note: Add the SSL user name produced by the command below — the same command used to generate the key and certificate for the broker — to the list of super users in Kafka. This allows the Kafka broker itself to access all Kafka resources; as mentioned above, only super users have unrestricted access by default.
[root@hostname security]# keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: iop
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=iop, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
By default, the SSL user name has the form “CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown”. This can be changed by adding the property principal.builder.class to the Kafka broker configuration in the Ambari UI, setting its value to a class that implements the PrincipalBuilder interface (org.apache.kafka.common.security.auth.PrincipalBuilder).
How to add ACLs for a new SSL user?
Create a topic
[root@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.ibm.com:2181 --replication-factor 1 --partitions 1 --topic ssltopic
Created topic "ssltopic".
Add write permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Write --authorizer-properties zookeeper.connect=hostname.ibm.com:2181
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
The user name used above comes from the output of the command below, which generates the key and certificate for the Kafka client (producer/consumer).
[root@hostname security]# keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: biginsights
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=biginsights, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
Run Kafka producer
[root@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.ibm.com:6667 --topic ssltopic --producer.config client-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^C
[root@hostname kafka]# cat client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
Add read permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic and consumer group ssl-group
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Read --authorizer-properties zookeeper.connect=hostname.ibm.com:2181 --group ssl-group
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Adding ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic ssltopic --from-beginning --bootstrap-server hostname.ibm.com:6667 --consumer.config client-consumer-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^CProcessed a total of 3 messages
[root@hostname kafka]# cat client-consumer-ssl.properties
group.id=ssl-group
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
How to give everyone permission to access a resource if no ACLs are set for the resource?
- Add allow.everyone.if.no.acl.found=true in the “Custom kafka-broker” configuration.
- Restart Kafka
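Combined with the authorizer property shown earlier, the custom kafka-broker section would then look roughly like this sketch (not a complete broker configuration):

```properties
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Fall back to "allow" when a resource has no ACLs attached.
allow.everyone.if.no.acl.found=true
```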
Conclusion:
This blog described how to configure ACLs in Kafka when SSL and Kerberos are enabled in IOP 4.2. For more information, see the Kafka documentation.