Configuring Apache Kafka Security
This topic describes additional steps you can take to ensure the safety and integrity of your data stored in Apache Kafka, with features available in CDK 2.0.0 and higher Powered By Apache Kafka:
Deploying SSL for Kafka
Kafka allows clients to connect over SSL. By default, SSL is disabled, but can be turned on as needed.
Step 1. Generating Keys and Certificates for Kafka Brokers
First, generate the key and the certificate for each machine in the cluster using the Java keytool utility. See Creating Certificates.
keystore is the keystore file that stores your certificate. validity is the valid time of the certificate in days.
$ keytool -keystore {keystore} -alias localhost -validity {validity} -genkey
Make sure that the common name (CN) matches the fully qualified domain name (FQDN) of your server. The client compares the CN with the DNS domain name to ensure that it is connecting to the correct server.
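To avoid interactive prompts and guarantee that the CN matches the FQDN, the same command can be run non-interactively. This is a sketch, assuming the JDK keytool is on the PATH; the password (test1234) and the DN fields are placeholder values to substitute with your own:

```shell
# Requires the JDK keytool; skip gracefully if it is not installed.
command -v keytool >/dev/null 2>&1 || { echo "keytool not found"; exit 0; }
cd "$(mktemp -d)"
# CN must match the broker's FQDN; fall back to a placeholder name.
FQDN=$(hostname -f 2>/dev/null || echo kafka1.example.com)
# Generate the key pair non-interactively (no prompts for DN or passwords).
keytool -keystore server.keystore.jks -alias localhost \
  -validity 365 -genkey -keyalg RSA \
  -storepass test1234 -keypass test1234 \
  -dname "CN=${FQDN},OU=org,O=org,L=Santa Clara,ST=CA,C=US"
# Confirm the private key entry was created.
keytool -list -keystore server.keystore.jks -storepass test1234
```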
Step 2. Creating Your Own Certificate Authority
You have generated a public-private key pair for each machine, and a certificate to identify the machine. However, the certificate is unsigned, so an attacker can create a certificate and pretend to be any machine. Sign certificates for each machine in the cluster to prevent unauthorized access.
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
The generated CA is a public-private key pair and certificate used to sign other certificates.
Add the generated CA to the client truststores so that clients can trust this CA:
keytool -keystore {client.truststore.jks} -alias CARoot -import -file {ca-cert}
Step 3. Signing the Certificate
Now you can sign all certificates generated by step 1 with the CA generated in step 2.
- Export the certificate from the keystore:
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
- Sign it with the CA:
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
- Import both the certificate of the CA and the signed certificate into the keystore:
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
The definitions of the variables are as follows:
- keystore: the location of the keystore
- ca-cert: the certificate of the CA
- ca-key: the private key of the CA
- ca-password: the passphrase of the CA
- cert-file: the exported, unsigned certificate of the server
- cert-signed: the signed certificate of the server
The following Bash script demonstrates the steps described above. One of the commands assumes a password of test1234, so either use that password or edit the command before running it.
#!/bin/bash
#Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
#Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
#Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
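Before importing cert-signed into the keystore, you can confirm offline that it chains to your CA. The sketch below stands in for the keytool-based flow above, reproducing the CA creation and signing steps with openssl alone in a scratch directory; the subject names are placeholders:

```shell
# Requires openssl; skip gracefully if it is not installed.
command -v openssl >/dev/null 2>&1 || { echo "openssl not found"; exit 0; }
cd "$(mktemp -d)"
# Throwaway CA (the real one comes from Step 2).
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -nodes -subj "/C=US/ST=CA/L=Santa Clara/O=org/CN=test-ca"
# Server key and CSR, standing in for the cert-file exported in Step 3.
openssl req -new -newkey rsa:2048 -nodes -keyout server-key -out cert-file \
  -subj "/C=US/ST=CA/L=Santa Clara/O=org/CN=kafka1.example.com"
# Sign the CSR with the CA, as in Step 3.
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file \
  -out cert-signed -days 365 -CAcreateserial
# "cert-signed: OK" confirms the certificate chains to the CA.
openssl verify -CAfile ca-cert cert-signed
```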
Step 4. Configuring Kafka Brokers
Kafka Brokers support listening for connections on multiple ports. If SSL is enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports are required.
- In Cloudera Manager, click Kafka > Instances, and then click on "Kafka Broker" > Configurations > Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties. Enter the following information:
listeners=PLAINTEXT://<kafka-broker-host-name>:9092,SSL://<kafka-broker-host-name>:9093
advertised.listeners=PLAINTEXT://<kafka-broker-host-name>:9092,SSL://<kafka-broker-host-name>:9093
where kafka-broker-host-name is the FQDN of the broker that you selected from the Instances page in Cloudera Manager. The sample configuration above uses the PLAINTEXT and SSL protocols for the SSL-enabled brokers. For information about other supported security protocols, see Using Kafka Supported Protocols.
- Repeat the above step for all the brokers. The advertised.listeners configuration is needed so that external clients can connect to the brokers.
- Deploy the above configurations and perform a rolling restart of the Kafka service from Cloudera Manager.
- Turn on SSL for the Kafka service by turning on the ssl_enabled configuration for the Kafka CSD.
- Set security.inter.broker.protocol to SSL if Kerberos is disabled; otherwise, set it to SASL_SSL. In the same configuration snippet, point each broker to its keystore and truststore:
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
Other configuration settings might also be needed, depending on your requirements:
- ssl.client.auth=none: Other options for client authentication are required and requested; with requested, clients without certificates can still connect. The use of requested is discouraged, as it provides a false sense of security, and misconfigured clients can still connect.
- ssl.cipher.suites: A cipher suite is a named combination of authentication, encryption, MAC, and a key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. This list is empty by default.
- ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1: Provide a list of SSL protocols that your brokers accept from clients.
- ssl.keystore.type=JKS
- ssl.truststore.type=JKS
To enable SSL for inter-broker communication, add the following line to the broker properties file. The default value is PLAINTEXT. See Using Kafka Supported Protocols.
security.inter.broker.protocol=SSL
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If you need stronger algorithms (for example, AES with 256-bit keys), you must obtain the JCE Unlimited Strength Jurisdiction Policy Files and install them in the JDK/JRE. For more information, see the JCA Providers Documentation.
After the restart, check the server.log on each broker to confirm that both endpoints are registered, for example:
with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)
To check whether the server keystore and truststore are set up properly, run the following command:
openssl s_client -debug -connect localhost:9093 -tls1
Note: TLSv1 should be listed under ssl.enabled.protocols.
In the output of this command, you should see the server certificate:
-----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=John Smith
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com
If the certificate does not appear, or if there are any other error messages, your keystore is not set up properly.
Step 5. Configuring Kafka Clients
SSL is supported only for the new Kafka Producer and Consumer APIs. The configurations for SSL are the same for both the producer and consumer.
If client authentication is not required in the broker, the following shows a minimal configuration example:
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234
If client authentication is required, a keystore must be created as in step 1, and you must also configure the following properties:
ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
- ssl.provider (Optional). The name of the security provider used for SSL connections. Default is the default security provider of the JVM.
- ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC, and a key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
- ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. This property should list at least one of the protocols configured on the broker side.
- ssl.truststore.type=JKS
- ssl.keystore.type=JKS
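Putting the minimal client settings together, the configuration file might be created as follows. This is a sketch: the truststore path and password come from the examples in this topic, and the broker host (broker1.example.com) is a placeholder:

```shell
cd "$(mktemp -d)"
# Write a minimal SSL client configuration (no client authentication).
# Path and password are the example values used in this topic.
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234
EOF
# A console producer would then be started against the SSL port (9093 in
# Step 4); this requires the Kafka CLI tools, so it is shown as a comment:
#   kafka-console-producer --broker-list broker1.example.com:9093 \
#     --topic test1 --producer.config client-ssl.properties
grep -c '=' client-ssl.properties   # number of settings written
```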
Using Kafka Supported Protocols
Kafka can expose its endpoints over several security protocols, which supports scenarios such as:
- Enabling SSL encryption for client-broker communication while keeping broker-broker communication as PLAINTEXT. Because SSL has performance overhead, you might want to keep inter-broker communication as PLAINTEXT if your Kafka brokers are behind a firewall and not susceptible to network snooping.
- Migrating from a non-secure Kafka configuration to a secure Kafka configuration without requiring downtime. Use a rolling restart and keep security.inter.broker.protocol set to a protocol that is supported by all brokers until all brokers are updated to support the new protocol.
For example, if you have a Kafka cluster that needs to be configured to enable Kerberos without downtime, follow these steps:
- Set security.inter.broker.protocol to PLAINTEXT.
- Update the Kafka service configuration to enable Kerberos.
- Perform a rolling restart.
- Set security.inter.broker.protocol to SASL_PLAINTEXT.
Protocol | SSL | Kerberos
---|---|---
PLAINTEXT | No | No
SSL | Yes | No
SASL_PLAINTEXT | No | Yes
SASL_SSL | Yes | Yes
These protocols can be defined for broker-to-client interaction and for broker-to-broker interaction. security.inter.broker.protocol allows the broker-to-broker communication protocol to be different than the broker-to-client protocol. It was added to ease the upgrade from non-secure to secure clusters while allowing rolling upgrades.
In most cases, set security.inter.broker.protocol to the protocol you are using for broker-to-client communication. Set security.inter.broker.protocol to a protocol different than the broker-to-client protocol only when you are performing a rolling upgrade from a non-secure to a secure Kafka cluster.
Enabling Kerberos Authentication
CDK 2.0 and higher Powered By Apache Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration. If you do not have a Kerberos server, install it before proceeding. See Enabling Kerberos Authentication Using the Wizard.
If you have already configured the mapping from Kerberos principals to short names using the hadoop.security.auth_to_local HDFS configuration property, configure the same rules for Kafka by adding the sasl.kerberos.principal.to.local.rules property to the Kafka Broker Advanced Configuration Snippet (Safety Valve) using Cloudera Manager. Specify the rules as a comma-separated list.
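For illustration, such a safety-valve entry might look like the following. The rule shown is a hypothetical example that strips the realm from principals in EXAMPLE.COM; replace it with rules matching your own hadoop.security.auth_to_local configuration:

```properties
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//,DEFAULT
```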
To enable Kerberos authentication for Kafka:
- From Cloudera Manager, navigate to Kafka > Configurations. Set SSL client authentication to none. Set Inter Broker Protocol to SASL_PLAINTEXT.
- Click Save Changes.
- Restart the Kafka service.
- Make sure that listeners = SASL_PLAINTEXT is present in the Kafka broker logs (/var/log/kafka/server.log).
- Create a jaas.conf file with the following contents to use with cached Kerberos credentials. (You can modify this to use keytab files instead of cached credentials; to generate keytabs, see Step 6: Get or Create a Kerberos Principal for Each User Account.)
If you use kinit first, use this configuration:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true;
};
If you use a keytab, use this configuration:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
- Create the client.properties file containing the following properties:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
- Test with the Kafka console producer and consumer. To obtain a Kerberos ticket-granting ticket (TGT):
$ kinit <user>
- Verify that your topic exists. (This does not use security features, but it is a best practice.)
$ kafka-topics --list --zookeeper <zkhost>:2181
- Verify that the jaas.conf file is used by setting the environment.
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/jaas.conf"
- Run a Kafka console producer.
$ kafka-console-producer --broker-list <anybroker>:9092 --topic test1
--producer.config client.properties
- Run a Kafka console consumer:
$ kafka-console-consumer --new-consumer --topic test1 --from-beginning
--bootstrap-server <anybroker>:9092 --consumer.config client.properties
Enabling Encryption at Rest
Data encryption is increasingly recognized as an optimal method for protecting data at rest. To encrypt the data stored in Kafka:
- Stop the Kafka service.
- Archive the Kafka data to an alternate location, using TAR or another archive tool.
- Unmount the affected drives.
- Install and configure Navigator Encrypt.
- Expand the TAR archive into the encrypted directories.
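The archive-and-restore portion of these steps can be sketched as follows. This is an illustration in a scratch directory: the kafka-data and encrypted-mount paths are stand-ins, since the real data directory and encrypted mount point depend on your deployment and Navigator Encrypt configuration:

```shell
cd "$(mktemp -d)"
# Stand-ins for the Kafka data directory and the encrypted mount point.
mkdir -p kafka-data encrypted-mount
echo "segment-bytes" > kafka-data/topic-0.log
# Step 2: archive the Kafka data to an alternate location.
tar czf kafka-data.tar.gz kafka-data
# Steps 3-4 (unmount the drives, install and configure Navigator Encrypt)
# happen here in a real deployment.
# Step 5: expand the archive into the encrypted directory.
tar xzf kafka-data.tar.gz -C encrypted-mount
# The restored data is intact: prints segment-bytes.
cat encrypted-mount/kafka-data/topic-0.log
```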
Using Kafka with Sentry Authorization
Starting with CDK 2.1.x on CDH 5.9.x and higher Powered By Apache Kafka, Apache Sentry includes a Kafka binding that you can use to enable authorization in Kafka with Sentry. For more information, see Authorization With Apache Sentry.
Configuring Kafka to Use Sentry Authorization
The following steps describe how to configure Kafka to use Sentry authorization. These steps assume you have installed Kafka and Sentry on your cluster.
For more information, see Installing or Upgrading CDK Powered By Apache Kafka® and Sentry Installation.
To configure Sentry authentication for Kafka:
- Go to Kafka > Configuration.
- Select the checkbox Enable Kerberos Authentication.
- Select a Sentry service in the Kafka service configuration.
- Add super users. Super users can perform any action on any resource in the Kafka cluster. The kafka user is added as a super user by default. Super user requests are authorized without going through Sentry, which provides enhanced performance.
- Select the checkbox Enable Sentry Privileges Caching to enhance performance.
Authorizable Resources
Authorizable resources are resources or entities in a Kafka cluster that require special permissions for a user to be able to perform actions on them. Kafka has four authorizable resources.
- Cluster, which controls who can perform cluster-level operations such as creating or deleting a topic. This can only have one value, kafka-cluster, as one Kafka cluster cannot have more than one cluster resource.
- Topic, which controls who can perform topic-level operations such as producing to and consuming from topics. Its value must exactly match the topic name in the Kafka cluster.
- Consumergroup, which controls who can perform consumergroup-level operations such as joining or describing a consumergroup. Its value must exactly match the group.id of a consumergroup.
- Host, which controls from where specific operations can be performed. Think of this as a way to achieve IP filtering in Kafka. You can set the value of this resource to the wildcard (*), which represents all hosts.
Authorized Actions
You can perform multiple actions on each resource. The following operations are supported by Kafka, though not all actions are valid on all resources.
- ALL, a wildcard action that represents all possible actions on a resource.
- read
- write
- create
- delete
- alter
- describe
- clusteraction
Authorizing Privileges
Privileges define what actions are allowed on a resource. A privilege is represented as a string in Sentry. The following rules apply to a valid privilege.
- Can have at most one Host resource. If you do not specify a Host resource in your privilege string, Host=* is assumed.
- Must have exactly one non-Host resource.
- Must have exactly one action specified at the end of the privilege string.
For example, the following are valid privilege strings:
Host=*->Topic=myTopic->action=ALL
Topic=test->action=ALL
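The rules above can be sketched as a simple check. The function and its grep pattern below are a hypothetical illustration, not part of Sentry: the pattern accepts an optional Host resource, exactly one non-Host resource, and exactly one trailing action:

```shell
# Returns success (exit 0) for privilege strings matching the rules above.
is_valid_privilege() {
  echo "$1" | grep -Eq '^(Host=[^>]+->)?(Cluster|Topic|Consumergroup)=[^>]+->action=[A-Za-z]+$'
}

is_valid_privilege "Host=*->Topic=myTopic->action=ALL" && echo valid
is_valid_privilege "Topic=test->action=ALL" && echo valid
is_valid_privilege "Topic=test" || echo invalid   # no action: rejected
```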
Granting Privileges to a Role
The following examples grant privileges to the role test, so that users in testGroup can create a topic named testTopic and produce to it.
The user executing these commands must be listed in the Sentry parameter sentry.service.allow.connect and must also be a member of a group defined in sentry.service.admin.group.
Before you can assign the test role, you must first create it. To create the test role:
$ kafka-sentry -cr -r test
To confirm that the role was created, list the roles:
$ kafka-sentry -lr
If Sentry privileges caching is enabled, as recommended, newly assigned privileges take some time to appear in the system. The delay is the time-to-live interval of the Sentry privileges cache, which is set using sentry.kafka.caching.ttl.ms. By default, this interval is 30 seconds. For test clusters, use a lower value, such as 1 ms.
- Allow users in testGroup to write to testTopic from localhost, which allows users to produce to testTopic.
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Topic=testTopic->action=write"
- Assign the test role to the group testGroup:
$ kafka-sentry -arg -r test -g testGroup
- Verify that the test role is part of the group testGroup:
$ kafka-sentry -lr -g testGroup
- Create testTopic.
$ kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 \
--partitions 1 --topic testTopic
$ kafka-topics --list --zookeeper localhost:2181
testTopic
- Produce to testTopic. Note that you have to pass a configuration file, producer.properties, with information on JAAS configuration and other Kerberos authentication-related information. See SASL Configuration for Kafka Clients.
$ kafka-console-producer --broker-list localhost:9092 --topic testTopic \
--producer.config producer.properties
This is a message
This is another message
- Grant the create privilege to the test role:
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Cluster=kafka-cluster->action=create"
- Allow users in testGroup to describe testTopic from localhost, which they create and use.
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Topic=testTopic->action=describe"
- Grant the describe privilege on the consumer group testconsumergroup to the test role:
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Consumergroup=testconsumergroup->action=describe"
- Allow users in testGroup to read from a consumer group, testconsumergroup, that they will start and join:
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Consumergroup=testconsumergroup->action=read"
- Allow users in testGroup to read from testTopic from localhost and to consume from testTopic.
$ kafka-sentry -gpr -r test -p "Host=127.0.0.1->Topic=testTopic->action=read"
- Consume from testTopic. Note that you have to pass a configuration file, consumer.properties, with information on JAAS configuration and other Kerberos authentication related information. The configuration file must also specify group.id as testconsumergroup.
$ kafka-console-consumer --new-consumer --topic testTopic --from-beginning --bootstrap-server <anybroker>:9092 --consumer.config consumer.properties
This is a message
This is another message
Troubleshooting
If Kafka requests are failing due to authorization, the following steps can provide insight into the error:
- Make sure you are kinit'd as a user who has privileges to perform an operation.
- Identify which broker is hosting the leader of the partition you are trying to produce to or consume from, because this leader authorizes your request against Sentry. One easy way to debug is to run just one Kafka broker. Change the log level of that broker to DEBUG and restart it.
- Run the Kafka client or Kafka CLI with the required arguments and capture the Kafka log, which should be something like /var/log/kafka/kafka-broker-<HOST_ID>.log on the Kafka broker's host.
- There will be many Jetty logs; filtering them out usually helps reduce noise. Look for log messages from org.apache.sentry.
- Look for the following information in the filtered logs:
- The groups that the user running the Kafka client or CLI belongs to.
- The privileges required for the operation.
- The privileges retrieved from Sentry.
- The result of comparing the required and retrieved privileges.
This log information can provide insight into which privilege is not assigned to a user, causing a particular operation to fail.
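The filtering described above can be sketched as follows, using a fabricated log excerpt in a scratch directory; real broker log messages differ, and the file name follows the /var/log/kafka/kafka-broker-<HOST_ID>.log pattern from the steps above:

```shell
cd "$(mktemp -d)"
# Fabricated excerpt standing in for a real broker log.
cat > kafka-broker-example.log <<'EOF'
DEBUG org.eclipse.jetty.server: request handled
DEBUG org.apache.sentry: required privileges: Topic=testTopic->action=read
DEBUG org.apache.sentry: retrieved privileges from Sentry: []
EOF
# Keep the Sentry messages, drop the Jetty noise.
grep -v jetty kafka-broker-example.log | grep 'org.apache.sentry'
```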