Producer Section

Once instantiated, the Producer exposes send methods for delivering data to a specified topic and partition, as well as a destroy method to be called on shutdown.

Interface KafkaProducer.java

import java.util.List;
import java.util.Properties;

public interface KafkaProducer<D> {

    default void init() {
    }

    default void destroy() {
    }

    boolean send(String topic, D data);

    boolean send(String topic, Integer partition, D data);

    boolean send(String topic, List<D> dataList);

    boolean send(String topic, Integer partition, List<D> dataList);

    /**
     * Default configuration.
     */
    default Properties getDefaultProps() {
        Properties props = new Properties();
        props.put("acks", "1");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 32 * 1024 * 1024L);
        return props;
    }
}

Parameter notes

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
// The acks config controls the criteria under which requests are considered complete. The "all" setting we have specified will result in blocking on the full commit of the record, the slowest but most durable setting.
props.put("acks", "all");
// If the request fails, the producer can automatically retry, though since we have specified retries as 0 it won't. Enabling retries also opens up the possibility of duplicates (see the documentation on message delivery semantics for details).
props.put("retries", 0);
// The producer maintains buffers of unsent records for each partition. These buffers are of a size specified by the batch.size config. Making this larger can result in more batching, but requires more memory (since we will generally have one of these buffers for each active partition).
props.put("batch.size", 16384);
// By default a buffer is available to send immediately even if there is additional unused space in the buffer. However if you want to reduce the number of requests you can set linger.ms to something greater than 0. This will instruct the producer to wait up to that number of milliseconds before sending a request in hope that more records will arrive to fill up the same batch.
props.put("linger.ms", 1);
// The total bytes of memory the producer can use to buffer records. When the buffer is exhausted, additional send calls will block; once max.block.ms is exceeded, a TimeoutException is thrown.
props.put("buffer.memory", 33554432);
// The key.serializer and value.serializer instruct how to turn the key and value objects the user provides with their ProducerRecord into bytes. You can use the included ByteArraySerializer or StringSerializer for simple string or byte types.
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Implementation KafkaProducerImpl.java

import com.google.common.base.Strings;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;
import java.util.Properties;

public class KafkaProducerImpl<D> implements KafkaProducer<D> {

    private static final Logger logger = LoggerFactory.getLogger(KafkaProducerImpl.class);

    private final Producer<D, D> producer;

    public KafkaProducerImpl(String servers, String serializer) {
        Properties props = this.getDefaultProps();
        props.put("bootstrap.servers", servers);
        props.put("key.serializer", serializer);
        props.put("value.serializer", serializer);
        producer = new org.apache.kafka.clients.producer.KafkaProducer<>(props);
    }

    @Override
    public void destroy() {
        if (producer != null) {
            producer.close();
        }
    }

    @Override
    public boolean send(String topic, D data) {
        boolean isSuc = true;
        try {
            producer.send(new ProducerRecord<>(topic, data));
        } catch (Exception e) {
            isSuc = false;
            logger.error(String.format("KafkaProducerImpl send error. topic:[%s], data:[%s]", topic, data), e);
        }
        return isSuc;
    }

    @Override
    public boolean send(String topic, Integer partition, D data) {
        boolean isSuc = true;
        try {
            producer.send(new ProducerRecord<>(topic, partition, null, data));
        } catch (Exception e) {
            isSuc = false;
            logger.error(String.format("KafkaProducerImpl send error. topic:[%s], partition:[%s], data:[%s]", topic, partition, data), e);
        }
        return isSuc;
    }

    @Override
    public boolean send(String topic, List<D> dataList) {
        boolean isSuc = true;
        try {
            if (dataList != null) {
                dataList.forEach(item -> producer.send(new ProducerRecord<>(topic, item)));
            }
        } catch (Exception e) {
            isSuc = false;
            logger.error(String.format("KafkaProducerImpl send error. topic:[%s], dataList:[%s]", topic, dataList), e);
        }
        return isSuc;
    }

    @Override
    public boolean send(String topic, Integer partition, List<D> dataList) {
        boolean isSuc = true;
        try {
            if (dataList != null) {
                dataList.forEach(item -> producer.send(new ProducerRecord<>(topic, partition, null, item)));
            }
        } catch (Exception e) {
            isSuc = false;
            logger.error(String.format("KafkaProducerImpl send error. topic:[%s], partition:[%s], dataList:[%s]", topic, partition, dataList), e);
        }
        return isSuc;
    }
}
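Usage example

A minimal usage sketch of the producer wrapper above. The broker address, topic name, and String payload type are illustrative assumptions, not values from the original article; the constructor takes the bootstrap servers and serializer class name as shown in KafkaProducerImpl.

// Illustrative values: broker address and topic name are assumptions.
KafkaProducer<String> producer = new KafkaProducerImpl<>(
        "localhost:9092",
        "org.apache.kafka.common.serialization.StringSerializer");

producer.send("test-topic", "hello kafka");            // let the partitioner pick the partition
producer.send("test-topic", 0, "hello partition 0");   // send to an explicit partition

producer.destroy();  // closes the underlying client, waiting for in-flight sends to complete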

Consumer Section

Once instantiated, the Consumer maintains a list of KafkaConsumerListener instances, subscribes to the specified topic, and runs a blocking polling loop; whenever records arrive, each listener is invoked in turn to process them.

Interface KafkaConsumer.java

import java.util.Properties;

public interface KafkaConsumer {

    default void init() {
    }

    default void destroy() {
    }

    void start();

    /**
     * Default configuration.
     */
    default Properties getDefaultProps() {
        Properties props = new Properties();
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        return props;
    }
}

Parameter notes

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
// Setting enable.auto.commit means that offsets are committed automatically with a frequency controlled by the config auto.commit.interval.ms.
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
// The deserializer settings specify how to turn bytes into objects. For example, by specifying string deserializers, we are saying that our record's key and value will just be simple strings.
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// This consumer is subscribing to the topics foo and bar as part of a group of consumers called test as configured with group.id.
consumer.subscribe(Arrays.asList("foo", "bar"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}

Implementation KafkaConsumerImpl.java

import com.google.common.base.Strings;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.*;

public class KafkaConsumerImpl<K, V> implements KafkaConsumer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumerImpl.class);

    private final List<KafkaConsumerListener<K, V>> consumerListeners = new ArrayList<>();
    private Consumer<K, V> consumer;
    private volatile boolean running = true;
    private final int waitingTimeout = 100;

    public KafkaConsumerImpl(String servers, String topic, String groupId, String deserializer) {
        Properties props = this.getDefaultProps();
        props.put("group.id", groupId);
        props.put("bootstrap.servers", servers);
        props.put("key.deserializer", deserializer);
        props.put("value.deserializer", deserializer);
        consumer = new org.apache.kafka.clients.consumer.KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
    }

    public void setConsumerListeners(List<KafkaConsumerListener<K, V>> consumerListeners) {
        synchronized (this) {
            this.consumerListeners.clear();
            if (null != consumerListeners && 0 != consumerListeners.size()) {
                this.consumerListeners.addAll(consumerListeners);
            }
        }
    }

    public void addConsumerListener(KafkaConsumerListener<K, V> consumerListener) {
        synchronized (this) {
            if (null != consumerListener && !this.consumerListeners.contains(consumerListener)) {
                this.consumerListeners.add(consumerListener);
            }
        }
    }

    public void removeConsumerListener(KafkaConsumerListener<K, V> consumerListener) {
        synchronized (this) {
            if (null != consumerListener) {
                this.consumerListeners.remove(consumerListener);
            }
        }
    }

    @Override
    public void init() {
        this.start();
    }

    @Override
    public void destroy() {
        running = false;
    }

    @Override
    public void start() {
        new Thread(() -> {
            while (running) {
                ConsumerRecords<K, V> records = consumer.poll(waitingTimeout);
                for (ConsumerRecord<K, V> record : records) {
                    K key = record.key();
                    if (key == null) {
                        consumerListeners.forEach(listener -> listener.consume(record.value()));
                    } else {
                        consumerListeners.forEach(listener -> listener.consume(record.key(), record.value()));
                    }
                }
            }
            // KafkaConsumer is not thread-safe: it must be closed from the same thread that polls it,
            // otherwise it throws a ConcurrentModificationException.
            if (consumer != null) {
                try {
                    logger.info("start to close consumer.");
                    consumer.close();
                } catch (Exception e) {
                    logger.error("close kafka consumer error.", e);
                }
                consumer = null;
            }
        }).start();
    }
}

Interface KafkaConsumerListener.java

public interface KafkaConsumerListener<K, V> {

    void consume(V value);

    default void consume(K key, V value) {
        consume(value);
    }
}
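Usage example

A minimal wiring sketch for the consumer wrapper and listener above. The broker address, topic, group id, and the println listener are illustrative assumptions, not values from the original article; the constructor arguments follow the KafkaConsumerImpl signature shown earlier.

// Illustrative values: broker address, topic, and group id are assumptions.
KafkaConsumerImpl<String, String> consumer = new KafkaConsumerImpl<>(
        "localhost:9092",
        "test-topic",
        "test-group",
        "org.apache.kafka.common.serialization.StringDeserializer");

// KafkaConsumerListener has a single abstract method, so a lambda works as a listener.
consumer.addConsumerListener(value -> System.out.println("received: " + value));

consumer.init();     // starts the background polling thread
// ... application runs ...
consumer.destroy();  // stops the loop; the polling thread then closes the underlying KafkaConsumer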

