1. Kafka provides two consumer APIs

  1. The high-level Consumer API
  2. The SimpleConsumer API
The first, highly abstracted Consumer API is simple and convenient to use, but certain special requirements call for the second, lower-level API. Let's first look at what the lower-level API lets us do (a minimal sketch of the high-level API follows this list):
  • Read a message more than once
  • Consume only a subset of the messages in a partition within a single processing pass
  • Add transaction management to guarantee that a message is processed once and only once
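
For comparison, below is a minimal sketch of the high-level consumer. It assumes the same Kafka 0.8.x Java client as the SimpleConsumer example in section 4 (kafka.consumer.Consumer, kafka.javaapi.consumer.ConsumerConnector); the ZooKeeper address, group id, and topic name are placeholder values, not taken from the original setup. With this API the client library tracks offsets and follows leader changes for you.

package bonree.consumer;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.4.30:2181"); // placeholder ZooKeeper address
        props.put("group.id", "demo-group");                 // placeholder consumer group
        props.put("auto.offset.reset", "smallest");          // start from the earliest offset on first run

        ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // Ask for one stream for the topic; offsets and leader changes are handled by the client
        Map<String, Integer> topicCountMap = Collections.singletonMap("mytopic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topicCountMap);

        ConsumerIterator<byte[], byte[]> it = streams.get("mytopic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message(), "UTF-8"));
        }
        connector.shutdown();
    }
}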

2. What are the drawbacks of using SimpleConsumer?

  • You must track the offset value in your own code (a rough offset-store sketch follows this list)
  • You must find the lead broker for the target Topic Partition
  • You must handle broker changes yourself
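
On the first point: with SimpleConsumer the application owns offset storage entirely. As a rough, hypothetical sketch (not part of the original example), the next fetch offset could be written to a local file after each batch and read back on restart; the class name and file name below are made up for illustration.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical offset store: keeps the next fetch offset in a local file so a
// restarted consumer can resume where it left off. The file name is illustrative.
public class FileOffsetStore {
    private final Path file = Paths.get("mytopic-0.offset");

    public void save(long nextOffset) throws IOException {
        Files.write(file, Long.toString(nextOffset).getBytes("UTF-8"));
    }

    public long loadOrDefault(long defaultOffset) throws IOException {
        if (!Files.exists(file)) {
            return defaultOffset; // e.g. the value returned by getLastOffset(...) in section 4
        }
        return Long.parseLong(new String(Files.readAllBytes(file), "UTF-8").trim());
    }
}

In the section 4 example, save(readOffset) would be called after the message loop, and loadOrDefault(...) would replace the initial getLastOffset(...) call.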

3. Steps for using SimpleConsumer

  1. Find out which of the active brokers is the leader for the target Topic Partition
  2. Find all the replica brokers for that Topic Partition
  3. Build the request
  4. Send the request and fetch the data
  5. Handle leader broker changes
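
In the code example in section 4 below, findLeader() implements steps 1 and 2 (it also records the partition's replica brokers for later failover), getLastOffset() together with the FetchRequest built in run() covers steps 3 and 4, and findNewLeader() handles step 5.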

4. Code example


package bonree.consumer;

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleExample {
    private List<String> m_replicaBrokers = new ArrayList<String>();

    public SimpleExample() {
        m_replicaBrokers = new ArrayList<String>();
    }

    public static void main(String args[]) {
        SimpleExample example = new SimpleExample();
        // Maximum number of messages to read
        long maxReads = Long.parseLong("3");
        // Topic to subscribe to
        String topic = "mytopic";
        // Partition to read from
        int partition = Integer.parseInt("0");
        // IP addresses of the broker nodes (seed brokers for metadata lookups)
        List<String> seeds = new ArrayList<String>();
        seeds.add("192.168.4.30");
        seeds.add("192.168.4.31");
        seeds.add("192.168.4.32");
        // Broker port
        int port = Integer.parseInt("9092");
        try {
            example.run(maxReads, topic, partition, seeds, port);
        } catch (Exception e) {
            System.out.println("Oops:" + e);
            e.printStackTrace();
        }
    }

    public void run(long a_maxReads, String a_topic, int a_partition, List<String> a_seedBrokers, int a_port) throws Exception {
        // Steps 1 and 2: get the metadata (leader and replicas) for the given Topic Partition
        PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
        if (metadata == null) {
            System.out.println("Can't find metadata for Topic and Partition. Exiting");
            return;
        }
        if (metadata.leader() == null) {
            System.out.println("Can't find Leader for Topic and Partition. Exiting");
            return;
        }
        String leadBroker = metadata.leader().host();
        String clientName = "Client_" + a_topic + "_" + a_partition;

        SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
        long readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);
        int numErrors = 0;
        while (a_maxReads > 0) {
            if (consumer == null) {
                consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
            }
            // Steps 3 and 4: build a fetch request starting at readOffset and send it to the leader
            FetchRequest req = new FetchRequestBuilder().clientId(clientName).addFetch(a_topic, a_partition, readOffset, 100000).build();
            FetchResponse fetchResponse = consumer.fetch(req);

            if (fetchResponse.hasError()) {
                numErrors++;
                // Something went wrong!
                short code = fetchResponse.errorCode(a_topic, a_partition);
                System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
                if (numErrors > 5)
                    break;
                if (code == ErrorMapping.OffsetOutOfRangeCode()) {
                    // We asked for an invalid offset. For simple case ask for
                    // the last element to reset
                    readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
                    continue;
                }
                // Step 5: on other errors, assume the leader may have changed and look it up again
                consumer.close();
                consumer = null;
                leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
                continue;
            }
            numErrors = 0;

            long numRead = 0;
            for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
                long currentOffset = messageAndOffset.offset();
                if (currentOffset < readOffset) {
                    System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
                    continue;
                }
                readOffset = messageAndOffset.nextOffset();
                ByteBuffer payload = messageAndOffset.message().payload();

                byte[] bytes = new byte[payload.limit()];
                payload.get(bytes);
                System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));
                numRead++;
                a_maxReads--;
            }

            if (numRead == 0) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                }
            }
        }
        if (consumer != null)
            consumer.close();
    }

    // Ask the broker for the earliest or latest offset available for the partition
    public static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName) {
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);

        if (response.hasError()) {
            System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        return offsets[0];
    }

    /**
     * Find the new leader broker after the old one fails.
     *
     * @param a_oldLeader
     * @param a_topic
     * @param a_partition
     * @param a_port
     * @return String
     * @throws Exception
     */
    private String findNewLeader(String a_oldLeader, String a_topic, int a_partition, int a_port) throws Exception {
        for (int i = 0; i < 3; i++) {
            boolean goToSleep = false;
            PartitionMetadata metadata = findLeader(m_replicaBrokers, a_port, a_topic, a_partition);
            if (metadata == null) {
                goToSleep = true;
            } else if (metadata.leader() == null) {
                goToSleep = true;
            } else if (a_oldLeader.equalsIgnoreCase(metadata.leader().host()) && i == 0) {
                // first time through if the leader hasn't changed give
                // ZooKeeper a second to recover
                // second time, assume the broker did recover before failover,
                // or it was a non-Broker issue
                goToSleep = true;
            } else {
                return metadata.leader().host();
            }
            if (goToSleep) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                }
            }
        }
        System.out.println("Unable to find new leader after Broker failure. Exiting");
        throw new Exception("Unable to find new leader after Broker failure. Exiting");
    }

    // Query each seed broker for topic metadata until the target partition is found,
    // and remember that partition's replica brokers for later failover
    private PartitionMetadata findLeader(List<String> a_seedBrokers, int a_port, String a_topic, int a_partition) {
        PartitionMetadata returnMetaData = null;
        loop: for (String seed : a_seedBrokers) {
            SimpleConsumer consumer = null;
            try {
                consumer = new SimpleConsumer(seed, a_port, 100000, 64 * 1024, "leaderLookup");
                List<String> topics = Collections.singletonList(a_topic);
                TopicMetadataRequest req = new TopicMetadataRequest(topics);
                kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);

                List<TopicMetadata> metaData = resp.topicsMetadata();
                for (TopicMetadata item : metaData) {
                    for (PartitionMetadata part : item.partitionsMetadata()) {
                        if (part.partitionId() == a_partition) {
                            returnMetaData = part;
                            break loop;
                        }
                    }
                }
            } catch (Exception e) {
                System.out.println("Error communicating with Broker [" + seed + "] to find Leader for [" + a_topic + ", " + a_partition + "] Reason: " + e);
            } finally {
                if (consumer != null)
                    consumer.close();
            }
        }
        if (returnMetaData != null) {
            m_replicaBrokers.clear();
            for (kafka.cluster.Broker replica : returnMetaData.replicas()) {
                m_replicaBrokers.add(replica.host());
            }
        }
        return returnMetaData;
    }
}
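
Note that the example asks getLastOffset() for kafka.api.OffsetRequest.EarliestTime(), so each run re-reads the partition from its first available message; passing OffsetRequest.LatestTime() instead starts from new messages only, and an external offset store (like the sketch in section 2) would let a restarted consumer resume where the previous run stopped.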
