Note: Kafka consumption is organized by topic, and within a single consumer group (groupId) each message in a topic is delivered to only one consumer of that group.

1. Add the following Kafka dependency jars to pom.xml:

    <!-- Kafka -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>1.1.1.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.10.0.1</version>
    </dependency>

2. Add the following configuration to application.properties:

    # Read raw data from Kafka
    # Broker list (ip:port) of the cluster to consume from
    kafka.consumer.servers=IP:9092,IP:9092
    # Commit offsets automatically
    kafka.consumer.enable.auto.commit=true
    # Session timeout (ms)
    kafka.consumer.session.timeout=20000
    # Auto-commit interval (ms)
    kafka.consumer.auto.commit.interval=100
    # latest: consume only newly produced records, do not replay the topic from the beginning
    kafka.consumer.auto.offset.reset=latest
    # Topic to consume
    kafka.consumer.topic=result
    # Consumer group
    kafka.consumer.group.id=test
    # Number of consumer threads
    kafka.consumer.concurrency=10

    # Store protocol-converted data back to Kafka
    # Broker list (ip:port) of the cluster to produce to
    kafka.producer.servers=IP:9092,IP:9092
    # Topic to produce to
    kafka.producer.topic=result
    kafka.producer.retries=0
    kafka.producer.batch.size=4096
    kafka.producer.linger=1
    kafka.producer.buffer.memory=40960

3. Producer configuration class:

    package com.mapbar.track_storage.config;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.EnableKafka;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Kafka producer configuration.
     * @author Lvjiapeng
     */
    @Configuration
    @EnableKafka
    public class KafkaProducerConfig {

        @Value("${kafka.producer.servers}")
        private String servers;
        @Value("${kafka.producer.retries}")
        private int retries;
        @Value("${kafka.producer.batch.size}")
        private int batchSize;
        @Value("${kafka.producer.linger}")
        private int linger;
        @Value("${kafka.producer.buffer.memory}")
        private int bufferMemory;

        public Map<String, Object> producerConfigs() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
            props.put(ProducerConfig.RETRIES_CONFIG, retries);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
            props.put(ProducerConfig.LINGER_MS_CONFIG, linger);
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return props;
        }

        public ProducerFactory<String, String> producerFactory() {
            return new DefaultKafkaProducerFactory<>(producerConfigs());
        }

        @Bean
        public KafkaTemplate<String, String> kafkaTemplate() {
            return new KafkaTemplate<String, String>(producerFactory());
        }
    }
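
Since KafkaTemplate.send() is asynchronous (it returns a ListenableFuture<SendResult>), it can be useful to attach a callback that reports whether the broker accepted each record. Below is a minimal sketch under the configuration above; TrackProducer is an illustrative name, not part of the original project:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.stereotype.Component;
    import org.springframework.util.concurrent.ListenableFuture;
    import org.springframework.util.concurrent.ListenableFutureCallback;

    @Component
    public class TrackProducer {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        /** Send asynchronously and log the assigned partition/offset, or the error. */
        public void send(String topic, String value) {
            ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, value);
            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onSuccess(SendResult<String, String> result) {
                    System.out.println("sent to partition " + result.getRecordMetadata().partition()
                            + ", offset " + result.getRecordMetadata().offset());
                }

                @Override
                public void onFailure(Throwable ex) {
                    System.err.println("send failed: " + ex.getMessage());
                }
            });
        }
    }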

4. Consumer configuration class:

    package com.mapbar.track_storage.config;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.EnableKafka;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.config.KafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Kafka consumer configuration.
     * @author Lvjiapeng
     */
    @Configuration
    @EnableKafka
    public class KafkaConsumerConfig {

        @Value("${kafka.consumer.servers}")
        private String servers;
        @Value("${kafka.consumer.enable.auto.commit}")
        private boolean enableAutoCommit;
        @Value("${kafka.consumer.session.timeout}")
        private String sessionTimeout;
        @Value("${kafka.consumer.auto.commit.interval}")
        private String autoCommitInterval;
        @Value("${kafka.consumer.group.id}")
        private String groupId;
        @Value("${kafka.consumer.auto.offset.reset}")
        private String autoOffsetReset;
        @Value("${kafka.consumer.concurrency}")
        private int concurrency;

        @Bean
        public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            factory.setConcurrency(concurrency);
            factory.getContainerProperties().setPollTimeout(1500);
            return factory;
        }

        public ConsumerFactory<String, String> consumerFactory() {
            return new DefaultKafkaConsumerFactory<>(consumerConfigs());
        }

        public Map<String, Object> consumerConfigs() {
            Map<String, Object> propsMap = new HashMap<>();
            propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
            propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
            propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
            propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
            propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
            return propsMap;
        }

        /**
         * Register the Kafka listener from step 6 as a bean. RawDataListener is
         * deliberately not annotated with @Component; registering it both ways
         * would start two listeners in the same group.
         */
        @Bean
        public RawDataListener listener() {
            return new RawDataListener();
        }
    }
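
The properties above enable auto-commit, so offsets are committed on a timer regardless of whether processing succeeded. If you set kafka.consumer.enable.auto.commit=false, the container can be switched to manual acknowledgment instead. A hedged sketch under that assumption (ManualAckListener is an illustrative name; the extra ack-mode line goes into kafkaListenerContainerFactory() above):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.stereotype.Component;

    @Component
    public class ManualAckListener {

        // Requires kafka.consumer.enable.auto.commit=false and, in the factory:
        // factory.getContainerProperties().setAckMode(
        //         AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);

        /**
         * The offset is committed only when acknowledge() is called, so a crash
         * before that line means the record is redelivered on restart.
         */
        @KafkaListener(topics = {"${kafka.consumer.topic}"})
        public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
            System.out.println(record.value());
            ack.acknowledge();
        }
    }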

5. Test the producer:

    package com.mapbar.track_storage.controller;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;

    @RequestMapping(value = "/kafka")
    @Controller
    public class ProducerController {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        @RequestMapping(value = "/producer", method = RequestMethod.GET)
        public void consume(HttpServletRequest request, HttpServletResponse response) throws IOException {
            String value = "{\"code\":200,\"dataVersion\":\"17q1\",\"message\":\"\",\"id\":\"364f79f28eea48eefeca8c85477a10d3\",\"source\":\"didi\",\"tripList\":[{\"subTripList\":[{\"startTimeStamp\":1519879598,\"schemeList\":[{\"distance\":0.0,\"ids\":\"94666702,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519879598,\"subTripId\":0},{\"startTimeStamp\":1519879727,\"schemeList\":[{\"distance\":1395.0,\"ids\":\"94666729,7298838,7291709,7291706,88613298,88613297,7297542,7297541,94698785,94698786,94698778,94698780,94698779,94698782,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519879812,\"subTripId\":1},{\"startTimeStamp\":1519879836,\"schemeList\":[{\"distance\":0.0,\"ids\":\"54123007,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519879904,\"subTripId\":2},{\"startTimeStamp\":1519879959,\"schemeList\":[{\"distance\":0.0,\"ids\":\"54190443,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519879959,\"subTripId\":3},{\"startTimeStamp\":1519880088,\"schemeList\":[{\"distance\":2885.0,\"ids\":\"94698824,94698822,94698789,94698786,54123011,54123012,54123002,94698763,94698727,94698722,94698765,54123006,54123004,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519880300,\"subTripId\":4},{\"startTimeStamp\":1519880393,\"schemeList\":[{\"distance\":2398.0,\"ids\":\"7309441,7303680,54123061,54123038,7309478,7309477,94698204,94698203,94698273,94698274,94698288,94698296,94698295,94698289,94698310,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519880636,\"subTripId\":5},{\"startTimeStamp\":1519881064,\"schemeList\":[{\"distance\":35.0,\"ids\":\"7309474,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519881204,\"subTripId\":6},{\"startTimeStamp\":1519881204,\"schemeList\":[{\"distance\":28.0,\"ids\":\"7309476,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519881266,\"subTripId\":7},{\"startTimeStamp\":1519881291,\"schemeList\":[{\"distance\":463.0,\"ids\":\"7303683,\",\"schemeId\":0,\"linkList\":[{\"score\":72,\"distance\":1,\"gpsList\":[{\"origLonLat\":\"116.321343,40.43242\",\"grabLonLat\":\"112.32312,40.32132\",\"timestamp\":1515149926000}]}]}],\"endTimeStamp\":1519881329,\"subTripId\":8}],\"startTimeStamp\":1519879350,\"unUseTime\":1201,\"totalTime\":2049,\"endTimeStamp\":1519881399,\"tripId\":0}]}";
            // Send 500 copies of the sample payload to the "result" topic
            for (int i = 1; i <= 500; i++) {
                kafkaTemplate.send("result", value);
            }
        }
    }
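
Hitting GET /kafka/producer sends the 500 sample records. One nit: the topic "result" is hardcoded even though application.properties already defines kafka.producer.topic. A sketch of a variant that injects the property instead (the class name and the /producer2 path are illustrative, not from the original):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    @RequestMapping(value = "/kafka")
    @Controller
    public class ConfigDrivenProducerController {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        // Reuse the topic declared in application.properties instead of hardcoding it
        @Value("${kafka.producer.topic}")
        private String topic;

        @RequestMapping(value = "/producer2", method = RequestMethod.GET)
        public void produce() {
            kafkaTemplate.send(topic, "{\"code\":200}");
        }
    }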

6. Test the consumer:

    package com.mapbar.track_storage.config; // assumed: same package as KafkaConsumerConfig, which instantiates this class

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.log4j.Logger;
    import org.springframework.kafka.annotation.KafkaListener;

    import java.io.IOException;

    /**
     * Kafka listener. Registered via the @Bean method in KafkaConsumerConfig;
     * annotating it @Component as well would start a second listener in the same group.
     * @author shangzz
     */
    public class RawDataListener {

        private Logger logger = Logger.getLogger(RawDataListener.class);

        /**
         * Consume records in real time: each record produced to the topic is
         * picked up and consumed as soon as it arrives.
         * @param record the incoming record
         * @throws IOException
         */
        @KafkaListener(topics = {"${kafka.consumer.topic}"})
        public void listen(ConsumerRecord<?, ?> record) throws IOException {
            String value = (String) record.value();
            System.out.println(value);
        }
    }
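
The net.sf.json import in the original listing hints that the real listener parsed the JSON payload rather than just printing it. A sketch of a parsing helper that listen could call, assuming the json-lib dependency is on the classpath (TripPayloadParser and logSummary are illustrative names):

    import net.sf.json.JSONArray;
    import net.sf.json.JSONObject;

    /** Helper that RawDataListener.listen could call instead of printing raw JSON. */
    public class TripPayloadParser {

        /** Extract a couple of fields from the payload produced in step 5. */
        public static void logSummary(String value) {
            JSONObject json = JSONObject.fromObject(value);
            String id = json.getString("id");                 // e.g. "364f79f28eea48..."
            JSONArray tripList = json.getJSONArray("tripList");
            System.out.println("id=" + id + ", trips=" + tripList.size());
        }
    }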

Summary:

①  Once the producer configuration class is in place, @Autowired injects a KafkaTemplate, and its send method produces messages.

②  Once the consumer configuration class is in place, annotating a method with @KafkaListener(topics = {"${kafka.consumer.topic}"}) and giving it a ConsumerRecord<?, ?> parameter is all it takes to consume the topic automatically.

③  Further Kafka settings follow the same pattern: add, change, or remove entries in application.properties and make the matching change in the configuration classes.

Part 2: How can one topic be consumed by multiple consumer groups?

Don't configure the groupId through the properties file. If you look closely at the @KafkaListener annotation, you'll notice a containerFactory attribute, which lets you specify the listener container factory to use.

Example:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.config.KafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

    @Configuration
    public class KafkaConsumerConfig {

        private String brokers = "192.168.52.130:9092,192.168.52.131:9092,192.168.52.133:9092";

        private String group1 = "test1";
        private String group2 = "test2";

        @Bean
        public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory1() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
            factory.setConsumerFactory(consumerFactory1());
            factory.setConcurrency(4);
            factory.getContainerProperties().setPollTimeout(4000);
            return factory;
        }

        @Bean
        public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory2() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
            factory.setConsumerFactory(consumerFactory2());
            factory.setConcurrency(4);
            factory.getContainerProperties().setPollTimeout(4000);
            return factory;
        }

        /** Settings shared by both consumer factories; group.id is set per factory below. */
        public Map<String, Object> getCommonProperties() {
            Map<String, Object> properties = new HashMap<String, Object>();
            properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
            properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
            properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
            properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
            properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            return properties;
        }

        public ConsumerFactory<String, String> consumerFactory1() {
            Map<String, Object> properties = getCommonProperties();
            properties.put(ConsumerConfig.GROUP_ID_CONFIG, group1);
            return new DefaultKafkaConsumerFactory<String, String>(properties);
        }

        public ConsumerFactory<String, String> consumerFactory2() {
            Map<String, Object> properties = getCommonProperties();
            properties.put(ConsumerConfig.GROUP_ID_CONFIG, group2);
            return new DefaultKafkaConsumerFactory<String, String>(properties);
        }
    }

Finally, reference the desired container factory by name in @KafkaListener:

    @KafkaListener(id = "test1", topics = "test-topic", containerFactory = "kafkaListenerContainerFactory1")
    @KafkaListener(id = "test2", topics = "test-topic", containerFactory = "kafkaListenerContainerFactory2")

In newer versions of spring-kafka, the @KafkaListener annotation also has a groupId attribute, so the group can be set directly on the listener instead.
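
Putting the two factories to work, a sketch of a listener class with one method per group; every record published to test-topic is delivered to both methods, because test1 and test2 are different groups:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class TwoGroupListener {

        // group test1, via the factory built around consumerFactory1()
        @KafkaListener(id = "test1", topics = "test-topic", containerFactory = "kafkaListenerContainerFactory1")
        public void listenGroup1(ConsumerRecord<?, ?> record) {
            System.out.println("group test1 got: " + record.value());
        }

        // group test2, via the factory built around consumerFactory2()
        @KafkaListener(id = "test2", topics = "test-topic", containerFactory = "kafkaListenerContainerFactory2")
        public void listenGroup2(ConsumerRecord<?, ?> record) {
            System.out.println("group test2 got: " + record.value());
        }
    }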

--------------------------------------------------------------------------------------------------
Reposted from: https://blog.csdn.net/lv_1093964643/article/details/83177280
