Versions

Spring Booot is wrong; Spring Boot 2.1.5.RELEASE
Kafka 2.2

Pitfalls encountered

  1. Match the Kafka version to the Spring Boot version! The spring-kafka client managed by a current Spring Boot is built against a recent kafka-clients API, and an old broker may not accept it.
  2. After starting ZooKeeper on the cloud server and then Kafka, the broker log showed no errors (only the EndPoint log lines looked slightly odd), but the Spring Boot project connecting to Kafka kept printing the WARN message "Connection to node -1 could not be established. Broker may not be available.", which means the client never managed to connect to Kafka.
  3. The Spring Boot console threw an exception saying the IP address was invalid.

Telnet to port 9092 on the cloud server got no response. Yet the port was whitelisted in the server's security group, and netstat showed 9092 being listened on. What was going on?
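The same TCP-level check that telnet does can also be scripted. Below is a minimal Java sketch (the IP placeholder matches the one used later in this post); note that a successful TCP connect only proves the port is open, not that Kafka's metadata is correct:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortProbe {
        public static void main(String[] args) {
            // Equivalent of `telnet 47.XX.XX.XX 9092`: try to open a TCP connection.
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("47.XX.XX.XX", 9092), 3000);
                System.out.println("TCP connect to 9092 succeeded");
            } catch (IOException e) {
                System.out.println("TCP connect failed: " + e.getMessage());
            }
        }
    }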

The root cause was the Kafka configuration file: because of it, port 9092 was not being listened on correctly, and the invalid-IP exception meant the broker had to be bound to the Kafka server's own IP address.

The three configuration items listeners, advertised.host.name and host.name below are the crucial ones; they solved all of my problems!

advertised.host.name must be set to the Kafka server's IP address! If it stays localhost and the application runs on a different machine from Kafka, the client will not be able to connect.

Modify the Kafka server configuration file (config/server.properties) as follows:

    ############################# Server Basics #############################

    # The id of the broker. This must be set to a unique integer for each broker.
    # Globally unique id for this broker; must not be duplicated.
    broker.id=0

    ############################# Socket Server Settings #############################

    # The port the broker listens on
    listeners=PLAINTEXT://:9092
    # The address handed to clients; it must be the server's real IP address!
    advertised.host.name=47.XX.XX.XX
    host.name=localhost

    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

    # The number of threads that the server uses for receiving requests from the network and sending responses to the network
    num.network.threads=3

    # The number of threads that the server uses for processing requests, which may include disk I/O
    num.io.threads=8

    # The send buffer (SO_SNDBUF) used by the socket server
    socket.send.buffer.bytes=102400

    # The receive buffer (SO_RCVBUF) used by the socket server
    socket.receive.buffer.bytes=102400

    # The maximum size of a request that the socket server will accept (protection against OOM)
    socket.request.max.bytes=104857600

    ############################# Log Basics #############################

    # A comma separated list of directories under which to store log files
    log.dirs=/root/mysoftware/kafka_2.12-2.2.0/logs

    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=1

    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1

    ############################# Internal Topic Settings #############################
    # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
    # For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1

    ############################# Log Flush Policy #############################

    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    #    1. Durability: Unflushed data may be lost if you are not using replication.
    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.

    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000

    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000

    ############################# Log Retention Policy #############################

    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.

    # The minimum age of a log file to be eligible for deletion due to age
    log.retention.hours=168

    # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
    # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
    #log.retention.bytes=1073741824

    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    log.segment.bytes=1073741824

    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000

    ############################# Zookeeper #############################

    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=localhost:2181

    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000

    ############################# Group Coordinator Settings #############################

    # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
    # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
    # The default value for this is 3 seconds.
    # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
    # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
    group.initial.rebalance.delay.ms=0
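With the broker restarted on this configuration, it is worth checking not only that port 9092 accepts TCP connections but also that the metadata Kafka hands back advertises the public address, because that metadata is exactly what advertised.host.name controls (in Kafka 2.x this property is deprecated in favour of advertised.listeners=PLAINTEXT://47.XX.XX.XX:9092, which has the same effect). A minimal sketch using the AdminClient from the kafka-clients jar that spring-kafka already brings in:

    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class BrokerCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "47.XX.XX.XX:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // The returned nodes carry the *advertised* host:port. If this prints
                // localhost, clients on other machines will fail right after bootstrap.
                admin.describeCluster().nodes().get(10, TimeUnit.SECONDS).forEach(node ->
                        System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port()));
            }
        }
    }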

Code

pom.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.1.5.RELEASE</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>xy.study</groupId>
        <artifactId>kafka-demo</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>kafka-demo</name>
        <description>Kafka demo project for Spring Boot</description>

        <properties>
            <java.version>1.8</java.version>
        </properties>

        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka</artifactId>
            </dependency>

            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-devtools</artifactId>
                <scope>runtime</scope>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>fastjson</artifactId>
                <version>1.2.47</version>
            </dependency>

            <dependency>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <optional>true</optional>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>

        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>

    </project>

application.properties

    #============== kafka ===================
    # Kafka broker address; a comma-separated list of host:port pairs is allowed
    spring.kafka.bootstrap-servers=47.XX.XX.XX:9092

    #=============== producer =======================

    spring.kafka.producer.retries=0
    # Upper bound, in bytes, of one batch of records (not a message count)
    spring.kafka.producer.batch-size=16384
    # Total memory, in bytes, the producer may use for buffering
    spring.kafka.producer.buffer-memory=33554432

    # Serializers for message keys and values
    spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
    spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

    #=============== consumer =======================
    # Default consumer group id
    spring.kafka.consumer.group-id=consumer-group-test

    spring.kafka.consumer.auto-offset-reset=earliest
    spring.kafka.consumer.enable-auto-commit=true
    spring.kafka.consumer.auto-commit-interval=100

    # Deserializers for message keys and values
    spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
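Spring Boot builds the producer from these properties automatically, so no Java configuration is required. Still, it can help to see what the auto-configuration does with them; the sketch below expresses the same producer settings as explicit spring-kafka beans (class and bean names are my own):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;

    @Configuration
    public class KafkaProducerConfig {

        @Bean
        public ProducerFactory<String, String> producerFactory() {
            Map<String, Object> props = new HashMap<>();
            // Mirrors application.properties above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "47.XX.XX.XX:9092");
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return new DefaultKafkaProducerFactory<>(props);
        }

        @Bean
        public KafkaTemplate<String, String> kafkaTemplate() {
            return new KafkaTemplate<>(producerFactory());
        }
    }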

Producer and consumer

    import com.alibaba.fastjson.JSONObject;
    import lombok.extern.slf4j.Slf4j;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.stereotype.Component;
    import org.springframework.util.concurrent.ListenableFuture;
    import org.springframework.util.concurrent.ListenableFutureCallback;

    @Component
    @Slf4j
    public class KafkaProducer {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        public void sendADotaHero() {
            DotaHero dotaHero = new DotaHero("Faceless Void", "Agility", "Male");

            // Send the hero as a JSON string; the result is delivered asynchronously.
            ListenableFuture<SendResult<String, String>> future =
                    kafkaTemplate.send(KafkaTopic.A_DOTA_HERO, JSONObject.toJSONString(dotaHero));

            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onFailure(Throwable throwable) {
                    log.error("kafka sendMessage error, throwable = {}, topic = {}, data = {}",
                            throwable, KafkaTopic.A_DOTA_HERO, dotaHero);
                }

                @Override
                public void onSuccess(SendResult<String, String> sendResult) {
                    log.info("kafka sendMessage success topic = {}, data = {}",
                            KafkaTopic.A_DOTA_HERO, dotaHero);
                }
            });

            log.info("kafka sendMessage end");
        }
    }
    import lombok.extern.slf4j.Slf4j;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Slf4j
    @Component
    public class KafkaConsumer {

        // The value arrives as the raw JSON String, since StringDeserializer is configured.
        @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
        public void kafkaConsumer(ConsumerRecord<String, String> consumerRecord) {
            log.info("kafkaConsumer: topic = {}, msg = {}", consumerRecord.topic(), consumerRecord.value());
        }
    }
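Because the value deserializer is StringDeserializer, the listener above receives the raw JSON string rather than a DotaHero. If the listener should work with the domain object directly, mapping it back is one fastjson call; a sketch with a made-up class name, using the fastjson dependency already in the pom:

    import com.alibaba.fastjson.JSON;
    import lombok.extern.slf4j.Slf4j;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Slf4j
    @Component
    public class DotaHeroJsonConsumer {

        @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
        public void consume(ConsumerRecord<String, String> record) {
            // The producer sent JSONObject.toJSONString(dotaHero); parse it back.
            DotaHero hero = JSON.parseObject(record.value(), DotaHero.class);
            log.info("parsed hero: topic = {}, hero = {}", record.topic(), hero);
        }
    }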
    import java.util.ArrayList;
    import java.util.List;
    import lombok.AllArgsConstructor;
    import lombok.Data;
    import lombok.NoArgsConstructor;

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class DotaHero {

        private String name;
        private String kind;
        private String sex;

        /**
         * Returns a list of distinct heroes.
         * @return three different DotaHero instances
         */
        public static List<DotaHero> bulidDiffObjectList() {
            List<DotaHero> list = new ArrayList<>();
            list.add(new DotaHero("Shadow Fiend", "Agility", "Male"));
            list.add(new DotaHero("Drow Ranger", "Agility", "Female"));
            list.add(new DotaHero("Mars", "Strength", "Male"));
            return list;
        }
    }
    public class KafkaTopic {

        public static final String A_DOTA_HERO = "a_dota_hero";

        private KafkaTopic() {
        }
    }
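The example relies on the broker creating a_dota_hero on first use (auto.create.topics.enable defaults to true). To create the topic explicitly with chosen partition and replica counts instead, Spring Boot's auto-configured KafkaAdmin creates any NewTopic beans at startup; a sketch (the configuration class is my own):

    import org.apache.kafka.clients.admin.NewTopic;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class KafkaTopicConfig {

        // Created at startup by the auto-configured KafkaAdmin:
        // 1 partition, replication factor 1 (a single-broker setup).
        @Bean
        public NewTopic aDotaHeroTopic() {
            return new NewTopic(KafkaTopic.A_DOTA_HERO, 1, (short) 1);
        }
    }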

Testing

With the Spring Boot application running, run the test to drive the producer:

    import java.time.Clock;
    import lombok.extern.slf4j.Slf4j;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.junit4.SpringRunner;

    @Slf4j
    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class KafkaDemoApplicationTests {

        @Autowired
        private KafkaProducer kafkaProducer;

        private Clock clock = Clock.systemDefaultZone();
        private long begin;
        private long end;

        @Before
        public void init() {
            begin = clock.millis();
        }

        @Test
        public void send() {
            kafkaProducer.sendADotaHero();
        }

        @After
        public void end() {
            end = clock.millis();
            log.info("Spent {} millis.", end - begin);
        }
    }
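One caveat about this test: kafkaTemplate.send() is asynchronous, and a JUnit JVM can exit before the success callback ever runs. When a test must be certain the record reached the broker, blocking on the returned future is the simplest option. A sketch, assuming a KafkaTemplate<String, String> field is @Autowired into the test class (the timeout value is arbitrary):

    import java.util.concurrent.TimeUnit;

    @Test
    public void sendAndWaitForAck() throws Exception {
        // send() returns a ListenableFuture; get() blocks until the broker
        // acknowledges the record or the timeout expires.
        kafkaTemplate.send(KafkaTopic.A_DOTA_HERO, "{\"name\":\"Mars\"}")
                .get(10, TimeUnit.SECONDS);
    }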
