Fixing the pitfalls of connecting a Spring Boot project to Kafka on a remote server, with a complete example
Versions
| Component | Version |
| --- | --- |
| Spring Boot | 2.1.5.RELEASE |
| Kafka | 2.2 |
Pitfalls encountered
- Use a Kafka version that matches your Spring Boot version: with the latest Spring Boot you need a recent Kafka!
- After I started ZooKeeper on the cloud server and then Kafka, the broker log showed no errors, only a slightly odd-looking EndPoint entry. But the Spring Boot project kept logging the WARN message "Connection to node -1 could not be established. Broker may not be available.", which means it never reached Kafka.
- The Spring Boot console also threw an exception saying the IP address was invalid.
I telnet-ed to port 9092 on the cloud server and got no response, even though the port was open in the security group and netstat showed 9092 being listened on. What was going on?
It turned out to be the Kafka configuration file: the broker was not exposing 9092 to remote clients correctly, and the "invalid IP address" error came down to having to advertise the Kafka server's own IP address to clients.
Note: the three connection-related settings below (listeners, advertised.host.name and host.name) are the important ones; they solved all of my problems!
advertised.host.name must be set to the Kafka server's IP address! If it stays localhost and the application runs on a different machine than Kafka, the client cannot connect. (On current Kafka versions this property is deprecated; advertised.listeners=PLAINTEXT://<public-ip>:9092 is the preferred equivalent.)
Modify the Kafka server configuration file (config/server.properties) as follows:
```properties
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the broker listens on
listeners=PLAINTEXT://:9092

# The address published to clients; this MUST be the Kafka server's public IP address!
advertised.host.name=47.XX.XX.XX
host.name=localhost

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/root/mysoftware/kafka_2.-2.2./logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
```
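To double-check that the broker now advertises a reachable address, you can run a small stand-alone check with the Kafka AdminClient (already on the classpath via spring-kafka) from the machine that hosts the Spring Boot application. This is only a sketch: the class name BrokerConnectivityCheck is made up for illustration, and the broker address is the masked IP from this article.

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerConnectivityCheck {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Replace with the public IP of your Kafka server (masked here as in the article).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "47.XX.XX.XX:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 10_000);

        try (AdminClient admin = AdminClient.create(props)) {
            // The nodes returned here show the host/port the broker *advertises*;
            // if this still prints localhost, the advertised address is wrong.
            admin.describeCluster().nodes().get()
                 .forEach(node -> System.out.println("Broker " + node.id() + " -> " + node.host() + ":" + node.port()));
        }
    }
}
```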
Code
pom.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>xy.study</groupId>
    <artifactId>kafka-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>kafka-demo</name>
    <description>Kafka demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.47</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```
application.properties
```properties
#============== kafka ===================
# Kafka broker addresses (comma-separated if there is more than one)
spring.kafka.bootstrap-servers=47.XX.XX.XX:9092

#=============== producer =======================
spring.kafka.producer.retries=0
# Producer batch size in bytes
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# Serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=============== consumer =======================
# Default consumer group id
spring.kafka.consumer.group-id=consumer-group-test
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
# Deserializers for the message key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```
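For reference, the producer side of these properties corresponds roughly to the explicit Java configuration below. This is only a sketch of what Spring Boot's auto-configuration already does for you (the class name KafkaProducerConfig is made up, and the broker address is the same masked IP); you do not need it if the properties above are present.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        // Same masked public IP as in application.properties.
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "47.XX.XX.XX:9092");
        config.put(ProducerConfig.RETRIES_CONFIG, 0);
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```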
Producer and consumer
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendADotaHero() {
        DotaHero dotaHero = new DotaHero("虚空假面", "敏捷", "男");
        // Send the hero as a JSON string; the callback logs success or failure asynchronously.
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send(KafkaTopic.A_DOTA_HERO, JSONObject.toJSONString(dotaHero));
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.error("kafka sendMessage error, throwable = {}, topic = {}, data = {}",
                        throwable, KafkaTopic.A_DOTA_HERO, dotaHero);
            }

            @Override
            public void onSuccess(SendResult<String, String> sendResult) {
                log.info("kafka sendMessage success topic = {}, data = {}", KafkaTopic.A_DOTA_HERO, dotaHero);
            }
        });
        log.info("kafka sendMessage end");
    }
}
```

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Component
public class KafkaConsumer {

    // The value arrives as the JSON string sent above, since StringDeserializer is configured.
    @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
    private void kafkaConsumer(ConsumerRecord<String, String> consumerRecord) {
        log.info("kafkaConsumer: topic = {}, msg = {}", consumerRecord.topic(), consumerRecord.value());
    }
}
```

```java
import java.util.ArrayList;
import java.util.List;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class DotaHero {

    private String name;
    private String kind;
    private String sex;

    /**
     * Returns a list of distinct heroes.
     */
    public static List<DotaHero> bulidDiffObjectList() {
        List<DotaHero> list = new ArrayList<>();
        list.add(new DotaHero("影魔", "敏捷", "男"));
        list.add(new DotaHero("小黑", "敏捷", "女"));
        list.add(new DotaHero("马尔斯", "力量", "男"));
        return list;
    }
}
```

```java
public class KafkaTopic {

    public static final String A_DOTA_HERO = "a_dota_hero";

    private KafkaTopic() {
    }
}
```
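One optional extra that is not part of the original project: instead of relying on the broker's auto.create.topics.enable setting, spring-kafka can create the topic at application startup via a NewTopic bean. A minimal sketch follows; the class name KafkaTopicConfig is made up, and the partition and replica counts are illustrative values for a single-broker demo.

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaTopicConfig {

    // Spring Boot auto-configures a KafkaAdmin bean, which creates declared NewTopic beans on startup.
    @Bean
    public NewTopic aDotaHeroTopic() {
        // 1 partition and replication factor 1 are enough for a single-broker setup.
        return new NewTopic(KafkaTopic.A_DOTA_HERO, 1, (short) 1);
    }
}
```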
Test
After the Spring Boot application is up, run the test below to trigger the producer:
```java
import java.time.Clock;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@RunWith(SpringRunner.class)
@SpringBootTest
public class KafkaDemoApplicationTests {

    @Autowired
    private KafkaProducer kafkaProducer;

    private Clock clock = Clock.systemDefaultZone();
    private long begin;
    private long end;

    @Before
    public void init() {
        begin = clock.millis();
    }

    @Test
    public void send() {
        kafkaProducer.sendADotaHero();
    }

    @After
    public void end() {
        end = clock.millis();
        log.info("Spent {} millis.", end - begin);
    }
}
```