Problem Description

I consulted the official Azure documentation (Send events to a specific partition: https://docs.azure.cn/zh-cn/event-hubs/event-hubs-availability-and-consistency?tabs=java#send-events-to-a-specific-partition) and used the component "azure-spring-cloud-stream-binder-eventhubs" in my project to connect to Event Hubs for sending and consuming message events. On the sending side, a for loop sends messages carrying a sequence number starting from 0, and each message's header specifies a "Partition Key". Messages with the same partition key are sent to the same partition, which should guarantee their order.

However, when these messages are consumed in the consumer project, the sequence numbers printed in the log are not increasing from 0. So the question is: were the messages already sent out of order by the sender? Were they stored out of order after reaching the Event Hub? Or is it the consumer's way of consuming them that makes the printed result appear out of order?

Below is the sender code:

public void testPushMessages(int mcount, String partitionKey) {
    String message = "testMessage";
    for (int i = 0; i < mcount; i++) {
        // Messages sharing the same partition key should land in the same partition.
        source.output().send(MessageBuilder.withPayload(partitionKey + message + i)
                .setHeaderIfAbsent(AzureHeaders.PARTITION_KEY, partitionKey)
                .build());
    }
}
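For context, here is a minimal sketch of the binding that the sender snippet assumes (the legacy annotation-based model); the class name DemoSenderConfiguration is illustrative and not from the original project:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;

// Sketch only: @EnableBinding(Source.class) creates the "output" channel that
// source.output() refers to in testPushMessages(); the consumer side similarly
// assumes @EnableBinding(Sink.class) so that @StreamListener(Sink.INPUT) receives events.
@EnableBinding(Source.class)
public class DemoSenderConfiguration {

    @Autowired
    private Source source;

    // testPushMessages(int mcount, String partitionKey) from the snippet above would live here.
}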

Below is the consumer code:

@StreamListener(Sink.INPUT)
public void onEvent(String message, @Header(AzureHeaders.CHECKPOINTER) Checkpointer checkpointer,
                    @Header(AzureHeaders.RAW_PARTITION_ID) String rawPartitionId,
                    @Header(AzureHeaders.PARTITION_KEY) String partitionKey) {
    checkpointer.success()
            .doOnSuccess(s -> log.info("Message '{}' successfully check pointed.rawPartitionId={},partitionKey={}", message, rawPartitionId, partitionKey))
            .doOnError(s -> log.error("Checkpoint message got exception."))
            .subscribe();
}

Below is the printed log:

......,"data":"Message 'testKey4testMessage1' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage29' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage27' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage26' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage25' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage28' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage14' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage13' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage15' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage5' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage7' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage20' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage19' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage18' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage0' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage9' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage12' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage8' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}

As the log shows, the messages were indeed all sent to the same partition (rawPartitionId=1), but judging by the sequence numbers in the message payloads, they were logged out of order.

Problem Analysis

This behavior is related to the fixedDelay configuration, which sets the fixed delay of the default poller (a periodic trigger). The earlier code sent and received messages driven by this poller, using the Send method. The latest SDK no longer uses that method, so the test should be repeated by sending data with the new SDK.
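For reference, the default poller is typically tuned with the standard Spring Cloud Stream poller properties; the values below are only an illustrative sketch, not taken from the original project:

spring:
  cloud:
    stream:
      poller:
        # Fixed delay between polls of the default poller, in milliseconds;
        # this is the fixedDelay that determines when the source emits messages.
        fixed-delay: 1000
        # Number of messages emitted per poll.
        max-messages-per-poll: 1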

For the new SDK, you can refer to this sample: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-samples/azure-spring-cloud-sample-eventhubs-binder

The SDK version is:

<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>azure-spring-cloud-stream-binder-eventhubs</artifactId>
    <version>2.4.0</version>
</dependency>

After following the official sample and switching from Send to a Supplier for sending messages, repeated tests show that with a partition key specified, the messages are sent in order and consumed in order. The test code and results are below.

Sender code:

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
package com.azure.spring.sample.eventhubs.binder;

import com.azure.spring.integration.core.EventHubHeaders;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

import java.util.function.Supplier;

@Configuration
public class EventProducerConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(EventProducerConfiguration.class);

    private int i = 0;

    @Bean
    public Supplier<Message<String>> supply() {
        return () -> {
            // All messages carry the same partition key, so they go to the same partition.
            String partitionKey = "info";
            LOGGER.info("Send message " + MessageBuilder.withPayload("hello world, " + i)
                    .setHeaderIfAbsent(EventHubHeaders.PARTITION_KEY, partitionKey).build());
            return MessageBuilder.withPayload("hello world, " + i++)
                    .setHeaderIfAbsent(EventHubHeaders.PARTITION_KEY, partitionKey).build();
        };
    }
}

Consumer code:

package com.ywt.demoEventhub;

import com.azure.spring.integration.core.EventHubHeaders;
import com.azure.spring.integration.core.api.reactor.Checkpointer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

import java.util.function.Consumer;

import static com.azure.spring.integration.core.AzureHeaders.CHECKPOINTER;

@Configuration
public class EventConsume {

    private static final Logger LOGGER = LoggerFactory.getLogger(EventConsume.class);

    @Bean
    public Consumer<Message<String>> consume() {
        return message -> {
            Checkpointer checkpointer = (Checkpointer) message.getHeaders().get(CHECKPOINTER);
            LOGGER.info("New message received: '{}', partition key: {}, sequence number: {}, offset: {}, enqueued time: {}",
                    message.getPayload(),
                    message.getHeaders().get(EventHubHeaders.PARTITION_KEY),
                    message.getHeaders().get(EventHubHeaders.SEQUENCE_NUMBER),
                    message.getHeaders().get(EventHubHeaders.OFFSET),
                    message.getHeaders().get(EventHubHeaders.ENQUEUED_TIME)
            );
            checkpointer.success()
                    .doOnSuccess(success -> LOGGER.info("Message '{}' successfully checkpointed", message.getPayload()))
                    .doOnError(error -> LOGGER.error("Exception found", error))
                    .subscribe();
        };
    }
}
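For completeness, the supply()/consume() function beans above are bound to Event Hubs through configuration. The application.yml below is only a sketch loosely modeled on the official sample linked above; the bracketed placeholders and the Azure-specific property names (connection string and checkpoint storage settings) are assumptions that should be verified against the sample for version 2.4.0:

spring:
  cloud:
    azure:
      eventhub:
        # Namespace connection string and checkpoint storage settings (names assumed; verify against the sample)
        connection-string: [eventhub-namespace-connection-string]
        checkpoint-storage-account: [storage-account-name]
        checkpoint-access-key: [storage-account-access-key]
        checkpoint-container: [blob-container-for-checkpoints]
    stream:
      function:
        # Register both the Supplier and the Consumer beans defined above
        definition: supply;consume
      bindings:
        supply-out-0:
          destination: [eventhub-name]
        consume-in-0:
          destination: [eventhub-name]
          group: [consumer-group]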

Log of sending messages:

Log of consuming messages:

References

Azure Spring Cloud Stream Binder for Event Hub Code Sample shared library for Java: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-samples/azure-spring-cloud-sample-eventhubs-binder

How to create a Spring Cloud Stream Binder application with Azure Event Hubs - Add sample code to implement basic event hub functionality: https://docs.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-azure-event-hub#add-sample-code-to-implement-basic-event-hub-functionality

[END]
