I use Spark Streaming to read data from Kafka for further processing, via KafkaUtils' createDirectStream() method. This method does not automatically save the topic/partition offsets to ZooKeeper; the commit logic has to be written in the application code (how to save the offsets was covered in a separate post).
After deleting a Kafka topic that had already been consumed and then recreating a topic with the same name, this approach threw a "numRecords must not be negative" exception.
The detailed stack trace appears in a screenshot in the original post (not reproduced here).

It is an IllegalArgumentException: the number of records in the RDD must not be negative.
The rest of this post analyzes the scenario in which this problem occurs and describes a fix.
Exception Analysis
Pinning down numRecords
First, let's locate where the exception is thrown and its rough cause. The stack trace points to line 38 of org.apache.spark.streaming.scheduler.StreamInputInfo (defined in InputInfoTracker.scala), where the code is:

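In the Spark 1.x source this check is the require inside the StreamInputInfo case class (a paraphrased sketch; the exact line number varies with the Spark version):

// org.apache.spark.streaming.scheduler.StreamInputInfo (Spark 1.x, approximate)
case class StreamInputInfo(
    inputStreamId: Int, numRecords: Long, metadata: Map[String, Any] = Map.empty) {
  require(numRecords >= 0, "numRecords must not be negative")
}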
Line 38 checks whether numRecords is greater than or equal to 0 and throws the exception when that condition is not met, so at this point numRecords < 0.
The documentation of numRecords reads:
numRecords: the number of records in a batch
So the record count computed for the current RDD must be wrong.
numRecords is a parameter passed when constructing StreamInputInfo; combining this with the stack trace, the place where the InputInfo is constructed can be found in DirectKafkaInputDStream:

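In DirectKafkaInputDStream.compute() the batch RDD's count is reported to the InputInfoTracker roughly as follows (a simplified sketch of the Spark 1.x source, not the verbatim code):

// DirectKafkaInputDStream.compute(), Spark 1.x (simplified sketch)
val inputInfo = StreamInputInfo(id, rdd.count)
ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)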
So numRecords is the value of rdd.count().
How rdd.count is computed
From the analysis above, rdd.count() returns a negative value, so we need to look at how the RDD is generated.
The code that creates the RDD is also in DirectKafkaInputDStream:

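The batch RDD is a KafkaRDD built from currentOffsets (the fromOffsets) and the computed untilOffsets (again a simplified sketch of the Spark 1.x source):

// DirectKafkaInputDStream.compute(), Spark 1.x (simplified sketch)
val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
val rdd = KafkaRDD[K, V, U, T, R](
  context.sparkContext, kafkaParams, currentOffsets, untilOffsets, messageHandler)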
Following the code from there, the logic behind rdd.count can be found in KafkaRDD.scala:

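KafkaRDD overrides count() to sum the sizes of its offset ranges (sketch of the Spark 1.x source):

// KafkaRDD.scala, Spark 1.x (sketch)
override def count(): Long = offsetRanges.map(_.count).sum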
How offsetRanges is computed
Definition of offsetRanges:
offsetRanges: offset ranges that define the Kafka data belonging to this RDD
The count of a Kafka partition's offset range is computed around line 40 of KafkaRDDPartition:
def count(): Long = untilOffset - fromOffset 
fromOffset: per-topic/partition Kafka offset defining the (inclusive) starting point of the batch 
untilOffset: per-topic/partition Kafka offset defining the (exclusive) ending point of the batch
fromOffset comes from the offsets saved in ZooKeeper;
untilOffset is computed at line 145 of DirectKafkaInputDStream:
val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
This fetches the latest leader offsets and then clamps them with spark.streaming.kafka.maxRatePerPartition to get the maximum allowed untilOffsets. Crucially, for a freshly recreated topic that contains no data yet, untilOffset will be 0.
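For reference, the clamp step looks roughly like this in the Spark 1.x source (a simplified sketch; maxMessagesPerPartition is derived from spark.streaming.kafka.maxRatePerPartition and the batch interval, and details vary between Spark versions):

// DirectKafkaInputDStream.clamp(), Spark 1.x (simplified sketch)
protected def clamp(
    leaderOffsets: Map[TopicAndPartition, LeaderOffset]): Map[TopicAndPartition, LeaderOffset] = {
  maxMessagesPerPartition.map { mmp =>
    leaderOffsets.map { case (tp, lo) =>
      // cap each partition at currentOffsets(tp) + mmp; note that nothing here
      // prevents the result from being smaller than currentOffsets(tp) itself
      tp -> lo.copy(offset = Math.min(currentOffsets(tp) + mmp, lo.offset))
    }
  }.getOrElse(leaderOffsets)
}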
Root Cause Summary
When a topic is deleted, its offset information in ZooKeeper is not cleaned up, so when the direct Kafka streaming job is started again it still reads the old offset (call it old_offset) and uses it as fromOffset.
After the topic is recreated, the untilOffset computation above yields 0 (or a value > 0 if the new topic already contains data).
The restarted direct-streaming job therefore computes the RDD's numRecords as:
numRecords = untilOffset - fromOffset (= untilOffset - old_offset)
Whenever untilOffset < old_offset this exception occurs, and for a freshly recreated topic that is very likely. For example, if the old topic had been consumed up to offset 1000 (saved in ZooKeeper) and the new topic is empty, numRecords = 0 - 1000 = -1000 < 0, which fails the require check.
Solution
Approach
Based on the analysis above, when determining the fromOffsets for the direct Kafka stream, compare each fromOffset with its untilOffset; when untilOffset < fromOffset, reset fromOffset to the initial offset 0.
Steps
  • Fetch each topic/partition's fromOffset from ZooKeeper (see the linked post for how to do this)
  • Use SimpleConsumer to fetch each partition's lastOffset (the untilOffset)
  • Compare each partition's lastOffset with its fromOffset
  • When lastOffset < fromOffset, set fromOffset to 0
These steps correct the fromOffset values.

Core Code

Code that fetches the last offset of each Kafka topic partition:
package org.frey.example.utils.kafka;

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.cluster.Broker;
import kafka.common.TopicAndPartition;
import kafka.javaapi.*;
import kafka.javaapi.consumer.SimpleConsumer;

import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * KafkaOffsetTool
 *
 * @author angel
 * @date 2016/4/11
 */
public class KafkaOffsetTool {

  private static KafkaOffsetTool instance;
  final int TIMEOUT = 100000;
  final int BUFFERSIZE = 64 * 1024;

  private KafkaOffsetTool() {
  }

  public static synchronized KafkaOffsetTool getInstance() {
    if (instance == null) {
      instance = new KafkaOffsetTool();
    }
    return instance;
  }

  /**
   * Get the latest (last) offset of every partition of the given topics.
   */
  public Map<TopicAndPartition, Long> getLastOffset(String brokerList, List<String> topics,
      String groupId) {
    Map<TopicAndPartition, Long> topicAndPartitionLongMap = Maps.newHashMap();

    Map<TopicAndPartition, Broker> topicAndPartitionBrokerMap =
        KafkaOffsetTool.getInstance().findLeader(brokerList, topics);

    for (Map.Entry<TopicAndPartition, Broker> topicAndPartitionBrokerEntry : topicAndPartitionBrokerMap
        .entrySet()) {
      // get leader broker
      Broker leaderBroker = topicAndPartitionBrokerEntry.getValue();

      SimpleConsumer simpleConsumer = new SimpleConsumer(leaderBroker.host(), leaderBroker.port(),
          TIMEOUT, BUFFERSIZE, groupId);

      long readOffset = getTopicAndPartitionLastOffset(simpleConsumer,
          topicAndPartitionBrokerEntry.getKey(), groupId);

      topicAndPartitionLongMap.put(topicAndPartitionBrokerEntry.getKey(), readOffset);
    }
    return topicAndPartitionLongMap;
  }

  /**
   * Find the leader broker of every TopicAndPartition.
   *
   * @param brokerList broker list, e.g. "host1:9092,host2:9092"
   * @param topics topics to look up
   * @return map from TopicAndPartition to its leader broker
   */
  private Map<TopicAndPartition, Broker> findLeader(String brokerList, List<String> topics) {
    // get broker's url array
    String[] brokerUrlArray = getBorkerUrlFromBrokerList(brokerList);
    // get broker's port map
    Map<String, Integer> brokerPortMap = getPortFromBrokerList(brokerList);

    // map of TopicAndPartition -> leader broker
    Map<TopicAndPartition, Broker> topicAndPartitionBrokerMap = Maps.newHashMap();

    for (String broker : brokerUrlArray) {
      SimpleConsumer consumer = null;
      try {
        // new instance of simple Consumer
        consumer = new SimpleConsumer(broker, brokerPortMap.get(broker), TIMEOUT, BUFFERSIZE,
            "leaderLookup" + new Date().getTime());

        TopicMetadataRequest req = new TopicMetadataRequest(topics);
        TopicMetadataResponse resp = consumer.send(req);

        List<TopicMetadata> metaData = resp.topicsMetadata();
        for (TopicMetadata item : metaData) {
          for (PartitionMetadata part : item.partitionsMetadata()) {
            TopicAndPartition topicAndPartition =
                new TopicAndPartition(item.topic(), part.partitionId());
            topicAndPartitionBrokerMap.put(topicAndPartition, part.leader());
          }
        }
      } catch (Exception e) {
        e.printStackTrace();
      } finally {
        if (consumer != null)
          consumer.close();
      }
    }
    return topicAndPartitionBrokerMap;
  }

  /**
   * Get the last offset of one topic partition.
   *
   * @param consumer SimpleConsumer connected to the partition's leader
   * @param topicAndPartition topic/partition to query
   * @param clientName client id used in the request
   * @return the latest offset, or 0 on error
   */
  private long getTopicAndPartitionLastOffset(SimpleConsumer consumer,
      TopicAndPartition topicAndPartition, String clientName) {
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();

    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(
        kafka.api.OffsetRequest.LatestTime(), 1));

    OffsetRequest request = new OffsetRequest(
        requestInfo, kafka.api.OffsetRequest.CurrentVersion(),
        clientName);

    OffsetResponse response = consumer.getOffsetsBefore(request);
    if (response.hasError()) {
      System.out
          .println("Error fetching offset data from the broker. Reason: "
              + response.errorCode(topicAndPartition.topic(), topicAndPartition.partition()));
      return 0;
    }
    long[] offsets = response.offsets(topicAndPartition.topic(), topicAndPartition.partition());
    return offsets[0];
  }

  /**
   * Get all broker hosts from the broker list.
   *
   * @param brokerlist broker list, e.g. "host1:9092,host2:9092"
   * @return array of broker hosts
   */
  private String[] getBorkerUrlFromBrokerList(String brokerlist) {
    String[] brokers = brokerlist.split(",");
    for (int i = 0; i < brokers.length; i++) {
      brokers[i] = brokers[i].split(":")[0];
    }
    return brokers;
  }

  /**
   * Get the mapping from broker host to its port.
   *
   * @param brokerlist broker list, e.g. "host1:9092,host2:9092"
   * @return map from broker host to port
   */
  private Map<String, Integer> getPortFromBrokerList(String brokerlist) {
    Map<String, Integer> map = new HashMap<String, Integer>();
    String[] brokers = brokerlist.split(",");
    for (String item : brokers) {
      String[] itemArr = item.split(":");
      if (itemArr.length > 1) {
        map.put(itemArr[0], Integer.parseInt(itemArr[1]));
      }
    }
    return map;
  }

  public static void main(String[] args) {
    List<String> topics = Lists.newArrayList();
    topics.add("ys");
    topics.add("bugfix");
    Map<TopicAndPartition, Long> topicAndPartitionLongMap =
        KafkaOffsetTool.getInstance()
            .getLastOffset("broker001:9092,broker002:9092", topics, "my.group.id");
    for (Map.Entry<TopicAndPartition, Long> entry : topicAndPartitionLongMap.entrySet()) {
      System.out.println(entry.getKey().topic() + "-" + entry.getKey().partition() + ":" + entry.getValue());
    }
  }
}

Code that corrects the offsets:
/** Offset correction starts here */
// fetch the lastOffset of every topic/partition
Map<TopicAndPartition, Long> topicAndPartitionLongMap =
    KafkaOffsetTool.getInstance().getLastOffset(kafkaParams.get("metadata.broker.list"),
        topicList, "my.group.id");

// iterate over every topic/partition in fromOffsets (the offsets read from ZooKeeper)
for (Map.Entry<TopicAndPartition, Long> topicAndPartitionLongEntry : fromOffsets.entrySet()) {
  // when fromOffset > lastOffset
  if (topicAndPartitionLongEntry.getValue() >
      topicAndPartitionLongMap.get(topicAndPartitionLongEntry.getKey())) {
    // reset fromOffset to the initial offset 0
    topicAndPartitionLongEntry.setValue(0L);
  }
}
/** Offset correction ends here */
