filebeat + Kafka + Spark Streaming job error and troubleshooting
// :: WARN RandomBlockReplicationPolicy: Expecting replicas with only peer/s.
// :: WARN BlockManager: Block input-- replicated to only peer(s) instead of peers
// :: ERROR Executor: Exception in task 0.0 in stage 113711.0 (TID )
java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:)
at org.apache.spark.storage.BlockInfo.checkInvariants(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfo.readerCount_$eq(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at scala.collection.Iterator$class.foreach(Iterator.scala:)
at scala.collection.AbstractIterator.foreach(Iterator.scala:)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
// :: WARN TaskSetManager: Lost task 0.0 in stage 113711.0 (TID , localhost, executor driver): java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:)
at org.apache.spark.storage.BlockInfo.checkInvariants(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfo.readerCount_$eq(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at scala.collection.Iterator$class.foreach(Iterator.scala:)
at scala.collection.AbstractIterator.foreach(Iterator.scala:)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
// :: ERROR TaskSetManager: Task in stage 113711.0 failed times; aborting job
// :: ERROR JobScheduler: Error running job streaming job ms.
org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/home/admin/agent/spark/python/lib/pyspark.zip/pyspark/streaming/util.py", line , in call
r = self.func(t, *rdds)
File "/home/admin/agent/spark/python/lib/pyspark.zip/pyspark/streaming/dstream.py", line , in takeAndPrint
taken = rdd.take(num + )
File "/home/admin/agent/spark/python/lib/pyspark.zip/pyspark/rdd.py", line , in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/home/admin/agent/spark/python/lib/pyspark.zip/pyspark/context.py", line , in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/home/admin/agent/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line , in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/home/admin/agent/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line , in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 113711.0 failed times, most recent failure: Lost task 0.0 in stage 113711.0 (TID , localhost, executor driver): java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:)
at org.apache.spark.storage.BlockInfo.checkInvariants(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfo.readerCount_$eq(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$$$anonfun$apply$.apply(BlockInfoManager.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockInfoManager$$anonfun$releaseAllLocksForTask$.apply(BlockInfoManager.scala:)
at scala.collection.Iterator$class.foreach(Iterator.scala:)
at scala.collection.AbstractIterator.foreach(Iterator.scala:)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.GeneratedMethodAccessor55.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:)
at py4j.Gateway.invoke(Gateway.java:)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:)
at py4j.commands.CallCommand.execute(CallCommand.java:)
at py4j.GatewayConnection.run(GatewayConnection.java:)
at java.lang.Thread.run(Thread.java:)
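The assertion that fails is `BlockInfo.checkInvariants` in Spark's `BlockInfoManager`: a block's reader count must stay non-negative, and a block may be locked for reading or for writing, but not both. A minimal Python sketch of that invariant follows (class and field names loosely mirror the Scala source for illustration; this is not Spark's actual API):

```python
# Simplified illustration of the invariant enforced by Spark's
# BlockInfo.checkInvariants (BlockInfoManager.scala). Names are
# illustrative only, not Spark's real API.

NO_WRITER = -1  # sentinel: no task holds the write lock

class BlockInfo:
    def __init__(self):
        self.reader_count = 0         # number of read locks held
        self.writer_task = NO_WRITER  # id of the task holding the write lock

    def check_invariants(self):
        # A block's reader count must be non-negative.
        assert self.reader_count >= 0
        # A block is locked for reading or for writing, never both.
        assert self.reader_count == 0 or self.writer_task == NO_WRITER

    def release_read_lock(self):
        # Releasing a read lock the caller does not actually hold drives
        # the count negative -- the "assertion failed" seen in the log.
        self.reader_count -= 1
        self.check_invariants()

info = BlockInfo()
info.reader_count = 1
info.release_read_lock()      # fine: count drops back to 0

try:
    info.release_read_lock()  # double release: count goes to -1
    double_release_caught = False
except AssertionError:
    double_release_caught = True
```

This suggests the log reflects a lock-accounting race inside the executor at task cleanup (`releaseAllLocksForTask`) rather than a bug in the streaming job's own code.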
Troubleshooting so far:
1. It is [not] caused by the checkpoint directory being on local disk. We set up HDFS and moved the checkpoint there, but the job still died after running for about a day, with the same error as above.
2. To be continued.
Any pointers would be much appreciated.
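For reference, pointing the checkpoint at HDFS instead of local disk looks roughly like this in PySpark (a minimal sketch with placeholder paths, app name, and batch interval; it needs a running Spark and HDFS installation, so it is not runnable standalone):

```python
# Sketch: checkpoint to HDFS rather than file:// so recovery survives
# the local machine. Paths and names are placeholders, not the
# original job's code.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

CHECKPOINT_DIR = "hdfs:///spark/checkpoint"  # HDFS, not a local path

def create_context():
    sc = SparkContext(appName="filebeat-kafka-streaming")
    ssc = StreamingContext(sc, batchDuration=10)
    ssc.checkpoint(CHECKPOINT_DIR)
    # ... build the Kafka DStream and transformations here ...
    return ssc

# On restart, recover from the checkpoint if one exists;
# otherwise build a fresh context.
ssc = StreamingContext.getOrCreate(CHECKPOINT_DIR, create_context)
ssc.start()
ssc.awaitTermination()
```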