Handling Spark Program Errors in IntelliJ IDEA
Error 1:
// :: ERROR Executor: Exception in task 0.0 in stage 0.0 (TID )
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
// :: ERROR TaskSetManager: Task in stage 0.0 failed times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 0.0 in stage 0.0 (TID , localhost, executor driver): java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$.apply(Dataset.scala:)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:)
at org.apache.spark.sql.Dataset.head(Dataset.scala:)
at org.apache.spark.sql.Dataset.take(Dataset.scala:)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at RDD_To_DataFrame$.main(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame.main(RDD_To_DataFrame.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
Caused by: java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Fix: switch the Scala SDK used by IDEA to version 2.10.4.
This problem mainly appears when a Spark program uses a case class: the missing scala.Product.$init$ method means the code was compiled against a different Scala version than the scala-library brought in by the Spark dependencies, so the project's Scala version has to be changed to match.
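To make the version-mismatch point concrete, here is a minimal sketch of the kind of code that produces the stack trace above — the names Person and RDD_To_DataFrame come from the trace, while the fields, input path, and SparkSession setup are assumptions:

import org.apache.spark.sql.SparkSession

// Hypothetical reconstruction of the case class named in the stack trace.
case class Person(name: String, age: Int)

object RDD_To_DataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RDD_To_DataFrame")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Every Person(...) call runs scala.Product.$init$ inside the generated
    // constructor. If the code is compiled with a newer Scala than the
    // scala-library on the runtime classpath, that method does not exist and
    // the NoSuchMethodError above is thrown inside the executor task.
    val peopleDF = spark.sparkContext
      .textFile("people.txt")                                   // assumed input file
      .map(_.split(","))
      .map(attrs => Person(attrs(0).trim, attrs(1).trim.toInt))
      .toDF()

    peopleDF.show()
    spark.stop()
  }
}

Nothing in this code is wrong by itself; once the Scala SDK in IDEA matches the Spark build, the same program runs without changes.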
Error 2:
Error:(, ) No TypeTag available for (Array[String],)
val documentDF= spark.createDataFrame(Seq(
Fix: switch the Scala version in IDEA to 2.12.3.
This problem mainly appears when a Spark program builds a DataFrame from a Seq of tuples, for example:
val df= spark.createDataFrame(Seq(
(,Array("soyo","spark","soyo2","soyo","")),
(,Array("soyo","hadoop","soyo","hadoop","xiaozhou","soyo2","spark","","")),
(,Array("soyo","spark","soyo2","hadoop","soyo3","")),
(,Array("soyo","spark","soyo20","hadoop","soyo2","","")),
(,Array("soyo","","spark","","spark","spark",""))
)).toDF("id","words")