Details
User class threw exception: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:76)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:227)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
The currently active SparkContext was created at:
(No active SparkContext.)
at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1186)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1185)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1185)
at org.apache.phoenix.spark.PhoenixRDD.<init>(PhoenixRDD.scala:49)
at org.apache.phoenix.spark.PhoenixRelation.buildScan(PhoenixRelation.scala:39)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:381)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
... (the preceding 13 frames, from QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78) through QueryPlanner.plan(QueryPlanner.scala:93), repeat five more times as the query planner recurses) ...
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:99)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:207)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:75)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
at org.apache.spark.sql.Dataset.foreach(Dataset.scala:2716)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:284)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:263)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply$mcVJ$sp(RentOrderDistributedEtl.scala:263)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
Cause of the error
The resource shutdown had been written inside the per-batch task loop:

sqlContext.sparkSession.close()
sc.stop()

So the first iteration stopped the SparkContext, and every later iteration (the Dataset.foreach at RentOrderDistributedEtl.scala:284, driven by the loop at line 262) was calling methods on an already-stopped context. Check carefully where the shutdown is written: it must come last, after the loop has processed every batch. Moving these two calls to the end of main fixed the problem. A basic mistake, but an easy one to make.
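To make the fix concrete, here is a minimal sketch of the correct structure. The app name, paths, and batch list are illustrative placeholders, not the original job's code; the only point is that stop() runs exactly once, after the loop.

import org.apache.spark.sql.SparkSession

object RentOrderEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("RentOrderEtl").getOrCreate()

    // Hypothetical batch list; the real job iterated over time ranges.
    val days = Seq("2019-07-01", "2019-07-02", "2019-07-03")

    days.foreach { day =>
      // Each iteration reads and processes one batch.
      val df = spark.read.parquet(s"/warehouse/rent_order/dt=$day")
      println(s"$day: ${df.count()} rows")

      // BUG (what the original job did): stopping the context inside the loop
      // leaves every later iteration calling a stopped SparkContext.
      // spark.stop()  <-- do NOT do this here
    }

    // FIX: release the context exactly once, after all batches are done.
    spark.stop()
  }
}

The original code used sqlContext.sparkSession.close() and sc.stop(); either way, the shutdown belongs after the last action that touches the context.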