[Dynamic Language] PySpark Python 3.7 environment setup, and fixing py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe
Environment setup
- JDK: java version "1.8.0_66"
- Python 3.7
- spark-2.3.1-bin-hadoop2.7.tgz
- Environment variables
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=ipython3
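These two exports only apply to the current shell session. A minimal sketch of making them persistent, assuming a bash login shell on macOS (the ~/.bash_profile path is an assumption; use your own shell's startup file, e.g. ~/.zshrc):

# Assumption: bash on macOS; adjust the startup file for your shell
echo 'export PYSPARK_PYTHON=python3' >> ~/.bash_profile
echo 'export PYSPARK_DRIVER_PYTHON=ipython3' >> ~/.bash_profile
source ~/.bash_profile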
mac-abeen:spark-2.3.1-bin-hadoop2.7 abeen$ ./bin/pyspark
Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 26 2018, 20:42:06)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
Using Python version 3.7.0 (v3.7.0:1bf9cc5093, Jun 26 2018 20:42:06)
SparkSession available as 'spark'.
In [1]: sc
Out[1]: <SparkContext master=local[*] appName=PySparkShell>
In [2]: lines = sc.textFile("README.md")
In [3]: lines.count()
Out[3]: 103
In [4]: lines.first()
Out[4]: '# Apache Spark'
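The same line count can also be run non-interactively with spark-submit. A minimal sketch, assuming it is launched from the Spark distribution directory; the script path /tmp/count_lines.py and the app name are illustrative assumptions, not part of the original setup:

# Hypothetical standalone script; path and app name are illustrative
cat > /tmp/count_lines.py <<'EOF'
from pyspark import SparkContext

sc = SparkContext(appName="CountLines")   # master is supplied by spark-submit below
print(sc.textFile("README.md").count())   # same count as in the shell session above
sc.stop()
EOF
./bin/spark-submit --master "local[*]" /tmp/count_lines.py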
Fixing the Py4JJavaError on PythonRDD.collectAndServe
Note: spark-2.3.1-bin-hadoop2.7 does not yet support java version "9.0.4". If you hit this error, check whether your JDK version is supported.
./bin/pyspark
>>> lines = sc.textFile("README.md")
>>> lines.count()
Error: the following traceback is produced
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 1073, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 1064, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 935, in fold
vals = self.mapPartitions(func).collect()
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 834, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:162)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:844)
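The fix is to run Spark 2.3.1 on a Java 8 JDK instead of Java 9. A minimal sketch for macOS, assuming a JDK 8 is already installed (the /usr/libexec/java_home helper is macOS-specific):

# Confirm which JDK is active; Spark 2.3.1 expects Java 8
java -version

# macOS: point JAVA_HOME at an installed JDK 8, then relaunch the shell
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
./bin/pyspark

After switching back to JDK 1.8.0_66 (the version listed in the environment setup above), lines.count() returns 103 as in the earlier session.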