Setting up PySpark with Python 3.7, and fixing py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe

Environment setup

  1. JDK: java version "1.8.0_66"
  2. Python 3.7
  3. spark-2.3.1-bin-hadoop2.7.tgz
  4. Environment variables (a setup sketch follows the list)
    • export PYSPARK_PYTHON=python3
    • export PYSPARK_DRIVER_PYTHON=ipython3
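
To make these variables persist across shell sessions, put them in your shell profile. A minimal sketch, assuming a bash shell on macOS with python3 and ipython3 already on your PATH (the SPARK_HOME path is a placeholder; point it at wherever you unpacked the tarball):

# ~/.bash_profile
export SPARK_HOME=$HOME/spark-2.3.1-bin-hadoop2.7   # placeholder; use your actual unpack location
export PATH=$SPARK_HOME/bin:$PATH
export PYSPARK_PYTHON=python3            # Python the executors run
export PYSPARK_DRIVER_PYTHON=ipython3    # Python the driver shell runs

Run source ~/.bash_profile (or open a new terminal) before launching pyspark: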
mac-abeen:spark-2.3.1-bin-hadoop2.7 abeen$ ./bin/pyspark
Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 26 2018, 20:42:06)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
Using Python version 3.7.0 (v3.7.0:1bf9cc5093, Jun 26 2018 20:42:06)
SparkSession available as 'spark'.

In [1]: sc
Out[1]: <SparkContext master=local[*] appName=PySparkShell>

In [2]: lines = sc.textFile("README.md")

In [3]: lines.count()
Out[3]: 103

In [4]: lines.first()
Out[4]: '# Apache Spark'
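
To confirm that the workers also picked up Python 3, the SparkContext exposes the Python version it was started with; under this setup you would expect:

In [5]: sc.pythonVer
Out[5]: '3.7'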

Fixing the Py4JJavaError from PythonRDD.collectAndServe

Note: spark-2.3.1-bin-hadoop2.7 does not support java version "9.0.4". If you hit the error below, check whether your JDK version is one Spark supports.
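
You can verify which JDK Spark will pick up before launching the shell:

# Check the JDK on your PATH; "9.0.4" is the unsupported version that produced the traceback below
java -version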

./bin/pyspark
>>> lines = sc.textFile("README.md")
>>> lines.count()

The error output is as follows:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 1073, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 1064, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 935, in fold
vals = self.mapPartitions(func).collect()
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/rdd.py", line 834, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/abeen/abeen/net_source_code/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:162)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:844)
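
The root cause is the java.lang.IllegalArgumentException thrown from org.apache.xbean.asm5.ClassReader: Spark 2.3.1's closure cleaner bundles ASM 5, which cannot parse class files compiled for Java 9. The fix is to run Spark on JDK 1.8, as listed in the setup above. A sketch for macOS, assuming a 1.8 JDK is installed (/usr/libexec/java_home is macOS-specific; on Linux, point JAVA_HOME at your JDK 8 install directory instead):

# Switch the current shell to JDK 1.8 and relaunch pyspark
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
java -version      # should now report java version "1.8.0_xx"
./bin/pyspark      # lines.count() now returns 103 instead of raising Py4JJavaError

Adding the JAVA_HOME line to ~/.bash_profile next to the PYSPARK_* exports makes the fix permanent.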
