Using the Spark Interpreter in Zeppelin 0.5.6
Zeppelin version: 0.5.6
Zeppelin ships with a local Spark by default, so it can run without depending on any cluster: download the binary package, extract it, and it is ready to use.
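A minimal sketch of that standalone quickstart (the download mirror is up to you; the filename matches the 0.5.6 release):
tar -zxvf zeppelin-0.5.6-incubating-bin-all.tgz
cd zeppelin-0.5.6-incubating-bin-all
bin/zeppelin-daemon.sh start
# the web UI then listens on http://localhost:8080 by default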
To use an external Spark cluster in YARN mode instead, configure Zeppelin as follows.
Configuration:
vi zeppelin-env.sh
Add:
export SPARK_HOME=/usr/crh/current/spark-client
export SPARK_SUBMIT_OPTIONS="--driver-memory 512M --executor-memory 1G"
export HADOOP_CONF_DIR=/etc/hadoop/conf
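If conf/zeppelin-env.sh does not exist yet, create it from the bundled template first, then restart Zeppelin so the new environment takes effect (a sketch, assuming the default install layout):
cp conf/zeppelin-env.sh.template conf/zeppelin-env.sh
vi conf/zeppelin-env.sh          # add the three exports above
bin/zeppelin-daemon.sh restart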
Zeppelin Interpreter configuration
Note: restart the interpreter after changing its settings.
In the interpreter's Properties, set the master property for YARN mode:
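An illustrative set of Properties values (only master matters for this setup; the others are common defaults you may tune):
master                  yarn-client
spark.app.name          Zeppelin
spark.executor.memory   1g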
Create a new Notebook.
Tips: a few months ago Zeppelin was still at 0.5.6, and the latest release is now 0.6.2. In Zeppelin 0.5.6 every notebook paragraph must start with %spark, whereas in 0.6.2 a paragraph with no prefix defaults to the Scala interpreter.
Without the prefix, Zeppelin 0.5.6 fails with an error like:
Connect to 'databank:4300' failed
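A correctly prefixed Scala paragraph in 0.5.6 looks like this, as a quick smoke test (sc is the SparkContext the Spark interpreter injects):
%spark
sc.version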
Next, run a Spark SQL paragraph:
%spark.sql
select count(*) from tc.gjl_test0
This fails with:
com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
at [Source: {"id":"2","name":"ConvertToSafe"}; line: 1, column: 1]
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
at com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3558)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2578)
at org.apache.spark.rdd.RDDOperationScope$.fromJson(RDDOperationScope.scala:85)
at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:136)
at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:136)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:136)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.ConvertToSafe.doExecute(rowFormatConverters.scala:56)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:187)
at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:297)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:144)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Cause: a Jackson version conflict. Zeppelin 0.5.6 bundles Jackson 2.5.x, while Spark 1.x was built against Jackson 2.4.x, and the stack trace above ("Could not find creator property with name 'id'" on org.apache.spark.rdd.RDDOperationScope) is the classic symptom of mixing the two.
Go into the /opt/zeppelin-0.5.6-incubating-bin-all directory:
# ls lib |grep jackson
jackson-annotations-2.5.0.jar
jackson-core-2.5.3.jar
jackson-databind-2.5.3.jar
Replace them with the following versions:
# ls lib |grep jackson
jackson-annotations-2.4.4.jar
jackson-core-2.4.4.jar
jackson-databind-2.4.4.jar
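A sketch of the swap (back up the originals rather than deleting them; /path/to/ below is a placeholder for wherever your 2.4.4 jars live, not a real location):
cd /opt/zeppelin-0.5.6-incubating-bin-all
mkdir lib/jackson-2.5-backup
mv lib/jackson-annotations-2.5.0.jar lib/jackson-core-2.5.3.jar lib/jackson-databind-2.5.3.jar lib/jackson-2.5-backup/
# /path/to/ is a placeholder for your local copies of the 2.4.4 jars
cp /path/to/jackson-annotations-2.4.4.jar /path/to/jackson-core-2.4.4.jar /path/to/jackson-databind-2.4.4.jar lib/
bin/zeppelin-daemon.sh restart    # restart so the replaced jars are picked up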
After the swap, the test succeeds.
Spark SQL can also be queried directly over Hive JDBC; only the port needs to change from the HiveServer2 one, as shown below.
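A hedged example with beeline (port 10015 is illustrative; use whatever port your Spark Thrift Server actually listens on, and databank is the host from the examples above):
beeline -u jdbc:hive2://databank:10015    # Spark Thrift Server port, not HiveServer2's default 10000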