sparkR could not find function "textFile"
Yeah, that’s probably because the head() you’re invoking there is defined for SparkR DataFrames
[1] (note how you don’t have to put the SparkR::: namespace in front of it), but SparkR:::textFile()
returns an RDD object, which is more like a distributed list data structure, given the way you’re
applying it over that .md text file. If you want to look at the first item or the first several
items in the RDD, I think you want SparkR:::first() or SparkR:::take(), both of which
are applied to RDDs.
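For example, a minimal sketch (assuming the private RDD methods are named as in the RDD source [3]):

lines <- SparkR:::textFile(sc, "./README.md")
SparkR:::first(lines)     # returns the first line of the file
SparkR:::take(lines, 5)   # returns a list of the first 5 lines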
Just remember that all the functions described in the public API [2] for SparkR right now
relate mostly to working with DataFrames. For the private functions you might want, you’ll have to
use the R command-line doc or look at the RDD source code [3] (which includes the doc strings
used to generate the R doc), whichever you find easier.
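For instance, something like this at the R prompt should pull up a private function’s doc page (assuming the Rd files for the private functions were generated in your build):

?SparkR:::textFile
# or equivalently
help("textFile", package = "SparkR")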
Alek
[1] -- http://spark.apache.org/docs/latest/api/R/head.html
[2] -- https://spark.apache.org/docs/latest/api/R/index.html
[3] -- https://github.com/apache/spark/blob/master/R/pkg/R/RDD.R
From: Wei Zhou <zhweisophie@gmail.com>
Date: Thursday, June 25, 2015 at 3:49 PM
To: Aleksander Eskilson <Alek.Eskilson@cerner.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: sparkR could not find function "textFile"
Hi Alek,
Just a follow-up question. This is what I did in the sparkR shell:
lines <- SparkR:::textFile(sc, "./README.md")
head(lines)
And I am getting the error:
"Error in x[seq_len(n)] : object of type 'S4' is not subsettable"
I'm wondering what I did wrong. Thanks in advance.
Wei
2015-06-25 13:44 GMT-07:00 Wei Zhou <zhweisophie@gmail.com>:
Hi Alek,
Thanks for the explanation, it is very helpful.
Cheers,
Wei
2015-06-25 13:40 GMT-07:00 Eskilson,Aleksander <Alek.Eskilson@cerner.com>:
Hi there,
The tutorial you’re reading there was written before the merge of SparkR for Spark 1.4.0.
For the merge, the RDD API (which includes the textFile() function) was made private, as the
devs felt many of its functions were too low-level. They focused instead on finishing the
DataFrame API, which supports local, HDFS, and Hive/HBase file reads. In the meantime, the
devs are trying to determine which functions of the RDD API, if any, should be made public
again. You can see the rationale behind this decision in the issue’s JIRA [1].
You can still make use of those now-private RDD functions by prefixing the function call
with the SparkR private namespace; for example, you’d use
SparkR:::textFile(…).
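And if it helps with the word count example you mentioned, here’s a rough sketch against those private RDD functions -- this assumes flatMap, lapply, and reduceByKey are still defined for RDDs the way they were in the pre-merge AMPLab package, so treat the names as a starting point rather than a guarantee:

lines <- SparkR:::textFile(sc, "./README.md")
words <- SparkR:::flatMap(lines, function(line) strsplit(line, " ")[[1]])  # split each line into words
pairs <- SparkR:::lapply(words, function(word) list(word, 1L))             # make (word, 1) pairs
counts <- SparkR:::reduceByKey(pairs, "+", 2L)                             # sum counts per word, 2 partitions
SparkR:::take(counts, 10)                                                  # peek at the first few pairs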
Hope that helps,
Alek
[1] -- https://issues.apache.org/jira/browse/SPARK-7230
From: Wei Zhou <zhweisophie@gmail.com>
Date: Thursday, June 25, 2015 at 3:33 PM
To: "user@spark.apache.org" <user@spark.apache.org>
Subject: sparkR could not find function "textFile"
Hi all,
I am exploring sparkR by activating the shell and following the tutorial here: https://amplab-extras.github.io/SparkR-pkg/
And when I tried to read in a local file with textFile(sc, "file_location"), it gives the error:
could not find function "textFile".
Reading through the sparkR doc for 1.4, it seems that we need a sqlContext to import data, for
example:
people <- read.df(sqlContext, "./examples/src/main/resources/people.json", "json")
And we need to specify the file type.
My question is: has sparkR stopped supporting general file importing? If not, I would appreciate
any help on how to do this.
PS, I am trying to recreate the word count example in sparkR, and want to import the README.md
file, or just any file, into sparkR.
Thanks in advance.
Best,
Wei