When working with big data in R (say, using Spark and sparklyr) we have found it very convenient to keep our data handles in a neat list or data_frame.

Please read on for our handy hints on keeping your data handles neat.

When using R to work over a big data system (such as Spark), much of your work is done through "data handles" and not actual data (data handles are objects that control access to remote data).

Data handles are a lot like sockets or file handles in that they cannot be safely serialized and restored (i.e., you cannot save them into a .RDS file and then restore them into another session). This means that when you are starting or re-starting a project you must "ready" all of your data references again. Your projects will be much easier to manage and document if you load your references using the methods we show below.
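For example, here is a sketch of the failure mode (mtcars_tbl is a hypothetical handle, and sc is a Spark connection like the one we create below):

# Do NOT do this: a handle is only a reference plus a live connection.
mtcars_tbl <- dplyr::copy_to(sc, mtcars, "mtcars")
saveRDS(mtcars_tbl, "mtcars_handle.RDS")  # saves the reference, not the data
# ... later, in a new R session ...
# h <- readRDS("mtcars_handle.RDS")
# head(h)  # fails: the restored handle points to a dead connection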

Let’s set up our example Spark cluster:

library("sparklyr")
#packageVersion('sparklyr')
suppressPackageStartupMessages(library("dplyr"))
#packageVersion('dplyr')
suppressPackageStartupMessages(library("tidyr")) # Please see the following video for installation help
# https://youtu.be/qnINvPqcRvE
# spark_install(version = "2.0.2") # set up a local "practice" Spark instance
sc <- spark_connect(master = "local",
version = "2.0.2")
#print(sc)

Data is much easier to manage than code, and much easier to compute over. So the more information you can keep as pure data, the better off you will be. In this case we are loading the chosen names and paths of the Parquet data we wish to work with from an external file that is easy for the user to edit.
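For concreteness, here is what such a tableCollection.csv file could look like (reconstructed from the print() output below; the exact names and paths are up to the user):

tableName,tablePath
data_01,data_01
data_02,data_02
data_03,data_03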

# Read the user's specification of files and paths.
userSpecification <- read.csv('tableCollection.csv',
                              header = TRUE,
                              strip.white = TRUE,
                              stringsAsFactors = FALSE)
print(userSpecification)
##   tableName tablePath
## 1   data_01   data_01
## 2   data_02   data_02
## 3   data_03   data_03
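The article assumes these Parquet files already exist. If you want a fully reproducible run, here is one hedged way to generate matching practice files (our assumption, not part of the original recipe); the column names a_01, b_02, and so on are chosen to match the head() output shown later.

# Assumption: generate three small practice tables and write them as
# Parquet, using the Spark connection `sc` from above.
for(i in 1:3) {
  di <- as.data.frame(matrix(runif(10 * i), nrow = 10, ncol = i))
  names(di) <- sprintf("%s_%02d", letters[seq_len(i)], i)
  tmpi <- dplyr::copy_to(sc, di, name = sprintf("tmp_%02d", i),
                         overwrite = TRUE)
  sparklyr::spark_write_parquet(tmpi, path = sprintf("data_%02d", i))
}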

We can now read these Parquet files (usually stored in Hadoop) into our Spark environment as follows.

readParquets <- function(userSpecification) {
  userSpecification <- as_data_frame(userSpecification)
  userSpecification$handle <- lapply(
    seq_len(nrow(userSpecification)),
    function(i) {
      spark_read_parquet(sc,
                         name = userSpecification$tableName[[i]],
                         path = userSpecification$tablePath[[i]])
    }
  )
  userSpecification
}

tableCollection <- readParquets(userSpecification)
print(tableCollection)
## # A tibble: 3 x 3
##   tableName tablePath          handle
##       <chr>     <chr>          <list>
## 1   data_01   data_01 <S3: tbl_spark>
## 2   data_02   data_02 <S3: tbl_spark>
## 3   data_03   data_03 <S3: tbl_spark>
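The handle column holds ordinary tbl_spark references, so any one of them can be used directly with dplyr (a quick sketch):

# Work with one of the stored handles as usual
# (the computation stays in Spark).
d1 <- tableCollection$handle[[1]]
d1 %>% dplyr::summarize(n = n())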

A data.frame is a great place to keep what you know about your Spark handles in one place. Let’s add some details to our Spark handles.

addDetails <- function(tableCollection) {
  tableCollection <- as_data_frame(tableCollection)
  # get the references
  tableCollection$handle <-
    lapply(tableCollection$tableName,
           function(tableNamei) {
             dplyr::tbl(sc, tableNamei)
           })
  # add tableNames to handles for convenience and printing
  names(tableCollection$handle) <-
    tableCollection$tableName
  # add in some details (note: nrow can be expensive)
  tableCollection$nrow <- vapply(tableCollection$handle,
                                 nrow,
                                 numeric(1))
  tableCollection$ncol <- vapply(tableCollection$handle,
                                 ncol,
                                 numeric(1))
  tableCollection
}

tableCollection <- addDetails(userSpecification)

# convenient printing
print(tableCollection)
## # A tibble: 3 x 5
##   tableName tablePath          handle  nrow  ncol
##       <chr>     <chr>          <list> <dbl> <dbl>
## 1   data_01   data_01 <S3: tbl_spark>    10     1
## 2   data_02   data_02 <S3: tbl_spark>    10     2
## 3   data_03   data_03 <S3: tbl_spark>    10     3
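An aside: nrow() on a handle forces a count over the remote data. sparklyr also supplies sdf_nrow() and sdf_ncol(), which make it explicit that the work is remote; using them inside addDetails() is a variation we suggest, not part of the original recipe:

# Possible variation, using sparklyr's sdf_nrow():
tableCollection$nrow <- vapply(tableCollection$handle,
                               function(h) as.numeric(sparklyr::sdf_nrow(h)),
                               numeric(1))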
# look at the top of each table (also forces
# evaluation!).
lapply(tableCollection$handle,
       head)
## $data_01
## Source:   query [6 x 1]
## Database: spark connection master=local[4] app=sparklyr local=TRUE
## 
## # A tibble: 6 x 1
##        a_01
##       <dbl>
## 1 0.8274947
## 2 0.2876151
## 3 0.6638404
## 4 0.1918336
## 5 0.9111187
## 6 0.8802026
## 
## $data_02
## Source:   query [6 x 2]
## Database: spark connection master=local[4] app=sparklyr local=TRUE
## 
## # A tibble: 6 x 2
##        a_02       b_02
##       <dbl>      <dbl>
## 1 0.3937457 0.34936496
## 2 0.0195079 0.74376380
## 3 0.9760512 0.00261368
## 4 0.4388773 0.70325800
## 5 0.9747534 0.40327283
## 6 0.6054003 0.53224218
## 
## $data_03
## Source:   query [6 x 3]
## Database: spark connection master=local[4] app=sparklyr local=TRUE
## 
## # A tibble: 6 x 3
##         a_03      b_03        c_03
##        <dbl>     <dbl>       <dbl>
## 1 0.59512263 0.2615939 0.592753768
## 2 0.72292799 0.7287428 0.003926143
## 3 0.51846687 0.3641869 0.874463146
## 4 0.01174093 0.9648346 0.177722575
## 5 0.86250126 0.3891915 0.857614579
## 6 0.33082723 0.2633013 0.233822140

A particularly slick trick is to expand the columns column into a taller table that allows us to quickly identify which columns are in which tables.

columnDictionary <- function(tableCollection) {
  tableCollection$columns <-
    lapply(tableCollection$handle,
           colnames)
  columnMap <- tableCollection %>%
    select(tableName, columns) %>%
    unnest(columns)
  columnMap
}

columnMap <- columnDictionary(tableCollection)
print(columnMap)
## # A tibble: 6 x 2
##   tableName columns
##       <chr>   <chr>
## 1   data_01    a_01
## 2   data_02    a_02
## 3   data_02    b_02
## 4   data_03    a_03
## 5   data_03    b_03
## 6   data_03    c_03
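For example, the column dictionary turns questions like "which tables carry a given column?" into a simple filter (a sketch):

# Which tables contain a column named "a_03"? (only data_03, here)
columnMap %>%
  dplyr::filter(columns == "a_03")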

The idea is: place all of the above functions into a shared script or package, and then use them to organize the loading of your Spark data references. With this practice you will have much less "spaghetti code", you will better document your intent, and you will have a more versatile workflow.

The principles we are using include:

  • Keep configuration out of code (i.e., maintain the file list in a spreadsheet). This makes working with others much easier.
  • Treat configuration as data (i.e., make sure the configuration is a nice regular table so that you can use R tools such as tidyr::unnest() to work with it).

Reposted from: http://www.win-vector.com/blog/2017/05/managing-spark-data-handles-in-r/
