How-to go parallel in R – basics + tips (repost)
Today is a good day to start parallelizing your code. I’ve been using the parallel package since its integration with R (v. 2.14.0) and it’s much easier than it at first seems. In this post I’ll go through the basics for implementing parallel computations in R, cover a few common pitfalls, and give tips on how to avoid them.
The common motivation behind parallel computing is that something is taking too long. For me that means any computation that takes more than 3 minutes – this is because parallelization is incredibly simple and most tasks that take time are embarrassingly parallel. Here are a few common tasks that fit the description:
- Bootstrapping
- Cross-validation
- Multivariate Imputation by Chained Equations (MICE)
- Fitting multiple regression models
Learning lapply is key
One thing I regret is not learning lapply earlier. The function is beautiful in its simplicity: it takes a vector/list as its first parameter, feeds each element into the function, and returns a list:
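```r
# For example: return each element together with its square and cube
lapply(1:3, function(x) c(x, x^2, x^3))
```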
- [[1]]
- [1] 1 1 1
- [[2]]
- [1] 2 4 8
- [[3]]
- [1] 3 9 27
You can feed it additional values by adding named parameters:
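```r
# A small sketch consistent with the output below:
# the named argument 'digits' is passed on to round()
lapply(1:3/3, round, digits = 3)
```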
- [[1]]
- [1] 0.333
- [[2]]
- [1] 0.667
- [[3]]
- [1] 1
The tasks are embarrassingly parallel as the elements are calculated independently, i.e. the second element is independent of the result from the first element. After learning to code using lapply you will find that parallelizing your code is a breeze.
The parallel package
The parallel package is basically about doing the above in parallel. The main difference is that we need to start by setting up a cluster, a collection of “workers” that will be doing the job. A good number of workers is the number of available cores – 1. I’ve found that using all 8 cores on my machine will prevent me from doing anything else (the computer comes to a standstill until the R task has finished). I therefore always set up the cluster as follows:
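```r
library(parallel)

# Calculate the number of cores, leaving one free for the rest of the system
no_cores <- detectCores() - 1

# Initiate the cluster
cl <- makeCluster(no_cores)
```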
Now we just call the parallel version of lapply, parLapply:
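```r
parLapply(cl, 2:4, function(exponent) 2^exponent)
```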
- [[1]]
- [1] 4
- [[2]]
- [1] 8
- [[3]]
- [1] 16
Once we are done we need to close the cluster so that resources such as memory are returned to the operating system.
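```r
stopCluster(cl)
```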
variable scope
On Mac/Linux you have the option of using makeCluster(no_cores, type = "FORK"), which automatically inherits all environment variables (more details on this below). On Windows you have to use the Parallel Socket Cluster (PSOCK), which starts out with only the base packages loaded (note that PSOCK is the default on all systems). You should therefore always specify exactly which variables and libraries you need for the parallel function to work, e.g. the following fails:
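```r
cl <- makeCluster(no_cores)

base <- 2
parLapply(cl, 2:4, function(exponent) base^exponent)
```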
- Error in checkForRemoteErrors(val) :
- 3 nodes produced errors; first error: object 'base' not found
While this passes:
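```r
cl <- makeCluster(no_cores)

base <- 2
# Make 'base' available to every worker
clusterExport(cl, "base")
parLapply(cl, 2:4, function(exponent) base^exponent)

stopCluster(cl)
```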
- [[1]]
- [1] 4
- [[2]]
- [1] 8
- [[3]]
- [1] 16
Note that you need the clusterExport(cl, "base") in order for the function to see the base variable. If you are using some special packages you will similarly need to load those through clusterEvalQ, e.g. I often use the rms package and I therefore use clusterEvalQ(cl, library(rms)). Note that any changes to the variable after clusterExport are ignored:
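```r
cl <- makeCluster(no_cores)

base <- 2
clusterExport(cl, "base")
base <- 4   # this change never reaches the workers
parLapply(cl, 2:4, function(exponent) base^exponent)

stopCluster(cl)
```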
- [[1]]
- [1] 4
- [[2]]
- [1] 8
- [[3]]
- [1] 16
using parSapply
Sometimes we only want to return a simple value and directly get it processed as a vector/matrix. The lapply version that does this is called sapply, thus it is hardly surprising that its parallel version is parSapply:
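```r
# A fresh cluster with 'base' exported, as in the examples above
cl <- makeCluster(no_cores)
base <- 2
clusterExport(cl, "base")

parSapply(cl, 2:4, function(exponent) base^exponent)
```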
- [1] 4 8 16
Matrix output with names (this is why we need the as.character):
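```r
parSapply(cl, as.character(2:4), function(exponent) {
  x <- as.numeric(exponent)
  c(base = base^x, self = x^x)
})
```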
- 2 3 4
- base 4 8 16
- self 4 27 256
The foreach package
The idea behind the foreach package is to create ‘a hybrid of the standard for loop and lapply function’, and its ease of use has made it rather popular. The set-up is slightly different: you need to “register” the cluster as below:
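```r
library(foreach)
library(doParallel)

cl <- makeCluster(no_cores)
registerDoParallel(cl)
```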
Note that you can change the last two lines to:
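```r
# Registers an implicit cluster with no_cores workers
registerDoParallel(no_cores)
```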
But then you need to remember, instead of calling stopCluster() at the end, to do:
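```r
stopImplicitCluster()
```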
The foreach function can be viewed as a more controlled version of parSapply that allows combining the results into a suitable format. By specifying the .combine argument we can choose how to combine our results; below are a vector, a matrix, and a list example:
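```r
base <- 2

foreach(exponent = 2:4, .combine = c) %dopar%
  base^exponent
```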
- [1] 4 8 16
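```r
foreach(exponent = 2:4, .combine = rbind) %dopar%
  base^exponent
```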
- [,1]
- result.1 4
- result.2 8
- result.3 16
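```r
foreach(exponent = 2:4, .combine = list, .multicombine = TRUE) %dopar%
  base^exponent
```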
- [[1]]
- [1] 4
- [[2]]
- [1] 8
- [[3]]
- [1] 16
Note that the last is the default and can be achieved without any tweaking, just foreach(exponent = 2:4) %dopar%. In the example it is worth noting the .multicombine argument that is needed to avoid a nested list. The nesting occurs due to the sequential .combine function calls, i.e. list(list(result.1, result.2), result.3):
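```r
foreach(exponent = 2:4, .combine = list) %dopar%
  base^exponent
```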
- [[1]]
- [[1]][[1]]
- [1] 4
- [[1]][[2]]
- [1] 8
- [[2]]
- [1] 16
variable scope
The variable scope constraints are slightly different for the foreach package. Variables within the same local environment are by default available:
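```r
base <- 2

foreach(exponent = 2:4, .combine = c) %dopar%
  base^exponent
```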
- [1] 4 8 16
While variables from a parent environment will not be available, i.e. the following will throw an error:
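```r
test <- function(exponent) {
  foreach(exponent = 2:4, .combine = c) %dopar%
    base^exponent
}
test()
```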
- Error in base^exponent : task 1 failed - "object 'base' not found"
A nice feature is that you can use the .export option instead of clusterExport. Note that as it is part of the parallel call it will have the latest version of the variable, i.e. the following change in “base” will work:
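```r
base <- 4
test <- function(exponent) {
  foreach(exponent = 2:4,
          .combine = c,
          .export = "base") %dopar%
    base^exponent
}

base <- 2   # .export picks up the value at call time, not at definition
test()
```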
- [1] 4 8 16
Similarly you can load packages with the .packages option, e.g. .packages = c("rms", "mice"). I strongly recommend always exporting the variables you need, as it limits issues that arise when encapsulating the code within functions.
Fork or sock?
I do most of my analyses on Windows and have therefore gotten used to the PSOCK system. For those of you on other systems you should be aware of some important differences between the two main alternatives:
FORK: "to divide in branches and go separate ways"
Systems: Unix/Mac (not Windows)
Environment: Link all
PSOCK: Parallel Socket Cluster
Systems: All (including Windows)
Environment: Empty
memory handling
Unless you are using multiple computers or Windows, or are planning on sharing your code with someone using a Windows machine, you should try to use FORK (I use capitals due to the makeCluster type argument). It is leaner on memory usage since the workers link to the same address space. Below you can see that the memory addresses of variables exported to a PSOCK cluster are not the same as the originals:
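A sketch of the check – it uses address() from the pryr package (an assumption; any address inspector will do) and ten workers so the comparison returns one value per worker:

```r
library(pryr)   # pryr::address() returns an object's memory location

cl <- makeCluster(10, type = "PSOCK")
a <- 1:1e7
clusterExport(cl, "a")

# Compare the address of the copy on each worker with the master's address
unlist(clusterEvalQ(cl, pryr::address(a))) == pryr::address(a)

stopCluster(cl)
```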
- [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
While they are for FORK clusters:
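```r
cl <- makeCluster(10, type = "FORK")
# With a fork the variable is inherited automatically -- no clusterExport() needed
unlist(clusterEvalQ(cl, pryr::address(a))) == pryr::address(a)
stopCluster(cl)
```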
- [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
This can save a lot of time during setup and also memory. Interestingly, you do not need to worry about variable corruption:
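A sketch of what copy-on-write means in practice (an assumed example, not a definitive one):

```r
cl <- makeCluster(no_cores, type = "FORK")

b <- 0
# Each worker modifies its own copy-on-write copy of 'b'...
clusterEvalQ(cl, b <- b + 1)
# ...while the master's 'b' remains untouched
b

stopCluster(cl)
```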
Debugging
Debugging is especially hard when working in a parallelized environment. You cannot simply call browser/cat/print in order to find out what the issue is.
the tryCatch – list approach
Using stop() for debugging without modification is generally a bad idea: while you will receive the error message, there is a large chance that you have forgotten about that stop(), and it gets invoked once you have run your software for a day or two. It is annoying to throw away all the previous successful computations just because one failed (yup, this is the default behavior of all the functions above). You should therefore try to catch errors and return a text explaining the setting that caused the error:
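```r
# Assumes a registered doParallel backend, as set up earlier
foreach(x = list(1, 2, "a")) %dopar% {
  tryCatch({
    c(1/x, x, 2^x)
  }, error = function(e) {
    paste0("The variable '", x, "' caused the error: '", e, "'")
  })
}
```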
- [[1]]
- [1] 1 1 2
- [[2]]
- [1] 0.5 2.0 4.0
- [[3]]
- [1] "The variable 'a' caused the error: 'Error in 1/x: non-numeric argument to binary operator\n'"
This is also why I like lists: the .combine may look appealing, but it is easy to apply manually afterwards, and if you have a function that crashes when one of the elements is not of the expected type you will lose all your data. Here is a simple example of how to call rbind on a lapply output:
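```r
out <- lapply(1:3, function(x) c(x, 2^x, x^x))
do.call(rbind, out)
```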
- [,1] [,2] [,3]
- [1,] 1 2 1
- [2,] 2 4 4
- [3,] 3 8 27
creating a common output file
Since we can’t get a console for each worker, we can instead set up a shared output file. I would say that this is a “last resort” solution:
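A sketch using makeCluster()'s outfile argument (the file name is just an example):

```r
cl <- makeCluster(no_cores, outfile = "debug.txt")
registerDoParallel(cl)

foreach(x = list(1, 2, "a")) %dopar% print(x)

stopCluster(cl)
```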
- starting worker pid=7392 on localhost:11411 at 00:11:21.077
- starting worker pid=7276 on localhost:11411 at 00:11:21.319
- starting worker pid=7576 on localhost:11411 at 00:11:21.762
- [1] 2]
- [1] "a"
As you can see, due to a race between the first and the second node the output is a little garbled, and therefore in my opinion less useful than returning a custom statement.
creating a node-specific file
A perhaps slightly more appealing alternative is to have a node-specific file. This could potentially be interesting when you have a dataset that is causing some issues and you want to have a closer look at that data set:
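For instance, each element can be written to its own file (a sketch – the file naming is an assumption):

```r
cl <- makeCluster(no_cores)
registerDoParallel(cl)

foreach(x = list(1, 2, "a")) %dopar% {
  # dump each element to a file named after it
  dput(x, file = paste0("debug_file_", x, ".txt"))
}

stopCluster(cl)
```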
A tip is to combine this with your tryCatch – list approach. Thereby you can extract any data that is not suitable for a simple message (e.g. a large data.frame), load it, and debug without the parallel layer. If the x is too long for a file name I suggest that you use digest as described below for the cache function.
the partools package
There is an interesting package, partools, that has a dbs() function that may be worth looking into (unless you're on a Windows machine). It allows coupling terminals per process and debugging through them.
Caching
I strongly recommend implementing some caching when doing large computations. There may be a multitude of reasons why you need to exit a computation, and it would be a pity to waste all that valuable time. There is a package for caching, R.cache, but I’ve found it easier to write the function myself. All you need is the digest package: by feeding the data plus the function that you are using to digest() you get a unique key; if that key matches a previous calculation there is no need for re-running that particular section. Here is a function with caching:
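A sketch of such a function (the helper names and the 5-second sleep standing in for a slow computation are assumptions; it expects the PSOCK cluster cl from earlier):

```r
library(digest)

cacheParallel <- function() {
  vars <- 1:2
  # Make sure the workers can compute the digest key
  tmp <- clusterEvalQ(cl, library(digest))

  parSapply(cl, vars, function(var) {
    fn <- function(a) a^2
    # The key is built from the function and its input
    dg <- digest(list(fn, var))
    cache_fn <- sprintf("Cache_%s.Rdata", dg)
    if (file.exists(cache_fn)) {
      load(cache_fn)
    } else {
      var <- fn(var)
      Sys.sleep(5)  # stand-in for an expensive computation
      save(var, file = cache_fn)
    }
    return(var)
  })
}
```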
When running the code it is pretty obvious that the Sys.sleep is not invoked the second time around:
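```r
system.time(out <- cacheParallel())  # takes a little over 5 seconds
out
system.time(out <- cacheParallel())  # near-instant: results come from the cache files
out

# Clean up the cache files
file.remove(list.files(pattern = "Cache.+\\.Rdata"))
```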
Load balancing
Balancing the workload so that the cores carry similar loads and don’t fight over memory resources is central to a successful parallelization scheme.
work load
Note that parLapply and foreach are wrapper functions. This means that they are not directly doing the processing of the parallel code, but rely on other functions for that. In parLapply the function is defined as:
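```r
# From the parallel package (may differ slightly between R versions)
parLapply <- function (cl = NULL, X, fun, ...)
{
    cl <- defaultCluster(cl)
    do.call(c, clusterApply(cl, x = splitList(X, length(cl)),
                            fun = lapply, fun, ...), quote = TRUE)
}
```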
Note the splitList(X, length(cl)). This will split the tasks into even portions and send them out to the workers. If many of the tasks are cached, or there is a big computational difference between them, you risk ending up with only one worker actually doing anything while the others sit idle. To avoid this you should, when caching, try to remove the already-cached tasks from X, or try to mix everything into an even workload. E.g. if we want to find the optimal number of neurons in a neural network we may want to change:
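A sketch of the idea, where train_network() is a hypothetical training helper:

```r
# All the heavy (large) networks end up in the same chunks
foreach(n_neurons = c(10, 10, 10, 10, 5, 5, 5, 5, 3, 3, 3, 3)) %dopar%
  train_network(n_neurons)
```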
to:
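```r
# Interleaving the sizes spreads the heavy tasks evenly across the workers
foreach(n_neurons = rep(c(10, 5, 3), 4)) %dopar%
  train_network(n_neurons)
```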
memory load
Running large datasets in parallel can quickly get you into trouble. If you run out of memory the system will either crash or run incredibly slowly. The former happens to me on Linux systems, while the latter is quite common on Windows systems. You should therefore always monitor your parallelization to make sure that you aren’t too close to the memory ceiling.
Using FORKs is an important tool for handling memory ceilings. As the workers link to the original variables’ address space, a fork requires no time for exporting variables and takes up no additional space when merely reading them. The impact on performance can be significant (my system has 16 GB of memory and eight cores):
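A sketch of the comparison (the data size is an assumption): the PSOCK cluster has to serialize and copy the object to every worker, while the FORK cluster sees it immediately:

```r
big_data <- matrix(rnorm(2e7), ncol = 100)       # roughly 160 MB

psock <- makeCluster(no_cores, type = "PSOCK")
system.time(clusterExport(psock, "big_data"))    # copies the matrix to every worker
stopCluster(psock)

fork <- makeCluster(no_cores, type = "FORK")
system.time(clusterEvalQ(fork, dim(big_data)))   # the data is already there
stopCluster(fork)
```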
FORKs can also enable you to run code in parallel that would otherwise crash:
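A sketch of the scenario (sizes assumed): with 16 GB of RAM a ~3 GB object cannot be copied to seven PSOCK workers, while seven FORK workers simply share the single original copy:

```r
huge <- matrix(rnorm(4e8), ncol = 1000)                  # roughly 3 GB

psock <- makeCluster(no_cores, type = "PSOCK")
try(clusterExport(psock, "huge"))                        # likely exhausts the memory
stopCluster(psock)

fork <- makeCluster(no_cores, type = "FORK")
parSapply(fork, 1:no_cores, function(i) sum(huge[, i]))  # runs fine on the shared copy
stopCluster(fork)
```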
It won’t save you from yourself, though, as you can see below when we create an intermediate variable that takes up storage space:
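A sketch of the pitfall: as soon as each worker derives a new object from the shared data, copy-on-write forces a full per-worker copy:

```r
fork <- makeCluster(no_cores, type = "FORK")

res <- parLapply(fork, 1:no_cores, function(i) {
  tmp <- huge * i      # 'tmp' is a brand new ~3 GB object in every worker
  colMeans(tmp)
})

stopCluster(fork)
```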
memory tips
- Frequently use rm() in order to avoid having unused variables around.
- Frequently call the garbage collector gc(). Although this should happen automatically in R, I’ve found that while it may release the memory locally, it may not return it to the operating system (OS). This makes sense when running a single instance, as garbage collection is a time-expensive procedure, but if you have multiple processes it may not be a good strategy. Each process needs to get its memory from the OS, and it is therefore vital that each process returns memory once it no longer needs it.
- Although it is often better to parallelize at a large scale due to initialization costs, in memory-constrained situations it may be better to parallelize at a small scale, i.e. in subroutines.
- I sometimes run code in parallel, cache the results, and once I reach the limit I change to sequential.
- You can also manually limit the number of cores; using all the cores is of no use if the memory isn’t large enough. A simple way to think of it is: memory.limit()/memory.size() = max cores
Other tips
- A general core detector function that I often use is: max(1, detectCores() - 1)
- Never use set.seed(); use clusterSetRNGStream() instead to set the cluster seed if you want reproducible results.
- If you have an Nvidia GPU card, you can get huge gains from micro-parallelization through the gputools package (warning though, the installation can be rather difficult…).
- When using mice in parallel remember to use ibind() for combining the imputations.