Spark RDD Actions: Simple Examples (Part 1)
collectAsMap(): Map[K, V]
Returns the key-value pairs of this RDD to the driver as a Map. Keys in the result are unique: if the same key maps to several values in the RDD, only one of those values is kept.
/**
* Return the key-value pairs in this RDD to the master as a Map.
*
* Warning: this doesn't return a multimap (so if you have multiple values to the same key, only
* one value per key is preserved in the map returned)
*
* @note this method should only be used if the resulting data is expected to be small, as
* all the data is loaded into the driver's memory.
*/
def collectAsMap(): Map[K, V]
scala> val rdd = sc.parallelize(List(("A",1),("A",2),("A",3),("B",1),("B",2),("C",3)),3)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> rdd.collectAsMap
res0: scala.collection.Map[String,Int] = Map(A -> 3, C -> 3, B -> 2)
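If every value for a key must be kept, a common workaround (shown here as a sketch) is to group the values before collecting:

// Group values per key first, then collect; the result is a local
// Map[String, Iterable[Int]] instead of Map[String, Int].
// Like collectAsMap, this pulls everything to the driver, so it only suits small results.
val allValues = rdd.groupByKey().collectAsMap()
// allValues("A") now contains 1, 2 and 3 rather than a single surviving value.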
countByKey(): Map[K, Long]
Counts the number of elements for each key and returns the result as a local Map (key -> count).
/**
* Count the number of elements for each key, collecting the results to a local Map.
*
* Note that this method should only be used if the resulting map is expected to be small, as
* the whole thing is loaded into the driver's memory.
* To handle very large results, consider using rdd.mapValues(_ => 1L).reduceByKey(_ + _), which
* returns an RDD[T, Long] instead of a map.
*/
def countByKey(): Map[K, Long] = self.withScope {
self.mapValues(_ => 1L).reduceByKey(_ + _).collect().toMap
}
scala> val rdd = sc.parallelize(List((1,1),(1,2),(1,3),(2,1),(2,2),(2,3)),3)
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[5] at parallelize at <console>:24
scala> rdd.countByKey
res5: scala.collection.Map[Int,Long] = Map(1 -> 3, 2 -> 3)
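For large result sets the Scaladoc suggests keeping the counts distributed instead of collecting them into driver memory; a minimal sketch (the output path is hypothetical):

// The counts stay in an RDD rather than being loaded onto the driver.
val countsRdd = rdd.mapValues(_ => 1L).reduceByKey(_ + _)
countsRdd.saveAsTextFile("/tmp/key-counts")   // hypothetical output path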
countByValue()
Returns the count of each unique value in the RDD as a local Map of (value, count) pairs. Internally, each element is first mapped to a (value, null) pair and countByKey is then used to do the counting.
/**
* Return the count of each unique value in this RDD as a local map of (value, count) pairs.
*
* Note that this method should only be used if the resulting map is expected to be small, as
* the whole thing is loaded into the driver's memory.
* To handle very large results, consider using rdd.map(x => (x, 1L)).reduceByKey(_ + _), which
* returns an RDD[T, Long] instead of a map.
*/
def countByValue()(implicit ord: Ordering[T] = null): Map[T, Long] = withScope {
map(value => (value, null)).countByKey()
}
scala> val rdd = sc.parallelize(List(1,2,3,4,5,4,4,3,2,1))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[18] at parallelize at <console>:24
scala> rdd.countByValue
res12: scala.collection.Map[Int,Long] = Map(5 -> 1, 1 -> 2, 2 -> 2, 3 -> 2, 4 -> 3)
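Again, when the number of distinct values may be large, the Scaladoc recommends the distributed form; a minimal sketch:

// Equivalent counting that keeps the result as an RDD[(Int, Long)].
val valueCounts = rdd.map(x => (x, 1L)).reduceByKey(_ + _)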
lookup(key: K)
Returns all values in the RDD that are associated with the given key.
/**
* Return the list of values in the RDD for key `key`. This operation is done efficiently if the
* RDD has a known partitioner by only searching the partition that the key maps to.
*/
def lookup(key: K): Seq[V]
scala> val rdd = sc.parallelize(List(("A",1),("A",2),("A",3),("B",1),("B",2),("C",3)),3)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[3] at parallelize at <console>:24
scala> rdd.lookup("A")
res2: Seq[Int] = WrappedArray(1, 2, 3)
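The "known partitioner" optimization mentioned in the Scaladoc only applies if the RDD has been partitioned by key; a sketch of how to set that up (the partition count 3 is arbitrary):

import org.apache.spark.HashPartitioner
// Pre-partition by key so that lookup only scans the one partition the key
// hashes to, instead of all partitions.
val partitioned = rdd.partitionBy(new HashPartitioner(3)).cache()
partitioned.lookup("A")   // Seq(1, 2, 3)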
checkpoint()
Marks the RDD for checkpointing: its data will be written to disk under the checkpoint directory set with SparkContext#setCheckpointDir.
/**
* Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint
* directory set with `SparkContext#setCheckpointDir` and all references to its parent
* RDDs will be removed. This function must be called before any job has been
* executed on this RDD. It is strongly recommended that this RDD is persisted in
* memory, otherwise saving it on a file will require recomputation.
*/
def checkpoint(): Unit
/* After creating the /home/check directory on the local filesystem, set it as the checkpoint directory */
scala> sc.setCheckpointDir("/home/check")
scala> val rdd = sc.parallelize(List(("A",1),("A",2),("A",3),("B",1),("B",2),("C",3)),3)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[6] at parallelize at <console>:24
/* Running the next line creates an empty directory under /home/check, e.g. /home/check/5545e4ca-d53d-4d93-aaf4-fd3c74f1ea49 */
scala> rdd.checkpoint
/* Running count afterwards creates an rdd-<id> subdirectory in that directory, holding the checkpointed data files */
scala> rdd.count
res5: Long = 6
[root@localhost ~]# ll -a /home/check/5545e4ca-d53d-4d93-aaf4-fd3c74f1ea49/
# right after rdd.checkpoint the directory is empty (only . and ..)
[root@localhost ~]# ll -a /home/check/5545e4ca-d53d-4d93-aaf4-fd3c74f1ea49/
# after rdd.count it contains a single rdd-<id> subdirectory
[root@localhost ~]# ll -a /home/check/5545e4ca-d53d-4d93-aaf4-fd3c74f1ea49/rdd-<id>/
# one part-<n> data file plus a hidden .part-<n>.crc checksum per partition
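As the Scaladoc recommends, persisting the RDD before checkpointing avoids computing it twice (once for the job, once more when the data is written to the checkpoint directory). A minimal sketch:

sc.setCheckpointDir("/home/check")
val rdd = sc.parallelize(List(("A",1),("A",2),("A",3),("B",1),("B",2),("C",3)), 3)
rdd.cache()          // keep the computed partitions in memory
rdd.checkpoint()     // must be called before any job runs on this RDD
rdd.count()          // triggers the computation and the checkpoint write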
collect()
Returns an array containing all of the elements of the RDD.
/**
* Return an array that contains all of the elements in this RDD.
*
* @note this method should only be used if the resulting array is expected to be small, as
* all the data is loaded into the driver's memory.
*/
def collect(): Array[T]
scala> val rdd = sc.parallelize(1 to 10,3)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[10] at parallelize at <console>:24
scala> rdd.collect
res8: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
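When only a few elements are needed on the driver, take(n) avoids materializing the whole RDD the way collect() does; for example:

val firstThree = rdd.take(3)   // Array(1, 2, 3)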
toLocalIterator: Iterator[T]
Returns an iterator over all of the elements of the RDD.
/**
* Return an iterator that contains all of the elements in this RDD.
*
* The iterator will consume as much memory as the largest partition in this RDD.
*
* Note: this results in multiple Spark jobs, and if the input RDD is the result
* of a wide transformation (e.g. join with different partitioners), to avoid
* recomputing the input RDD should be cached first.
*/
def toLocalIterator: Iterator[T]
scala> val rdd = sc.parallelize(1 to 10,2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val it = rdd.toLocalIterator
it: Iterator[Int] = non-empty iterator
scala> while(it.hasNext){
| println(it.next)
| }
1
2
3
4
5
6
7
8
9
10
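Because toLocalIterator runs a separate job per partition, caching the RDD first avoids recomputing its lineage for every partition that is iterated; a sketch:

rdd.cache()
rdd.toLocalIterator.foreach(println)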
count()
Returns the number of elements in the RDD.
/**
* Return the number of elements in the RDD.
*/
def count(): Long
scala> val rdd = sc.parallelize(1 to 10,2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> rdd.count
res1: Long = 10
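count() is also a common way to force evaluation, for example to populate a cache before further actions; a sketch:

rdd.cache()
rdd.count()   // materializes the RDD; later actions read from the cache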
dependencies
Returns the list of dependencies of this RDD, i.e. its links to the parent RDDs it was derived from.
/**
* Get the list of dependencies of this RDD, taking into account whether the
* RDD is checkpointed or not.
*/
final def dependencies: Seq[Dependency[_]]
scala> val rdd = sc.parallelize(1 to 10,2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val rdd1 = rdd.filter(_>3)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at filter at <console>:26
scala> val rdd2 = rdd1.filter(_<6)
rdd2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[2] at filter at <console>:28
scala> rdd2.dependencies
res2: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.OneToOneDependency@21c882b5)
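For a human-readable view of the same lineage, toDebugString prints the full chain of parent RDDs:

println(rdd2.toDebugString)   // shows the two MapPartitionsRDDs on top of the ParallelCollectionRDD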
partitions
Returns the partitions of the RDD as an array of Partition objects.
/**
* Get the array of partitions of this RDD, taking into account whether the
* RDD is checkpointed or not.
*/
final def partitions: Array[Partition]
scala> val rdd = sc.parallelize(1 to 10,2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:24
scala> rdd.partitions
res4: Array[org.apache.spark.Partition] = Array(org.apache.spark.rdd.ParallelCollectionPartition@70c, org.apache.spark.rdd.ParallelCollectionPartition@70d)
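If only the number of partitions is needed, partitions.length or the convenience method getNumPartitions avoids dealing with the Partition objects directly:

rdd.getNumPartitions   // 2 for this example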
first()
Returns the first element of the RDD.
/**
* Return the first element in this RDD.
*/
def first(): T
scala> val rdd = sc.parallelize(1 to 10,2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:24
scala> rdd.first
res5: Int = 1
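Note that first() throws an exception on an empty RDD; take(1) is a safer alternative when emptiness is possible (a sketch):

val empty = sc.parallelize(Seq.empty[Int])
empty.take(1).headOption   // None instead of an exception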
fold(zeroValue: T)(op: (T, T) => T)
Aggregates the elements of each partition starting from zeroValue, then combines the per-partition results, again starting from zeroValue. zeroValue is therefore applied once per partition and once more for the final combine, so it should normally be the neutral element of op.
/**
* @param zeroValue the initial value for the accumulated result of each partition for the `op`
* operator, and also the initial value for the combine results from different
* partitions for the `op` operator - this will typically be the neutral
* element (e.g. `Nil` for list concatenation or `0` for summation)
* @param op an operator used to both accumulate results within a partition and combine results
* from different partitions
*/
def fold(zeroValue: T)(op: (T, T) => T): T
scala> val rdd = sc.parallelize(1 to 5)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[6] at parallelize at <console>:24
scala> rdd.fold(10)(_+_)
res13: Int = 35
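The result 35 is consistent with a single default partition: 1+2+3+4+5 = 15, plus 10 for that partition, plus 10 more for the final combine. In general zeroValue is applied (numPartitions + 1) times, so a non-neutral zeroValue gives partition-dependent results; a sketch with an explicit partition count:

val twoParts = sc.parallelize(1 to 5, 2)
twoParts.fold(10)(_ + _)   // 15 + 10 * (2 partitions + 1) = 45
twoParts.fold(0)(_ + _)    // 15; a neutral zeroValue is safe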