RDD Operators

# Common transformations (lazy: they are not evaluated until an action runs)
# Create an RDD by parallelizing a Scala collection
val rdd1 = sc.parallelize(Array(1,2,3,4,5,6,7,8))
# Check the number of partitions of this RDD
rdd1.partitions.length
# map, sortBy, filter
val rdd1 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10))
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x,true)
val rdd3 = rdd2.filter(_>10)
# Sorting on the string form compares lexicographically ("10" sorts before "4")
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x+"",true)
val rdd2 = sc.parallelize(List(5,6,4,7,3,8,2,9,1,10)).map(_*2).sortBy(x=>x.toString,true)
# flatMap
val rdd4 = sc.parallelize(Array("a b c", "d e f", "h i j"))
rdd4.flatMap(_.split(' ')).collect
# For nested collections, flatMap twice: List("a b c", "a b b") flattens to List("a","b","c","a","b","b")
val rdd5 = sc.parallelize(List(List("a b c", "a b b"), List("e f g", "a f g"), List("h i j", "a a b")))
rdd5.flatMap(_.flatMap(_.split(" "))).collect
# union: combines two RDDs; the element types must match
val rdd6 = sc.parallelize(List(5,6,4,7))
val rdd7 = sc.parallelize(List(1,2,3,4))
val rdd8 = rdd6.union(rdd7)
rdd8.distinct.sortBy(x=>x).collect
# intersection: returns the elements common to both RDDs
val rdd9 = rdd6.intersection(rdd7)
# join
val rdd1 = sc.parallelize(List(("tom", 1), ("jerry", 2), ("kitty", 3)))
val rdd2 = sc.parallelize(List(("jerry", 9), ("tom", 8), ("shuke", 7), ("tom", 2)))
val rdd3 = rdd1.join(rdd2)
val rdd3 = rdd1.leftOuterJoin(rdd2)
val rdd3 = rdd1.rightOuterJoin(rdd2)
# groupByKey
val rdd3 = rdd1 union rdd2
rdd3.groupByKey
// Array((tom,CompactBuffer(1, 8, 2)), (jerry,CompactBuffer(9, 2)), (shuke,CompactBuffer(7)), (kitty,CompactBuffer(3)))
// Sum the grouped values for each key:
rdd3.groupByKey.map(x=>(x._1, x._2.sum))
rdd3.groupByKey.mapValues(_.sum).collect
# WordCount
sc.textFile("/root/words.txt").flatMap(x=>x.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,false).collect
sc.textFile("/root/words.txt").flatMap(x=>x.split(" ")).map((_,1)).groupByKey.map(t=>(t._1, t._2.sum)).collect
# cogroup
val rdd1 = sc.parallelize(List(("tom", 1), ("tom", 2), ("jerry", 3), ("kitty", 2)))
val rdd2 = sc.parallelize(List(("jerry", 2), ("tom", 1), ("shuke", 2)))
val rdd3 = rdd1.cogroup(rdd2)
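// cogroup groups the values from both RDDs by key into a pair of iterables;
// allowing for arbitrary ordering, rdd3 should look roughly like:
// Array((tom,(CompactBuffer(1, 2),CompactBuffer(1))), (jerry,(CompactBuffer(3),CompactBuffer(2))),
//       (shuke,(CompactBuffer(),CompactBuffer(2))), (kitty,(CompactBuffer(2),CompactBuffer())))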
val rdd4 = rdd3.map(t=>(t._1, t._2._1.sum + t._2._2.sum))
# cartesian (Cartesian product)
val rdd1 = sc.parallelize(List("tom", "jerry"))
val rdd2 = sc.parallelize(List("tom", "kitty", "shuke"))
val rdd3 = rdd1.cartesian(rdd2)
###################################################################################################
# Spark actions
val rdd1 = sc.parallelize(List(1,2,3,4,5), 2)
# collect
rdd1.collect
# reduce
val r = rdd1.reduce(_+_)
# count
rdd1.count
# top
rdd1.top(2)
# take
rdd1.take(2)
# first (similar to take(1))
rdd1.first
# takeOrdered
rdd1.takeOrdered(3)
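For reference, the three ordering-sensitive actions above differ as follows; with rdd1 = List(1,2,3,4,5) the expected results are:
rdd1.top(2)         // Array(5, 4)    -- the n largest elements, descending
rdd1.take(2)        // Array(1, 2)    -- the first n elements in partition order
rdd1.takeOrdered(3) // Array(1, 2, 3) -- the n smallest elements, ascending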

Spark RDD API

http://homepage.cs.latrobe.edu.au/zhe/ZhenHeSparkRDDAPIExamples.html

mapPartitionsWithIndex
val func = (index: Int, iter: Iterator[Int]) => {
  iter.map(x => "[partID:" + index + ", val: " + x + "]")
}
val rdd1 = sc.parallelize(List(1,2,3,4,5,6,7,8,9), 2)
rdd1.mapPartitionsWithIndex(func).collect
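With 9 elements split across 2 partitions, the call above should show elements 1-4 in partition 0 and 5-9 in partition 1, roughly:
// Array([partID:0, val: 1], [partID:0, val: 2], [partID:0, val: 3], [partID:0, val: 4],
//       [partID:1, val: 5], [partID:1, val: 6], [partID:1, val: 7], [partID:1, val: 8], [partID:1, val: 9])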
-------------------------------------------------------------------------------------------
aggregate
def func1(index: Int, iter: Iterator[Int]): Iterator[String] = {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
val rdd1 = sc.parallelize(List(1,2,3,4,5,6,7,8,9), 2)
rdd1.mapPartitionsWithIndex(func1).collect
rdd1.aggregate(0)(math.max(_, _), _ + _)
rdd1.aggregate(5)(math.max(_, _), _ + _)
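Why the two calls differ: aggregate applies the zero value once in each partition's seqOp and once more in the final combOp. A sketch of the arithmetic, assuming the partition layout shown by mapPartitionsWithIndex above:
// partitions: p0 = (1,2,3,4), p1 = (5,6,7,8,9)
// aggregate(0): combOp(0, max(p0), max(p1)) = 0 + 4 + 9 = 13
// aggregate(5): combOp(5, max(5,p0), max(5,p1)) = 5 + 5 + 9 = 19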
val rdd2 = sc.parallelize(List("a","b","c","d","e","f"), 2)
def func2(index: Int, iter: Iterator[String]): Iterator[String] = {
  iter.toList.map(x => "[partID:" + index + ", val: " + x + "]").iterator
}
rdd2.aggregate("")(_ + _, _ + _)
rdd2.aggregate("=")(_ + _, _ + _)
val rdd3 = sc.parallelize(List("12","23","345","4567"), 2)
rdd3.aggregate("")((x,y) => math.max(x.length, y.length).toString, (x,y) => x + y)
val rdd4 = sc.parallelize(List("12","23","345",""), 2)
rdd4.aggregate("")((x,y) => math.min(x.length, y.length).toString, (x,y) => x + y)
val rdd5 = sc.parallelize(List("12","23","","345"), 2)
rdd5.aggregate("")((x,y) => math.min(x.length, y.length).toString, (x,y) => x + y)
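These string examples are order-sensitive: partition results can reach the driver in either order, so several of the calls are nondeterministic. Expected outcomes, assuming "a","b","c" land in partition 0 and "d","e","f" in partition 1:
// rdd2.aggregate("")(_ + _, _ + _)  => "abcdef" or "defabc"
// rdd2.aggregate("=")(_ + _, _ + _) => "==abc=def" or "==def=abc"
// rdd4: the zero "" has length 0, so each partition's first min is 0;
//       the two partitions reduce to "1" and "0", giving "10" or "01"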
-------------------------------------------------------------------------------------------
aggregateByKey
val pairRDD = sc.parallelize(List(("cat",2), ("cat", 5), ("mouse", 4), ("cat", 12), ("dog", 12), ("mouse", 2)), 2)
def func2(index: Int, iter: Iterator[(String, Int)]): Iterator[String] = {
  iter.map(x => "[partID:" + index + ", val: " + x + "]")
}
pairRDD.mapPartitionsWithIndex(func2).collect
pairRDD.aggregateByKey(0)(math.max(_, _), _ + _).collect
pairRDD.aggregateByKey(100)(math.max(_, _), _ + _).collect
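Unlike aggregate, the zero value in aggregateByKey is applied once per key per partition and not again when partition results are merged. Assuming the 2-partition layout shown by mapPartitionsWithIndex, the expected results are:
// aggregateByKey(0):   Array((dog,12), (cat,17), (mouse,6))
// aggregateByKey(100): Array((dog,100), (cat,200), (mouse,200))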
-------------------------------------------------------------------------------------------
checkpoint
sc.setCheckpointDir("hdfs://node-1.edu360.cn:9000/ck")
val rdd = sc.textFile("hdfs://node-1.edu360.cn:9000/wc").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_)
rdd.checkpoint
rdd.isCheckpointed
rdd.count
rdd.isCheckpointed
rdd.getCheckpointFile
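checkpoint is lazy: nothing is materialized until the first action runs, which is why the two isCheckpointed calls bracket the count:
// before rdd.count: isCheckpointed == false
// rdd.count triggers the job and writes the RDD to hdfs://node-1.edu360.cn:9000/ck
// afterwards: isCheckpointed == true, and getCheckpointFile returns Some(...) with the HDFS path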
-------------------------------------------------------------------------------------------
coalesce, repartition
val rdd1 = sc.parallelize(1 to 10, 10)
val rdd2 = rdd1.coalesce(2, false)
rdd2.partitions.length
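coalesce(2, false) narrows 10 partitions down to 2 without a shuffle, so it cannot increase the partition count; repartition is simply coalesce with shuffle enabled. A small sketch (rdd3 and rdd4 are illustrative names):
val rdd3 = rdd1.coalesce(20, false)
rdd3.partitions.length           // still 10 -- cannot grow without a shuffle
val rdd4 = rdd1.repartition(20)  // same as coalesce(20, shuffle = true)
rdd4.partitions.length           // 20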
-------------------------------------------------------------------------------------------
collectAsMap
val rdd = sc.parallelize(List(("a", 1), ("b", 2)))
rdd.collectAsMap
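collectAsMap pulls the pairs back to the driver as a Map; note that duplicate keys would collapse to a single entry:
// Map(b -> 2, a -> 1)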
-------------------------------------------------------------------------------------------
combineByKey
val rdd1 = sc.textFile("hdfs://node-1.edu360.cn:9000/wc").flatMap(_.split(" ")).map((_, 1))
val rdd2 = rdd1.combineByKey(x => x, (a: Int, b: Int) => a + b, (m: Int, n: Int) => m + n)
rdd2.collect
val rdd3 = rdd1.combineByKey(x => x + 10, (a: Int, b: Int) => a + b, (m: Int, n: Int) => m + n)
rdd3.collect
val rdd4 = sc.parallelize(List("dog","cat","gnu","salmon","rabbit","turkey","wolf","bear","bee"), 3)
val rdd5 = sc.parallelize(List(1,1,2,2,2,1,2,2,2), 3)
val rdd6 = rdd5.zip(rdd4)
val rdd7 = rdd6.combineByKey(List(_), (x: List[String], y: String) => x :+ y, (m: List[String], n: List[String]) => m ++ n)
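The three arguments of combineByKey, annotated for this grouping-into-lists example:
// createCombiner: List(_)  -- the first value of a key in a partition starts a List
// mergeValue:     x :+ y   -- later values of the same key in that partition are appended
// mergeCombiners: m ++ n   -- per-partition lists for the same key are concatenated
rdd7.collect
// e.g. Array((1,List(dog, cat, turkey)), (2,List(gnu, salmon, rabbit, wolf, bear, bee)))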
-------------------------------------------------------------------------------------------
countByKey
val rdd1 = sc.parallelize(List(("a", 1), ("b", 2), ("b", 2), ("c", 2), ("c", 1)))
rdd1.countByKey
rdd1.countByValue
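Expected results: countByKey counts occurrences of each key, while countByValue counts each whole pair as a value:
// countByKey:   Map(a -> 1, b -> 2, c -> 2)
// countByValue: Map((a,1) -> 1, (b,2) -> 2, (c,2) -> 1, (c,1) -> 1)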
-------------------------------------------------------------------------------------------
filterByRange
val rdd1 = sc.parallelize(List(("e", 5), ("c", 3), ("d", 4), ("c", 2), ("a", 1)))
val rdd2 = rdd1.filterByRange("b", "d")
rdd2.collect
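filterByRange keeps the pairs whose key falls in the inclusive range ["b", "d"], so the expected result is:
// Array((c,3), (d,4), (c,2))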
-------------------------------------------------------------------------------------------
flatMapValues
val rdd3 = sc.parallelize(List(("a", "1 2"), ("b", "3 4")))
rdd3.flatMapValues(_.split(" ")).collect
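flatMapValues splits each value and re-pairs every piece with its original key:
// Array((a,1), (a,2), (b,3), (b,4))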
-------------------------------------------------------------------------------------------
foldByKey
val rdd1 = sc.parallelize(List("dog", "wolf", "cat", "bear"), 2)
val rdd2 = rdd1.map(x => (x.length, x))
val rdd3 = rdd2.foldByKey("")(_+_)
val rdd = sc.textFile("hdfs://node-1.edu360.cn:9000/wc").flatMap(_.split(" ")).map((_, 1))
rdd.foldByKey(0)(_+_)
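foldByKey is reduceByKey with an explicit zero value ("" for string concatenation, 0 for sums). For rdd3 the words are grouped by length and concatenated; concatenation order within a key may vary:
// rdd3: e.g. Array((4,wolfbear), (3,dogcat))
// rdd.foldByKey(0)(_+_) is equivalent to reduceByKey(_+_) for word counting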
-------------------------------------------------------------------------------------------
foreachPartition
val rdd1 = sc.parallelize(List(1, 2, 3, 4, 5, 6, 7, 8, 9), 3)
rdd1.foreachPartition(x => println(x.reduce(_ + _)))
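foreachPartition runs the function once per partition on the executors, so the println output appears in executor stdout (or in the console in local mode). With 9 elements in 3 partitions the partition sums are:
// 6   (1+2+3)
// 15  (4+5+6)
// 24  (7+8+9)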
-------------------------------------------------------------------------------------------
keyBy
val rdd1 = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val rdd2 = rdd1.keyBy(_.length)
rdd2.collect
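keyBy turns each element x into the pair (f(x), x):
// Array((3,dog), (6,salmon), (6,salmon), (3,rat), (8,elephant))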
-------------------------------------------------------------------------------------------
keys, values
val rdd1 = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val rdd2 = rdd1.map(x => (x.length, x))
rdd2.keys.collect
rdd2.values.collect
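keys and values simply project the two sides of each pair:
// rdd2.keys.collect:   Array(3, 5, 4, 3, 7, 5)
// rdd2.values.collect: Array(dog, tiger, lion, cat, panther, eagle)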
-------------------------------------------------------------------------------------------
mapPartitions
val rdd1 = sc.parallelize(List(1, 2, 3, 4, 5, 6), 2)
rdd1.mapPartitions(it => it.map(x => x * 10)).collect
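Unlike map, the function here receives a whole partition's iterator, so per-partition setup cost can be paid once rather than per element. A minimal sketch of that pattern:
rdd1.mapPartitions { it =>
  val factor = 10      // stand-in for expensive per-partition setup (e.g. opening a connection)
  it.map(_ * factor)
}.collect              // Array(10, 20, 30, 40, 50, 60)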

  
