Both repartition and partitionBy reshuffle the data of an RDD, and both use HashPartitioner by default. The difference is that partitionBy is only available on a PairRDD; yet even when both are applied to the same PairRDD, the results are not the same.
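A minimal sketch of the comparison (assuming an existing SparkContext named sc; the sample data and partition counts are purely illustrative) prints which partition each record lands in under the two operations:

import org.apache.spark.HashPartitioner

// Illustrative data: two keys, four records, two initial partitions.
val pairs = sc.parallelize(Seq((1, "a"), (2, "b"), (1, "c"), (2, "d")), 2)

// repartition: records that share a key may land in different partitions
pairs.repartition(2)
  .mapPartitionsWithIndex((idx, it) => it.map(kv => (idx, kv)))
  .collect().foreach(println)

// partitionBy: every record with the same key goes to the partition HashPartitioner picks for that key
pairs.partitionBy(new HashPartitioner(2))
  .mapPartitionsWithIndex((idx, it) => it.map(kv => (idx, kv)))
  .collect().foreach(println)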

It is not hard to see that partitionBy produces the result we would expect (records that share a key end up in the same partition), so let's open the source of repartition to see why it behaves differently:

/**
* Return a new RDD that has exactly numPartitions partitions.
*
* Can increase or decrease the level of parallelism in this RDD. Internally, this uses
* a shuffle to redistribute data.
*
* If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
* which can avoid performing a shuffle.
*
* TODO Fix the Shuffle+Repartition data loss issue described in SPARK-23207.
*/
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
coalesce(numPartitions, shuffle = true)
}

/**
* Return a new RDD that is reduced into `numPartitions` partitions.
*
* This results in a narrow dependency, e.g. if you go from 1000 partitions
* to 100 partitions, there will not be a shuffle, instead each of the 100
* new partitions will claim 10 of the current partitions. If a larger number
* of partitions is requested, it will stay at the current number of partitions.
*
* However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
* this may result in your computation taking place on fewer nodes than
* you like (e.g. one node in the case of numPartitions = 1). To avoid this,
* you can pass shuffle = true. This will add a shuffle step, but means the
* current upstream partitions will be executed in parallel (per whatever
* the current partitioning is).
*
* @note With shuffle = true, you can actually coalesce to a larger number
* of partitions. This is useful if you have a small number of partitions,
* say 100, potentially with a few partitions being abnormally large. Calling
* coalesce(1000, shuffle = true) will result in 1000 partitions with the
* data distributed using a hash partitioner. The optional partition coalescer
* passed in must be serializable.
*/
def coalesce(numPartitions: Int, shuffle: Boolean = false,
partitionCoalescer: Option[PartitionCoalescer] = Option.empty)
(implicit ord: Ordering[T] = null)
: RDD[T] = withScope {
require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")
if (shuffle) {
/** Distributes elements evenly across output partitions, starting from a random partition. */
val distributePartition = (index: Int, items: Iterator[T]) => {
var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
items.map { t =>
// Note that the hash code of the key will just be the key itself. The HashPartitioner
// will mod it with the number of total partitions.
position = position + 1
(position, t)
}
} : Iterator[(Int, T)]

// include a shuffle step so that our upstream tasks are still distributed
new CoalescedRDD(
new ShuffledRDD[Int, T, T](mapPartitionsWithIndex(distributePartition),
new HashPartitioner(numPartitions)),
numPartitions,
partitionCoalescer).values
} else {
new CoalescedRDD(this, numPartitions, partitionCoalescer)
}
}
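
Note the `.values` at the end of the shuffle branch: it maps the synthetic Int keys away again, and since `map` does not preserve partitioning, the RDD returned by repartition carries no partitioner at all, whereas the one returned by partitionBy does. A quick check (reusing the illustrative `pairs` RDD and the HashPartitioner import from the first sketch):

println(pairs.repartition(2).partitioner)                      // None
println(pairs.partitionBy(new HashPartitioner(2)).partitioner) // Some(org.apache.spark.HashPartitioner@...)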

Even on a PairRDD, repartition never looks at the records' own keys: it shuffles on a randomly seeded counter that it generates itself, not on the original key!
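
A driver-side sketch of that key generation (plain Scala, not the Spark API; the partition index, numPartitions and the records are illustrative) makes this concrete: the keys handed to HashPartitioner are just a counter seeded from the partition index, and the records' own keys never enter the calculation.

import scala.util.Random
import scala.util.hashing

// Driver-side sketch of distributePartition; all values are illustrative.
val numPartitions = 3
val index = 0                                      // the input partition's index
val records = Seq(("a", 1), ("a", 2), ("b", 3))    // that partition's (key, value) records

var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
records.foreach { t =>
  position = position + 1
  // (position, t) is what gets shuffled; HashPartitioner only ever sees `position`
  println(s"shuffle key: $position -> target partition ${position % numPartitions}, record: $t")
}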
