partitionBy repartitions a pair RDD; repartition uses HashPartitioner by default. You can also design a partitioning scheme suited to your own data, for example appending a random number (a salt) to heavily skewed keys so their records spread across more partitions, which deals with data skew more thoroughly.
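A rough sketch of that salting idea (not from the original post: pairRdd, the hot key name, and the bucket counts are all assumed for illustration):

import scala.util.Random
import org.apache.spark.HashPartitioner

// Hypothetical skewed pair RDD; hotKey and saltBuckets are illustrative assumptions.
val hotKey = "hot_user"
val saltBuckets = 10

val salted = pairRdd.map { case (k, v) =>
  // Only the heavily skewed key gets a random suffix; other keys keep their original hash bucket.
  val newKey = if (k == hotKey) s"${k}_${Random.nextInt(saltBuckets)}" else k
  (newKey, v)
}

// After salting, the hot key's records are spread over up to saltBuckets hash buckets.
val repartitioned = salted.partitionBy(new HashPartitioner(100))

After aggregating on the salted keys you would strip the suffix and aggregate once more to get per-original-key results.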

/**
* An object that defines how the elements in a key-value pair RDD are partitioned by key.
* Maps each key to a partition ID, from 0 to `numPartitions - 1`.
*/
abstract class Partitioner extends Serializable {
  def numPartitions: Int
  def getPartition(key: Any): Int
}
import org.apache.spark.HashPartitioner
import org.apache.spark.sql.SparkSession

// Inspect the elements of each partition of an RDD
object PartitionBy_Test {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName(this.getClass.getSimpleName).getOrCreate()
    // The literal values and partition count were lost in the original text; these are sample values.
    val rdd = spark.sparkContext.parallelize(
      Array(("a", 1), ("a", 2), ("b", 1), ("b", 3), ("c", 1), ("e", 1)), 3)
    val result = rdd.mapPartitionsWithIndex {
      (partIdx, iter) => {
        val part_map = scala.collection.mutable.Map[String, List[(String, Int)]]()
        while (iter.hasNext) {
          val part_name = "part_" + partIdx
          val elem = iter.next()
          if (part_map.contains(part_name)) {
            var elems = part_map(part_name)
            elems ::= elem
            part_map(part_name) = elems
          } else {
            part_map(part_name) = List[(String, Int)](elem)
          }
        }
        part_map.iterator
      }
    }.collect

    result.foreach(x => println(x._1 + ":" + x._2.toString()))
  }
}

The partitioner here is selectable; the default is HashPartitioner.
Note that if the RDD will be reused several times or joined against, you should persist it after repartitioning.
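For example (a minimal sketch; pairRdd, otherPairRdd, and the partition count are assumed):

import org.apache.spark.HashPartitioner
import org.apache.spark.storage.StorageLevel

// Partition once, persist, then reuse the partitioned RDD in several joins.
val partitioned = pairRdd
  .partitionBy(new HashPartitioner(8))
  .persist(StorageLevel.MEMORY_AND_DISK)

// Joins against RDDs co-partitioned with the same partitioner avoid re-shuffling `partitioned`.
val joined = partitioned.join(otherPairRdd)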

/**
* A [[org.apache.spark.Partitioner]] that implements hash-based partitioning using
* Java's `Object.hashCode`.
*
* Java arrays have hashCodes that are based on the arrays' identities rather than their contents,
* so attempting to partition an RDD[Array[_]] or RDD[(Array[_], _)] using a HashPartitioner will
* produce an unexpected or incorrect result.
*/
class HashPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")

  def numPartitions: Int = partitions

  def getPartition(key: Any): Int = key match {
    case null => 0
    case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
  }

  override def equals(other: Any): Boolean = other match {
    case h: HashPartitioner =>
      h.numPartitions == numPartitions
    case _ =>
      false
  }

  override def hashCode: Int = numPartitions
}
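The key-to-partition mapping is simply hashCode modulo numPartitions, kept non-negative. The sketch below mirrors what Utils.nonNegativeMod does, reproduced here for illustration rather than calling Spark's private Utils:

// Keeps the remainder non-negative even when hashCode is negative.
def nonNegativeMod(x: Int, mod: Int): Int = {
  val rawMod = x % mod
  rawMod + (if (rawMod < 0) mod else 0)
}

val numPartitions = 4
val partitionId = nonNegativeMod("spark".hashCode, numPartitions) // always in [0, numPartitions - 1]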

Range partitioning with RangePartitioner: it determines a sample size, draws a random sample of the keys without replacement, sorts the sampled keys to compute range bounds, and then assigns each key to a partition by range. Sampling keeps the output partitions roughly balanced and helps avoid data skew.
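Before the implementation below, a minimal usage sketch (the sample data and partition count are assumed). sortByKey builds a RangePartitioner internally, and you can also pass one to partitionBy explicitly:

import org.apache.spark.RangePartitioner

val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("c", 3), ("b", 2), ("d", 4)))
// Keys are sampled, range bounds are computed, and each key is routed to its range.
val ranged = pairs.partitionBy(new RangePartitioner(2, pairs))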

class RangePartitioner[K : Ordering : ClassTag, V](
    partitions: Int,
    rdd: RDD[_ <: Product2[K, V]],
    private var ascending: Boolean = true,
    val samplePointsPerPartitionHint: Int = 20)
  extends Partitioner {

  // A constructor declared in order to maintain backward compatibility for Java, when we add the
  // 4th constructor parameter samplePointsPerPartitionHint. See SPARK-22160.
  // This is added to make sure from a bytecode point of view, there is still a 3-arg ctor.
  def this(partitions: Int, rdd: RDD[_ <: Product2[K, V]], ascending: Boolean) = {
    this(partitions, rdd, ascending, samplePointsPerPartitionHint = 20)
  }

  // We allow partitions = 0, which happens when sorting an empty RDD under the default settings.
  require(partitions >= 0, s"Number of partitions cannot be negative but found $partitions.")
  require(samplePointsPerPartitionHint > 0,
    s"Sample points per partition must be greater than 0 but found $samplePointsPerPartitionHint")

  private var ordering = implicitly[Ordering[K]]

  // An array of upper bounds for the first (partitions - 1) partitions
  private var rangeBounds: Array[K] = {
    if (partitions <= 1) {
      Array.empty
    } else {
      // This is the sample size we need to have roughly balanced output partitions, capped at 1M.
      // Cast to double to avoid overflowing ints or longs
      val sampleSize = math.min(samplePointsPerPartitionHint.toDouble * partitions, 1e6)
      // Assume the input partitions are roughly balanced and over-sample a little bit.
      val sampleSizePerPartition = math.ceil(3.0 * sampleSize / rdd.partitions.length).toInt
      val (numItems, sketched) = RangePartitioner.sketch(rdd.map(_._1), sampleSizePerPartition)
      if (numItems == 0L) {
        Array.empty
      } else {
        // If a partition contains much more than the average number of items, we re-sample from it
        // to ensure that enough items are collected from that partition.
        val fraction = math.min(sampleSize / math.max(numItems, 1L), 1.0)
        val candidates = ArrayBuffer.empty[(K, Float)]
        val imbalancedPartitions = mutable.Set.empty[Int]
        sketched.foreach { case (idx, n, sample) =>
          if (fraction * n > sampleSizePerPartition) {
            imbalancedPartitions += idx
          } else {
            // The weight is 1 over the sampling probability.
            val weight = (n.toDouble / sample.length).toFloat
            for (key <- sample) {
              candidates += ((key, weight))
            }
          }
        }
        if (imbalancedPartitions.nonEmpty) {
          // Re-sample imbalanced partitions with the desired sampling probability.
          val imbalanced = new PartitionPruningRDD(rdd.map(_._1), imbalancedPartitions.contains)
          val seed = byteswap32(-rdd.id - 1)
          val reSampled = imbalanced.sample(withReplacement = false, fraction, seed).collect()
          val weight = (1.0 / fraction).toFloat
          candidates ++= reSampled.map(x => (x, weight))
        }
        RangePartitioner.determineBounds(candidates, math.min(partitions, candidates.size))
      }
    }
  }

  def numPartitions: Int = rangeBounds.length + 1

  private var binarySearch: ((Array[K], K) => Int) = CollectionsUtils.makeBinarySearch[K]

  def getPartition(key: Any): Int = {
    val k = key.asInstanceOf[K]
    var partition = 0
    if (rangeBounds.length <= 128) {
      // If we have less than 128 partitions naive search
      while (partition < rangeBounds.length && ordering.gt(k, rangeBounds(partition))) {
        partition += 1
      }
    } else {
      // Determine which binary search method to use only once.
      partition = binarySearch(rangeBounds, k)
      // binarySearch either returns the match location or -[insertion point]-1
      if (partition < 0) {
        partition = -partition - 1
      }
      if (partition > rangeBounds.length) {
        partition = rangeBounds.length
      }
    }
    if (ascending) {
      partition
    } else {
      rangeBounds.length - partition
    }
  }

  override def equals(other: Any): Boolean = other match {
    case r: RangePartitioner[_, _] =>
      r.rangeBounds.sameElements(rangeBounds) && r.ascending == ascending
    case _ =>
      false
  }

  override def hashCode(): Int = {
    val prime = 31
    var result = 1
    var i = 0
    while (i < rangeBounds.length) {
      result = prime * result + rangeBounds(i).hashCode
      i += 1
    }
    result = prime * result + ascending.hashCode
    result
  }

  @throws(classOf[IOException])
  private def writeObject(out: ObjectOutputStream): Unit = Utils.tryOrIOException {
    val sfactory = SparkEnv.get.serializer
    sfactory match {
      case js: JavaSerializer => out.defaultWriteObject()
      case _ =>
        out.writeBoolean(ascending)
        out.writeObject(ordering)
        out.writeObject(binarySearch)

        val ser = sfactory.newInstance()
        Utils.serializeViaNestedStream(out, ser) { stream =>
          stream.writeObject(scala.reflect.classTag[Array[K]])
          stream.writeObject(rangeBounds)
        }
    }
  }

  @throws(classOf[IOException])
  private def readObject(in: ObjectInputStream): Unit = Utils.tryOrIOException {
    val sfactory = SparkEnv.get.serializer
    sfactory match {
      case js: JavaSerializer => in.defaultReadObject()
      case _ =>
        ascending = in.readBoolean()
        ordering = in.readObject().asInstanceOf[Ordering[K]]
        binarySearch = in.readObject().asInstanceOf[(Array[K], K) => Int]

        val ser = sfactory.newInstance()
        Utils.deserializeViaNestedStream(in, ser) { ds =>
          implicit val classTag = ds.readObject[ClassTag[Array[K]]]()
          rangeBounds = ds.readObject[Array[K]]()
        }
    }
  }
}

Custom partitioner: tailor the partitioning to your own data to mitigate skew.
To implement a custom partitioner, extend org.apache.spark.Partitioner and implement the following three methods:

  • numPartitions: Int: returns the number of partitions to create.
  • getPartition(key: Any): Int: returns the partition ID (0 to numPartitions - 1) for a given key.
  • equals(): standard equality, so Spark can tell whether two partitioner instances partition data the same way.
// Custom partitioner class: must extend Partitioner
class UsridPartitioner(numParts: Int) extends Partitioner {
  // Override the number of partitions
  override def numPartitions: Int = numParts

  // Override the partition-ID lookup.
  // The modulo operands were lost in the original text; numParts is used here as an illustrative choice.
  override def getPartition(key: Any): Int = {
    if (key.toString == "A")
      0                               // send the special key "A" to a fixed partition
    else
      key.toString.toInt % numParts   // numeric keys spread across the partitions
  }
}
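A quick usage sketch (the sample data and partition count are assumed; keys other than "A" are expected to be numeric ids stored as strings):

val users = spark.sparkContext.parallelize(Seq(("1001", "click"), ("1002", "view"), ("A", "admin")))
val byUser = users.partitionBy(new UsridPartitioner(10))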
