1 Introduction

Shuffle, in short, means repartitioning the data, which involves a large amount of network IO and disk IO. Why is a shuffle needed? Take the reduceByKey step of a word-frequency count as an example:

serverA, partition1: (hello, 1), (word, 1)
serverB, partition2: (hello, 2)

After the shuffle:

serverA, partition1: (hello, 1), (hello, 2)
serverB, partition2: (word, 1)

Only then can the final result be obtained: (hello, 3) …
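A minimal runnable sketch of this word count (the local master, app name, sample data, and two-partition layout are illustrative, not from the post):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("wordcount-shuffle").setMaster("local[2]"))

// Two initial partitions, playing the role of serverA and serverB above.
val pairs = sc.parallelize(Seq(("hello", 1), ("word", 1), ("hello", 2)), 2)

// reduceByKey must move all records with the same key to the same partition;
// that repartitioning is the shuffle (network IO + disk IO) described above.
val counts = pairs.reduceByKey(_ + _)

counts.collect().foreach(println) // e.g. (hello,3), (word,1)
```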
1 Introduction

The core of Spark is the RDD. Official documentation: https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds. The official description is quoted below; the key points are that an RDD is fault-tolerant and can be processed in parallel.

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. …
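To make "fault-tolerant, parallel" concrete, here is a minimal sketch of the two documented ways to create an RDD (the local master and the HDFS path are hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("rdd-intro").setMaster("local[2]"))

// 1) Parallelize an existing driver-side collection.
val nums = sc.parallelize(1 to 100)

// 2) Reference a dataset in external storage (path is hypothetical).
val lines = sc.textFile("hdfs:///tmp/input.txt")

// Transformations run in parallel across partitions; a lost partition can be
// recomputed from the lineage, which is where the fault tolerance comes from.
println(nums.map(_ * 2).sum())
```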
Azkaban 3.45

1 Introduction

1.1 Official site

https://azkaban.github.io/

Azkaban was implemented at LinkedIn to solve the problem of Hadoop job dependencies. We had jobs that needed to run in order, from ETL jobs to data analytics products. Initially a single server solution …
spark 2.1.1

When Spark initializes an RDD, it needs to read files, usually HDFS files. When reading, a minimum number of partitions can be specified, but this is only a suggestion: the actual number may be larger (e.g., when the files are very numerous or very large) or smaller (e.g., when there is only one small file). If no minimum is specified, how is the default number of partitions of the resulting RDD decided? Take SparkContext.textFile as an example and look at the code:

org.apache.spark.SparkContext

/**
 * Re…
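As a usage sketch before reading the code (spark-shell style, where sc already exists; the path is hypothetical): in 2.1.1 the default is defaultMinPartitions = math.min(defaultParallelism, 2), and the final count still comes from the Hadoop InputFormat's splits.

```scala
// Default: minPartitions = defaultMinPartitions = math.min(defaultParallelism, 2)
val rdd1 = sc.textFile("hdfs:///data/logs")

// Suggest at least 10 partitions; the InputFormat may still return more or fewer.
val rdd2 = sc.textFile("hdfs:///data/logs", minPartitions = 10)

println(rdd1.getNumPartitions)
println(rdd2.getNumPartitions)
```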
spark 2.1.1

In Spark, distributed data can be sorted with RDD.sortBy. How is this actually implemented? Let's look at the code:

org.apache.spark.rdd.RDD

/**
 * Return this RDD sorted by the given key function.
 */
def sortBy[K](
    f: (T) => K,
    ascending: Boolean = true,
    numPartitions: Int = this.partitions.length)
    (implic…
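A quick usage sketch before diving in (spark-shell style; the data is made up). Internally sortBy is keyBy(f).sortByKey(ascending, numPartitions).values, so it range-partitions by the derived key and yields a total order across partitions:

```scala
val scores = sc.parallelize(Seq(("b", 2), ("a", 1), ("c", 3)), 3)

// Sort by the second field, descending; numPartitions defaults to
// this.partitions.length, as in the signature above.
val sorted = scores.sortBy(_._2, ascending = false)

sorted.collect().foreach(println) // (c,3), (b,2), (a,1)
```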
There are two kinds of join in Spark: the RDD join and the SQL join. Let's look at each in turn.

1 RDD join

org.apache.spark.rdd.PairRDDFunctions

/**
 * Return an RDD containing all pairs of elements with matching keys in `this` and `other`. Each
 * pair of elements will be returned as a (k, (v1, v2)) t…
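A short sketch contrasting the two (spark-shell style, where sc and spark already exist; keys and values are made up):

```scala
// 1) RDD join via PairRDDFunctions: inner join on the key, yielding (k, (v1, v2)).
val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
val right = sc.parallelize(Seq((1, "x"), (3, "y")))
left.join(right).collect().foreach(println) // (1,(a,x))

// 2) SQL/DataFrame join: planned by Catalyst, which may choose e.g. a
// broadcast or sort-merge join rather than a plain shuffled hash join.
import spark.implicits._
val dfL = left.toDF("id", "l")
val dfR = right.toDF("id", "r")
dfL.join(dfR, "id").show()
```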
spark 2.1.1

1 Launch command

The command to start the Spark thrift server:

$SPARK_HOME/sbin/start-thriftserver.sh

This in turn executes:

org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

2 Launch process and code analysis

For the hive thrift code, see: https://www.cnblogs.com/barneywill/p/101…
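Besides the script, the same server can also be started programmatically from an existing session; a minimal sketch using the public startWithContext entry point (the app name is illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

val spark = SparkSession.builder()
  .appName("thrift-demo")
  .enableHiveSupport()
  .getOrCreate()

// Starts a HiveServer2-compatible thrift endpoint on top of this session,
// which is what the SparkSubmit + HiveThriftServer2 main() path sets up.
HiveThriftServer2.startWithContext(spark.sqlContext)
```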
To bring computation results back to the driver, Spark offers two methods: collect and take. What is the difference between them? Let's look at the code:

org.apache.spark.rdd.RDD

/**
 * Return an array that contains all of the elements in this RDD.
 *
 * @note This method should only be used if the resulting array is expected to be small, as
 * all the data is loaded into the driver's memory. …
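A sketch of the behavioral difference (spark-shell style; the sizes are made up):

```scala
val nums = sc.parallelize(1 to 1000000, 100)

// collect() pulls every partition back to the driver, hence the note above
// that the resulting array must be small.
val everything = nums.collect()

// take(n) scans one partition first and only scans more if it has not yet
// found n elements, so it usually touches a small fraction of the RDD.
val firstTen = nums.take(10)
```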
spark 2.1.1

Recently a Spark job (spark on yarn) failed with the following error:

Diagnostics: Container [pid=5901,containerID=container_1542879939729_30802_01_000001] is running beyond physical memory limits. Current usage: 11.0 GB of 11 GB physical memory used; 12.2 GB of 23.1 GB virtual…
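One common cause (an assumption here, not necessarily this post's conclusion) is that the YARN container limit is the heap plus memoryOverhead, and the 2.1.x default overhead of max(384 MB, 10% of the heap) is too small for the job's off-heap usage. A hedged sketch of raising it:

```scala
import org.apache.spark.SparkConf

// Spark 2.1.x config keys; the values are in MB and chosen purely for illustration.
val conf = new SparkConf()
  .set("spark.yarn.driver.memoryOverhead", "2048")
  .set("spark.yarn.executor.memoryOverhead", "2048")
```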
Spark 2.1.1

1 Spark Submit local analysis

1.1 Observed behavior

Submit command:

spark-submit --master local[10] --driver-memory 30g --class app.package.AppClass app-1.0.jar

Process:

hadoop 225653 0.0 0.0 11256 364 ? S Aug24 0:00 bash /$spark-dir/bin/spark-class org.apache.spark.deploy.SparkS…