import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.VoidFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

/**
 * The reduceByKey(func, [numTasks]) operator:
 * aggregates the values of each key, combining them pairwise with func.
 * The optional numTasks argument sets the parallelism of the reduce stage.
 * Semantically, reduceByKey = groupByKey + reduce (see the sketch after the
 * class below), but reduceByKey also combines values on the map side before
 * the shuffle, so far less data crosses the network.
 */
public class ReduceByKeyOperator {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("reduceByKey");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Tuple2<String, Integer>> list = Arrays.asList(
new Tuple2<String,Integer>("w1",1),
new Tuple2<String,Integer>("w2",2),
new Tuple2<String,Integer>("w3",3),
new Tuple2<String,Integer>("w2",22),
new Tuple2<String,Integer>("w1",11)
); JavaPairRDD<String,Integer> pairRdd = sc.parallelizePairs(list); JavaPairRDD<String,Integer> result = pairRdd.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        }, 2);

        // Print each aggregated (key, sum) pair.
        result.foreach(new VoidFunction<Tuple2<String, Integer>>() {
            @Override
            public void call(Tuple2<String, Integer> tuple) throws Exception {
                System.err.println(tuple._1 + ":" + tuple._2);
            }
        });

        sc.close();
    }
}
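
To make the "reduceByKey = groupByKey + reduce" equivalence above concrete, here is a minimal sketch of the same aggregation written as groupByKey followed by a per-key reduce. It assumes Java 8 lambdas against Spark's Java API, and the class name GroupByKeyPlusReduce is only for illustration. The output matches the reduceByKey version, but note the cost: every value is shuffled across the network, instead of being pre-combined on each partition first.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class GroupByKeyPlusReduce {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("groupByKeyPlusReduce");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<String, Integer> pairRdd = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("w1", 1),
                new Tuple2<>("w2", 2),
                new Tuple2<>("w3", 3),
                new Tuple2<>("w2", 22),
                new Tuple2<>("w1", 11)
        ));

        // groupByKey gathers all values of a key into one Iterable,
        // then mapValues performs the "reduce" step by summing them.
        JavaPairRDD<String, Integer> result = pairRdd
                .groupByKey()
                .mapValues(values -> {
                    int sum = 0;
                    for (Integer v : values) {
                        sum += v;
                    }
                    return sum;
                });

        result.foreach(t -> System.err.println(t._1 + ":" + t._2));

        sc.close();
    }
}

With the sample data, both versions print w1:12, w2:24 and w3:3 (the order across partitions may vary).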
