Requirement:

There are many node ids spread across three different dfs groups. We need to work out how the node ids of the different dfs groups relate to one another, for example which nodes exist in one dfs but not in another.

Approach:

I. If all you need is to count the node ids in a dfs, Scala (Spark) is enough; the code is as follows:

1. Prepare the raw data

Store the raw data in a text file named dfs_id (edited with Sublime in this post), in the following format:

2415,2416,2417,2418,2419,2421,2422,2423,2424,2425,2426,2427,2428,2429,2430,2431,2432,2433,2434,2435,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2541,2542,2544,2724,3124,3126,2605,2606,3133,3194,3272,3271,3273,3274,3302,3313,3314,3652,3654,3657,3944

2. Counting code

1. Start spark-shell from the [spark]/bin directory:
spark-shell --master=local
2. Read the source data file and create an RDD:

scala> var rdd = sc.textFile("/Users/wooluwalker/Desktop/dfs_id")
rdd: org.apache.spark.rdd.RDD[String] = /Users/wooluwalker/Desktop/dfs_id MapPartitionsRDD[2] at textFile at <console>:24

3. Count the number of ids:

scala> rdd.flatMap(x=>x.split(',')).count()
res3: Long = 56

3. The code above only counts ids; it cannot be used to compare ids across different dfs groups. To do that, Hive is recommended.

II. Use Hive to gather the ids of the different dfs groups into one table

1. Create tables tb_dfsid_137_confluence_task_id_split and tb_dfsid_3_confluence_task_id_split with the same structure, a single column:
line string
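The post does not show the exact DDL; as a minimal sketch (only the table names and the line string column come from the text above, everything else is assumed default managed-table settings), the tables could be created like this:

-- hypothetical DDL sketch; the post only states the single column `line string`
create table if not exists tb_dfsid_137_confluence_task_id_split (
  line string
);
create table if not exists tb_dfsid_3_confluence_task_id_split (
  line string
);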
2. Load the corresponding data file into each table:
load data local inpath '/Users/wooluwalker/Desktop/dfs_3_taskid' into table tb_dfsid_3_confluence_task_id_split;
load data local inpath '/Users/wooluwalker/Desktop/dfs_137_taskid' into table tb_dfsid_137_confluence_task_id_split;

3. Split the single comma-separated row of tb_dfsid_137_confluence_task_id_split and tb_dfsid_3_confluence_task_id_split into one id per row, and store the results in tb_dfsid_137_confluence_task_id_split_vertical and tb_dfsid_3_confluence_task_id_split_vertical:
create table if not exists tb_dfsid_137_confluence_task_id_split_vertical
as
select * from (
select explode(split(line,",")) as dfsid137 from tb_dfsid_137_confluence_task_id_split
) tmp;

create table if not exists tb_dfsid_3_confluence_task_id_split_vertical
as
select * from (
select explode(split(line,",")) as dfsid3 from tb_dfsid_3_confluence_task_id_split
) tmp;

4. Create table tb_dfsid_3_137_confluence_task_id_compare to aggregate and compare the nodes of dfs_3 and dfs_137:
create table if not exists tb_dfsid_3_137_confluence_task_id_compare as
select dfsid3, dfsid137
from
  tb_dfsid_3_confluence_task_id_split_vertical id3
full outer join
  tb_dfsid_137_confluence_task_id_split_vertical id137
on id3.dfsid3 = id137.dfsid137;
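Because the compare table is built from a full outer join, the original question (which nodes exist in one dfs but not in the other) reduces to checking which side is NULL; a sketch against the table created above:

-- ids present in dfs_137 but not in dfs_3
select dfsid137 from tb_dfsid_3_137_confluence_task_id_compare where dfsid3 is null;
-- ids present in dfs_3 but not in dfs_137
select dfsid3 from tb_dfsid_3_137_confluence_task_id_compare where dfsid137 is null;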

The node comparison between dfs_3 and dfs_137 looks like this (partial):

In the same way, a comparison across dfs_3, dfs_137, and dfs_148 can be obtained:

Notes:

- The subquery must be given a table alias (tmp above)!

- The exploded field in the subquery must be given a column alias (as dfsid137); otherwise the table created from it, tb_dfsid_137_confluence_task_id_split_vertical, ends up with a column named col, which is inconvenient for later queries (see the small example below).
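A quick way to see the alias point, using a small made-up literal list in the same style as the debugging queries in the original code further below:

-- without an alias the exploded column is named col
select count(distinct col) from (select explode(split('1,2,3,2',','))) tmp;
-- with an alias the column gets a usable name
select count(distinct dfsid) from (select explode(split('1,2,3,2',',')) as dfsid) tmp;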

The original working code is as follows:

1. Create tables tb_dfsid_137_confluence_task_id_split and tb_dfsid_3_confluence_task_id_split with the same structure:
line string

2. Load the corresponding data into each table:
load data local inpath '/Users/wooluwalker/Desktop/dfs_3_taskid' into table tb_dfsid_3_confluence_task_id_split;
load data local inpath '/Users/wooluwalker/Desktop/dfs_137_taskid' into table tb_dfsid_137_confluence_task_id_split;

3. Split the single comma-separated row of tb_dfsid_137_confluence_task_id_split and tb_dfsid_3_confluence_task_id_split into one id per row, stored in tb_dfsid_137_confluence_task_id_split_vertical and tb_dfsid_3_confluence_task_id_split_vertical:
create table if not exists tb_dfsid_137_confluence_task_id_split_vertical
as
select * from
(
select explode(split(line,",")) from tb_dfsid_137_confluence_task_id_split
) tmp;

create table if not exists tb_dfsid_3_confluence_task_id_split_vertical
as
select * from
(
select explode(split(line,",")) from tb_dfsid_3_confluence_task_id_split
) tmp;

4. Create table tb_dfsid_3_137_confluence_task_id_compare to aggregate and compare the nodes of dfs_3 and dfs_137:
create table if not exists tb_dfsid_3_137_confluence_task_id_compare
select dfsid3, dfsid137
from
tb_dfsid_3_confluence_task_id_split_vertical id3
full outer join
tb_dfsid_137_confluence_task_id_split_vertical id137
on id3.dfsid3 = id137.dfsid137;

truncate table tb_dfsid_148_confluence_task_id_split;
load data local inpath '/Users/wooluwalker/Desktop/dfs148_taskid' into table tb_dfsid_148_confluence_task_id_split;
truncate table tb_dfsid_148_confluence_task_id_split_vertical;
insert into tb_dfsid_148_confluence_task_id_split_vertical
select * from
(
select explode(split(line,",")) from tb_dfsid_148_confluence_task_id_split
) tmp;

select count(1) from tb_dfsid_148_confluence_task_id_split_vertical;

drop table tb_dfsid_3_137_148_confluence_task_id_compare;
create table if not exists tb_dfsid_3_137_148_confluence_task_id_compare(dfsid3 string, dfsid137 string, dfsid148 string);
insert into tb_dfsid_3_137_148_confluence_task_id_compare
select dfsid3, dfsid137, dfsid148
from
tb_dfsid_3_137_confluence_task_id_compare id3_137
full outer join
tb_dfsid_148_confluence_task_id_split_vertical id148
on id3_137.dfsid3 = id148.dfsid148;

select * from tb_dfsid_3_137_148_confluence_task_id_compare;

-- outputs 59
select count(distinct dfsid148) from tb_dfsid_3_137_148_confluence_task_id_compare;

select explode(split(line,",")) from tb_dfsid_148_confluence_task_id_split;
select count(*) from (select explode(split(line,",")) from tb_dfsid_148_confluence_task_id_split) as tmp;
select size(split(line,',')) from tb_dfsid_148_confluence_task_id_split;
select size(split('1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8',','));
select size(split('2447,2445,3124,3944,2444,2442,2440,3271,3126,3652,3654,3302,3629,2415,2979,2439,2429,2980,2981,2433,2978,2427,2977,2438,2426,2969,2976,2431,2973,2430,2972,2425,2970,2542,2450,3657,2544,2424,2437,2423,2422,2419,2428,3314,2449,2418,2421,2448,3313,2417,3194,2416,2432,2446,2443,2441,3273,3274,3272',','));
select explode(split('2447,2445,3124,3944,2444,2442,2440,3271,3126,3652,3654,3302,3629,2415,2979,2439,2429,2980,2981,2433,2978,2427,2977,2438,2426,2969,2976,2431,2973,2430,2972,2425,2970,2542,2450,3657,2544,2424,2437,2423,2422,2419,2428,3314,2449,2418,2421,2448,3313,2417,3194,2416,2432,2446,2443,2441,3273,3274,3272',","));

-- the column of the "table" produced by the select is named col
select explode(split('2447,2445,3124,3944,2444,2442,2440,3271,3126,3652,3654,3302,3629,2415,2979,2439,2429,2980,2981,2433,2978,2427,2977,2438,2426,2969,2976,2431,2973,2430,2972,2425,2970,2542,2450,3657,2544,2424,2437,2423,2422,2419,2428,3314,2449,2418,2421,2448,3313,2417,3194,2416,2432,2446,2443,2441,3273,3274,3272',",")) order by col;

-- check the number of dfs_148 task ids: 59
select count(*) from (select explode(split('2447,2445,3124,3944,2444,2442,2440,3271,3126,3652,3654,3302,3629,2415,2979,2439,2429,2980,2981,2433,2978,2427,2977,2438,2426,2969,2976,2431,2973,2430,2972,2425,2970,2542,2450,3657,2544,2424,2437,2423,2422,2419,2428,3314,2449,2418,2421,2448,3313,2417,3194,2416,2432,2446,2443,2441,3273,3274,3272',","))) tmp;

-- deduplicated output: 59
select count(distinct col) from (select explode(split('2447,2445,3124,3944,2444,2442,2440,3271,3126,3652,3654,3302,3629,2415,2979,2439,2429,2980,2981,2433,2978,2427,2977,2438,2426,2969,2976,2431,2973,2430,2972,2425,2970,2542,2450,3657,2544,2424,2437,2423,2422,2419,2428,3314,2449,2418,2421,2448,3313,2417,3194,2416,2432,2446,2443,2441,3273,3274,3272',","))) tmp;

-- tb_dfsid_148_confluence_task_id_split_vertical only contains 51 task ids
select count(1) from tb_dfsid_148_confluence_task_id_split_vertical;
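Once tb_dfsid_3_137_148_confluence_task_id_compare is populated, the same NULL checks extend to three groups. A sketch (note that, because the three-way table above was joined only on dfsid3, a NULL dfsid3 only says the id was not matched against dfs_3):

-- ids that appear in dfs_148 but have no matching dfs_3 id
select dfsid148 from tb_dfsid_3_137_148_confluence_task_id_compare
where dfsid148 is not null and dfsid3 is null;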

  
