Spark GraphX Graph Processing Programming Example
The graph to be constructed is the small collaboration graph used in the GraphX programming guide: four users — rxin (student), jgonzal (postdoc), franklin (prof) and istoica (prof) — connected by collab, advisor, colleague and pi relationships.
The Scala program that builds and queries this graph is as follows:
import org.apache.spark._
import org.apache.spark.graphx._
// To make some of the examples work we will also need RDD
import org.apache.spark.rdd.RDD
object Test {
  def main(args: Array[String]): Unit = {
    // Initialize the SparkContext (local mode with 2 threads)
    val sc: SparkContext = new SparkContext("local[2]", "Spark GraphX")
    // Create an RDD of vertices: (vertex id, (user name, role))
    val users: RDD[(VertexId, (String, String))] =
      sc.parallelize(Array((3L, ("rxin", "student")), (7L, ("jgonzal", "postdoc")),
        (5L, ("franklin", "prof")), (2L, ("istoica", "prof"))))
    // Create an RDD of edges describing the relationships between users
    val relationships: RDD[Edge[String]] =
      sc.parallelize(Array(Edge(3L, 7L, "collab"), Edge(5L, 3L, "advisor"),
        Edge(2L, 5L, "colleague"), Edge(5L, 7L, "pi")))
    // Define a default user, used whenever an edge refers to a vertex that is missing from the vertex RDD
    val defaultUser = ("John Doe", "Missing")
    // Build the graph
    val graph = Graph(users, relationships, defaultUser)
    // Print the graph's vertices and triplets
    graph.vertices.collect().foreach(println(_))
    graph.triplets.map(triplet => triplet.srcAttr + "----->" + triplet.dstAttr + " attr:" + triplet.attr)
      .collect().foreach(println(_))
    // Count the users who are postdocs
    val cnt1 = graph.vertices.filter { case (id, (name, pos)) => pos == "postdoc" }.count()
    println("Number of postdoc users: " + cnt1)
    // Count the edges whose source id is greater than the destination id (src > dst)
    val cnt2 = graph.edges.filter(e => e.srcId > e.dstId).count()
    println("Number of edges with src > dst: " + cnt2)
    // Compute and print the in-degree of each vertex
    val inDegrees: VertexRDD[Int] = graph.inDegrees
    inDegrees.collect().foreach(println(_))
    // Shut down the SparkContext
    sc.stop()
  }
}
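For reference, a local run of this program should print output along the following lines (the ordering of the collected RDD elements may differ from run to run):

(2,(istoica,prof))
(3,(rxin,student))
(5,(franklin,prof))
(7,(jgonzal,postdoc))
(rxin,student)----->(jgonzal,postdoc) attr:collab
(franklin,prof)----->(rxin,student) attr:advisor
(istoica,prof)----->(franklin,prof) attr:colleague
(franklin,prof)----->(jgonzal,postdoc) attr:pi
Number of postdoc users: 1
Number of edges with src > dst: 1
(3,1)
(7,2)
(5,1)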
The built-in graph operators of the property graph are summarized below:
/** Summary of the functionality in the property graph */
class Graph[VD, ED] {
  // Information about the Graph ===================================================================
  val numEdges: Long
  val numVertices: Long
  val inDegrees: VertexRDD[Int]
  val outDegrees: VertexRDD[Int]
  val degrees: VertexRDD[Int]
  // Views of the graph as collections =============================================================
  val vertices: VertexRDD[VD]
  val edges: EdgeRDD[ED]
  val triplets: RDD[EdgeTriplet[VD, ED]]
  // Functions for caching graphs ==================================================================
  def persist(newLevel: StorageLevel = StorageLevel.MEMORY_ONLY): Graph[VD, ED]
  def cache(): Graph[VD, ED]
  def unpersistVertices(blocking: Boolean = true): Graph[VD, ED]
  // Change the partitioning heuristic =============================================================
  def partitionBy(partitionStrategy: PartitionStrategy): Graph[VD, ED]
  // Transform vertex and edge attributes ==========================================================
  def mapVertices[VD2](map: (VertexId, VD) => VD2): Graph[VD2, ED]
  def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2]
  def mapEdges[ED2](map: (PartitionID, Iterator[Edge[ED]]) => Iterator[ED2]): Graph[VD, ED2]
  def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2]
  def mapTriplets[ED2](map: (PartitionID, Iterator[EdgeTriplet[VD, ED]]) => Iterator[ED2])
    : Graph[VD, ED2]
  // Modify the graph structure ====================================================================
  def reverse: Graph[VD, ED]
  def subgraph(
      epred: EdgeTriplet[VD, ED] => Boolean = (x => true),
      vpred: (VertexId, VD) => Boolean = ((v, d) => true))
    : Graph[VD, ED]
  def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED]
  def groupEdges(merge: (ED, ED) => ED): Graph[VD, ED]
  // Join RDDs with the graph ======================================================================
  def joinVertices[U](table: RDD[(VertexId, U)])(mapFunc: (VertexId, VD, U) => VD): Graph[VD, ED]
  def outerJoinVertices[U, VD2](other: RDD[(VertexId, U)])
      (mapFunc: (VertexId, VD, Option[U]) => VD2)
    : Graph[VD2, ED]
  // Aggregate information about adjacent triplets =================================================
  def collectNeighborIds(edgeDirection: EdgeDirection): VertexRDD[Array[VertexId]]
  def collectNeighbors(edgeDirection: EdgeDirection): VertexRDD[Array[(VertexId, VD)]]
  def aggregateMessages[Msg: ClassTag](
      sendMsg: EdgeContext[VD, ED, Msg] => Unit,
      mergeMsg: (Msg, Msg) => Msg,
      tripletFields: TripletFields = TripletFields.All)
    : VertexRDD[Msg]
  // Iterative graph-parallel computation ==========================================================
  def pregel[A](initialMsg: A, maxIterations: Int, activeDirection: EdgeDirection)(
      vprog: (VertexId, VD, A) => VD,
      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId, A)],
      mergeMsg: (A, A) => A)
    : Graph[VD, ED]
  // Basic graph algorithms ========================================================================
  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double]
  def connectedComponents(): Graph[VertexId, ED]
  def triangleCount(): Graph[Int, ED]
  def stronglyConnectedComponents(numIter: Int): Graph[VertexId, ED]
}
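As a rough sketch of how a few of these operators could be applied to the graph built in the example above (the variable names below are illustrative only, not part of the original program):

// Rewrite each vertex attribute as a single display string
val named: Graph[String, String] = graph.mapVertices((id, attr) => attr._1 + " (" + attr._2 + ")")
// Keep only the professors and the edges between them
val profGraph = graph.subgraph(vpred = (id, attr) => attr._2 == "prof")
// Recompute each vertex's in-degree with aggregateMessages (same result as graph.inDegrees)
val inDeg: VertexRDD[Int] = graph.aggregateMessages[Int](ctx => ctx.sendToDst(1), _ + _)
// Run PageRank until convergence at tolerance 0.0001 and print the per-vertex ranks
graph.pageRank(0.0001).vertices.collect().foreach(println(_))
// Label each vertex with the lowest vertex id in its connected component
graph.connectedComponents().vertices.collect().foreach(println(_))

aggregateMessages is the general-purpose neighborhood-aggregation primitive; the in-degree computation here mirrors the graph.inDegrees call used in the main example.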
Reference:
http://spark.apache.org/docs/latest/graphx-programming-guide.html