Reading HBase Data with Spark
import java.text.{DecimalFormat, SimpleDateFormat}

import scala.collection.mutable.{ArrayBuffer, HashMap}

import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}

val conf = HBaseConfiguration.create()
conf.addResource(new Path("/opt/cloudera/parcels/CDH-5.4.4-1.cdh5.4.4.p0.4/lib/hbase/conf/hbase-site.xml"))
conf.addResource(new Path("/opt/cloudera/parcels/CDH-5.4.4-1.cdh5.4.4.p0.4/lib/hadoop/etc/hadoop/core-site.xml"))
conf.set(TableInputFormat.INPUT_TABLE, "FLOW")

// Optional filter condition, e.g. age >= 18:
//val scan = new Scan()
//scan.setFilter(new SingleColumnValueFilter("basic".getBytes, "age".getBytes,
//  CompareOp.GREATER_OR_EQUAL, Bytes.toBytes(18)))
//conf.set(TableInputFormat.SCAN, convertScanToString(scan))
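The commented-out scan above calls a convertScanToString helper that the post never defines. A minimal sketch, assuming the HBase 1.x ProtobufUtil and Base64 utilities shipped with CDH 5.4:

import org.apache.hadoop.hbase.protobuf.ProtobufUtil
import org.apache.hadoop.hbase.util.Base64

// Serialize the Scan into the Base64 string TableInputFormat.SCAN expects.
def convertScanToString(scan: Scan): String =
  Base64.encodeBytes(ProtobufUtil.toScan(scan).toByteArray)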
// Assumes a live SparkContext `sc` (e.g. from spark-shell).
val usersRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
val data1 = usersRDD.count()
val sf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSS")
println("data length:" + data1)

// key1 (ip_port_protocol) -> key2 ("startH:m_endH:m") -> traffic byte counts
val map = HashMap[String, HashMap[String, ArrayBuffer[Double]]]()
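As a side note, collect() below pulls the entire table onto the driver. A hedged sketch (not the original post's approach) of keeping the same (key1, key2) bucketing on the executors:

// Sketch only: compute the bucket keys distributed, then group samples per key pair.
val bucketed = usersRDD.map { case (_, r) =>
  val fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSS")
  def col(q: String): String = Bytes.toString(r.getValue("F".getBytes, q.getBytes))
  val start = fmt.parse(col("STIME"))
  val end = fmt.parse(col("LTIME"))
  val key1 = col("SADDR") + "_" + col("SPORT") + "_" + col("PROTO")
  val key2 = start.getHours + ":" + start.getMinutes + "_" + end.getHours + ":" + end.getMinutes
  ((key1, key2), col("DBYTES").toDouble)
}.groupByKey()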
// Pull all rows to the driver and bucket them (works for small tables).
usersRDD.collect().foreach {
  case (_, result) =>
    val key = Bytes.toInt(result.getRow)
    println("Key:" + key)
    val ip = Bytes.toString(result.getValue("F".getBytes, "SADDR".getBytes))
    val port = Bytes.toString(result.getValue("F".getBytes, "SPORT".getBytes))
    val startTimeLong = Bytes.toString(result.getValue("F".getBytes, "STIME".getBytes))
    val endTimeLong = Bytes.toString(result.getValue("F".getBytes, "LTIME".getBytes))
    val protocol = Bytes.toString(result.getValue("F".getBytes, "PROTO".getBytes))
    val sumTime = Bytes.toString(result.getValue("F".getBytes, "DUR".getBytes))
    val sum = Bytes.toString(result.getValue("F".getBytes, "DBYTES".getBytes)).toDouble
    println("ip:" + ip + ",port:" + port + ",startTime:" + startTimeLong + ",endTime:" + endTimeLong + ",protocol:" + protocol + ",sum:" + sum)

    // Bucket key examples: ip+port+udp, 14:02 14:07 -> List; ip+port+tcp, 15:02 15:07 -> List
    val startTimeDate = sf.parse(startTimeLong)
    val endTimeLongDate = sf.parse(endTimeLong)
    // java.util.Date accessors are deprecated but kept from the original code.
    val startHours = startTimeDate.getHours
    val startMinutes = startTimeDate.getMinutes
    val endHours = endTimeLongDate.getHours
    val endMinutes = endTimeLongDate.getMinutes

    val key1 = ip + "_" + port + "_" + protocol
    println("key1:" + key1)
    val key2 = startHours + ":" + startMinutes + "_" + endHours + ":" + endMinutes
    println("key2:" + key2)

    val tmpMap = map.get(key1)
    if (!tmpMap.isEmpty) {
println("--------------------map is not null:" + tmpMap.size + "--------------------")
val sumArray = tmpMap.get.get(key2)
if (!sumArray.isEmpty) {
sumArray.get += sum
}
} else {
println("--------------------map is null--------------------")
//如果当前Key不存在的话,是一个全新的Ip
val sumArray = collection.mutable.ArrayBuffer[Double]()
sumArray += sum val secondMap = HashMap[String, collection.mutable.ArrayBuffer[Double]]()
secondMap += (key2 -> sumArray)
map += (key1 -> secondMap)
}
map
println("map size-----------------:" + map.size)
} println("map size:" + map.size) map.map(e => {
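The nested Option checks above do in several lines what mutable.HashMap.getOrElseUpdate does in two; an equivalent sketch of the accumulation step for a single record:

// Equivalent two-level accumulation:
val buckets = map.getOrElseUpdate(key1, HashMap[String, ArrayBuffer[Double]]())
buckets.getOrElseUpdate(key2, ArrayBuffer[Double]()) += sum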
println("--------------------Statistics start --------------------")
val resultKey1 = e._1
val resultVal1 = e._2
println("resultKey1:" + resultKey1)
resultVal1.foreach(f => {
val resultKey2 = f._1
val resultVal2 = f._2
println("resultKey2:" + resultKey2)
println("-----------------resultVal2:" + resultVal2.length) resultVal2.map(f=>{
println("------------------------f:"+f)
}) val dataArray = resultVal2.map(f => Vectors.dense(f)) val summary: MultivariateStatisticalSummary = Statistics.colStats(sc.parallelize(dataArray)) //
println("--------------------mean:" + summary.mean + " --------------------")
println("--------------------variance:" + summary.variance + " --------------------") println("--------------------mean apply 0:" + summary.mean.toArray.apply(0) + " --------------------")
println("--------------------variance apply 0:" + summary.variance.apply(0) + " --------------------") val upbase = summary.mean.toArray.apply(0) + 1.960 * Math.sqrt(summary.variance.apply(0))
val downbase = summary.mean.toArray.apply(0) - 1.960 * Math.sqrt(summary.variance.apply(0))
println("------------------- " + upbase + " ---------- " + downbase)
val df = new DecimalFormat(".##")
val upbaseString = df.format(upbase)
val downbaseString = df.format(downbase)
    //resultMap.put(key, value)
    val result3 = HashMap[Double, Double]()
    //result3 += (upbase -> downbase)
    println("ip port:" + resultKey1 + ",time:" + resultKey2 + ",upbase:" + upbase + ",downbase:" + downbase)
  })
})
println("--------------------baseLine end --------------------")
sc.stop()
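Launching a Spark job per bucket via Statistics.colStats is heavyweight for such small sample sets; the same band can be computed locally. A minimal sketch in plain Scala (a hypothetical helper, matching MLlib's unbiased sample variance):

// mean +/- 1.96 * stddev: the two-sided 95% interval under a normal assumption.
def baseline(values: Seq[Double]): (Double, Double) = {
  require(values.length > 1, "need at least two samples")
  val mean = values.sum / values.length
  // Unbiased (n - 1) sample variance, as colStats reports.
  val variance = values.map(v => (v - mean) * (v - mean)).sum / (values.length - 1)
  val sd = math.sqrt(variance)
  (mean + 1.96 * sd, mean - 1.96 * sd)
}

println(baseline(Seq(10.0, 12.0, 9.0, 11.0))) // prints (upbase, downbase)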
加法:add 减法:subtract 乘法:multiply 除法:divide BigDecimal bignum1 = new BigDecimal("10"); BigDec ...