An Example of Reading and Writing HBase with Spark

1. Inspect the table schemas in the HBase shell

hbase(main)::> desc 'SDAS_Person'
Table SDAS_Person is ENABLED
SDAS_Person
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf0', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',
DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE =>
'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
{NAME => 'cf1', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',
DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE =>
'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
{NAME => 'cf2', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',
DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE =>
'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.0810 seconds
hbase(main)::> desc 'RESULT'
Table RESULT is ENABLED
RESULT
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf0', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',
DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE =>
'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.0250 seconds
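
For reference, tables with these column families could have been created with shell commands along these lines (the original create statements are not shown, so plain defaults are an assumption):

create 'SDAS_Person', 'cf0', 'cf1', 'cf2'
create 'RESULT', 'cf0'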

2. Data inserted via the HBase shell

A scan of SDAS_Person shows the inserted row:

hbase(main)::> scan 'SDAS_Person'
ROW COLUMN+CELL
SDAS_1# column=cf0:Age, timestamp=, value=
SDAS_1# column=cf0:CompanyID, timestamp=, value=
SDAS_1# column=cf0:InDate, timestamp=, value=-- ::08.49
SDAS_1# column=cf0:Money, timestamp=, value=5.20
SDAS_1# column=cf0:Name, timestamp=, value=zhangsan
SDAS_1# column=cf0:PersonID, timestamp=, value=
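
The row above would have been loaded with put commands of this shape (the Age, PersonID, and CompanyID values are illustrative, since the scan output does not preserve them):

put 'SDAS_Person', 'SDAS_1#', 'cf0:Name', 'zhangsan'
put 'SDAS_Person', 'SDAS_1#', 'cf0:Money', '5.20'
put 'SDAS_Person', 'SDAS_1#', 'cf0:Age', '25'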

3. pom.xml:

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
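
The source below also needs the HBase artifacts: in HBase 1.x, TableInputFormat and TableOutputFormat ship in hbase-server, and the client API in hbase-client. A sketch of the extra dependencies, assuming an ${hbase.version} property is defined in <properties>:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>${hbase.version}</version>
</dependency>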

4. Source code:

package com.zxth.sdas.spark.apps

import org.apache.spark._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Put, Result}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{TableInputFormat, TableOutputFormat}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

object HBaseOp {
  var total: Int = 0

  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("HBaseOp").setMaster("local")
    val sc = new SparkContext(sparkConf)

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "master,slave1,slave2")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set(TableInputFormat.INPUT_TABLE, "SDAS_Person")

    // Read the table into an RDD of (row key, Result) pairs
    val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])
    val count = hBaseRDD.count()
    println("row count: " + count)

    // Mutating the driver-side `total` inside foreach works only because the
    // job runs with setMaster("local"); see the cluster-safe sketch below.
    hBaseRDD.foreach { case (_, result) =>
      // Row key
      val key = Bytes.toString(result.getRow)
      // Fetch cells by column family and qualifier
      var obj = result.getValue("cf0".getBytes, "Name".getBytes)
      val name = if (obj == null) "" else Bytes.toString(obj)
      obj = result.getValue("cf0".getBytes, "Age".getBytes)
      val age: Int = if (obj == null) 0 else Bytes.toString(obj).toInt
      total = total + age
      println("Row key:" + key + " Name:" + name + " Age:" + age + " total:" + total)
    }

    val average: Double = total.toDouble / count.toDouble
    println("" + total + "/" + count + " average age:" + average)

    // Write the aggregate back to HBase
    conf.set(TableOutputFormat.OUTPUT_TABLE, "RESULT")
    val job = Job.getInstance(conf) // replaces the deprecated `new Job(conf)`
    job.setOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setOutputValueClass(classOf[Put]) // TableOutputFormat writes Mutations (Put), not Result
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    val arrResult: Array[String] = new Array[String](1)
    arrResult(0) = "1," + total + "," + average
    //arrResult(0) = "1,100,11"

    val resultRDD = sc.makeRDD(arrResult)
    val saveRDD = resultRDD.map(_.split(',')).map { arr =>
      // arr(0) is the row key; total and average go into column family cf0
      val put = new Put(Bytes.toBytes(arr(0)))
      put.addColumn(Bytes.toBytes("cf0"), Bytes.toBytes("total"), Bytes.toBytes(arr(1)))
      put.addColumn(Bytes.toBytes("cf0"), Bytes.toBytes("average"), Bytes.toBytes(arr(2)))
      (new ImmutableBytesWritable, put)
    }
    saveRDD.saveAsNewAPIHadoopDataset(job.getConfiguration)
    sc.stop()
  }
}
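
As noted in the comments, the driver-side var total only accumulates correctly because the job runs with setMaster("local"); on a cluster, each executor mutates its own copy of the closure. A minimal cluster-safe sketch of the same age sum:

// Cluster-safe aggregation: map each row to its age and sum on the executors.
val totalAge = hBaseRDD.map { case (_, result) =>
  val obj = result.getValue("cf0".getBytes, "Age".getBytes)
  if (obj == null) 0 else Bytes.toString(obj).toInt
}.sum().toInt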

5. Package with Maven

mvn clean scala:compile compile package
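
The scala:compile goal comes from the scala-maven-plugin, which this command assumes is declared in the pom's build section; a minimal sketch (the plugin version is an assumption):

<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>3.2.2</version>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>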

6. Submit the job

bin/spark-submit \
--jars $(echo /opt/hbase-1.2./lib/*.jar | tr ' ' ',') \
--class com.zxth.sdas.spark.apps.HBaseOp \
--master local \
sdas-spark-1.0.0.jar
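
Once the job finishes, the row written back can be checked in the HBase shell; based on the code above, it should be row key 1 with columns cf0:total and cf0:average:

scan 'RESULT'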
