Handling an error when Spark reads HBase from IDEA:
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR RecoverableZooKeeper: ZooKeeper exists failed after attempts
ERROR ZooKeeperWatcher: hconnection-0xaaa0f760x0, quorum=localhost:, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.rdd.RDD.count(RDD.scala:)
at 读Hbase数据$.main(读Hbase数据.scala:)
at 读Hbase数据.main(读Hbase数据.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:)
at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.rdd.RDD.count(RDD.scala:)
at 读Hbase数据$.main(读Hbase数据.scala:)
at 读Hbase数据.main(读Hbase数据.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
Resolution:
When a Spark program run from IDEA needs to access HBase, the HBase service must be started first (run start-hbase.sh); otherwise the errors above are thrown. The ConnectionLossException on /hbase/hbaseid means the HBase client could not reach ZooKeeper at the configured quorum (localhost in the log above), which is exactly what happens when HBase and its ZooKeeper are not running.
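For reference, here is a minimal sketch of the kind of read that exercises this code path (TableInputFormat -> newAPIHadoopRDD -> count, matching the frames in the stack trace). The table name "student" and the localhost/2181 ZooKeeper settings are assumptions for a local standalone HBase; adjust them for your environment.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object ReadHBaseExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ReadHBaseExample").setMaster("local[*]"))

    // HBase client configuration; assumes a local standalone HBase started with start-hbase.sh.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "localhost")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "student") // hypothetical table name

    // Build an RDD over the HBase table via TableInputFormat -- the same path
    // (TableInputFormat.getSplits -> NewHadoopRDD.getPartitions) shown in the stack trace.
    val hbaseRDD = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    // count() forces partition computation; this is where the ZooKeeper
    // ConnectionLossException surfaces if HBase is not running.
    println(s"row count = ${hbaseRDD.count()}")

    sc.stop()
  }
}

If the error persists with HBase already running, check that hbase.zookeeper.quorum points at the actual ZooKeeper host rather than localhost.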