Spark reading a Hive table fails with org.apache.spark.sql.AnalysisException: Unsupported data source type for direct query on files: hive;
The exception appears when Spark reads a Hive table, e.g. `spark.read.table("hive.test")`.

On HDP, Spark's default catalog is `spark`: the configuration item `metastore.catalog.default` defaults to `spark`, so Spark SQL reads its own metastore_db rather than the catalog that Hive writes to. That is why tables created on one side are invisible to the other. Concretely, because no database named `hive` exists in Spark's catalog, the analyzer falls back to treating `hive.test` as a direct query on files of the form `format`.`path` with format `hive`, which is unsupported, producing the error below. A diagnostic sketch follows this paragraph; a fix sketch follows the stack trace.
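A quick way to confirm the cause from the Spark shell (a minimal sketch; the property name is the one mentioned above, and the output depends on your cluster):

```scala
// Which Hive Metastore catalog does this session resolve tables against?
// On HDP 3.x this is expected to print "spark" when the problem occurs.
println(spark.sparkContext.hadoopConfiguration.get("metastore.catalog.default"))

// Databases created through Hive will be missing from this listing while
// the session is pointed at Spark's own catalog.
spark.catalog.listDatabases().show(truncate = false)
```

The full stack trace of the failure: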
org.apache.spark.sql.AnalysisException: Unsupported data source type for direct query on files: hive;;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:47)
at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:64)
at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:42)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile.apply(rules.scala:42)
at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile.apply(rules.scala:37)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:124)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:118)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:103)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.table(SparkSession.scala:628)
at org.apache.spark.sql.SparkSession.table(SparkSession.scala:624)
at org.apache.spark.sql.DataFrameReader.table(DataFrameReader.scala:654)
... 49 elided
Caused by: org.apache.spark.sql.AnalysisException: Unsupported data source type for direct query on files: hive;
at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:56)
... 74 more
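The fix is to point the Spark session at the catalog Hive writes to. A minimal sketch, assuming an HDP 3.x cluster where `metastore.catalog.default` selects the Hive Metastore catalog; the `spark.hadoop.` prefix (which forwards the property into the Hadoop configuration) and the table name `hive.test` are assumptions carried over from the example above:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ReadHiveTable")
  // Assumption (HDP 3.x): select the catalog that Hive CLI/beeline writes to,
  // instead of Spark SQL's own "spark" catalog.
  .config("spark.hadoop.metastore.catalog.default", "hive")
  .enableHiveSupport()
  .getOrCreate()

// With the catalog switched, hive.test resolves as database.table instead of
// being parsed as a direct file query with format "hive".
spark.read.table("hive.test").show()
```

The same property can also be passed per job, e.g. `spark-shell --conf spark.hadoop.metastore.catalog.default=hive`, or set permanently in the hive-site.xml that Spark reads, so that Spark and Hive resolve tables against the same catalog.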