df.write
  .option("truncate", "true")
  .option("driver", mysqlDriver)
  .mode(SaveMode.Overwrite)
  .jdbc(url, table, pro)

The JDBC driver class must be configured explicitly with option("driver", mysqlDriver). If it is not set, Spark will not resolve the driver from the URL prefix here, and the write fails with the following error:

21/01/26 14:07:35 INFO LineBufferedStream: stdout: java.lang.InstantiationException: org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.recommend.db.RelationDb.dataFrameInsert(RelationDb.scala:56)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.recommend.relation.GoodsRelationAnalysis.analysis(GoodsRelationAnalysis.scala:99)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.recommend.relation.GoodsRelationAnalysis.analysis(GoodsRelationAnalysis.scala:26)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.RecommendAnalysisApp$$anonfun$run$3.apply(RecommendAnalysisApp.scala:39)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.RecommendAnalysisApp$$anonfun$run$3.apply(RecommendAnalysisApp.scala:34)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.RecommendAnalysisApp$.run(RecommendAnalysisApp.scala:34)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.RecommendAnalysisApp.run(RecommendAnalysisApp.scala)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at java.lang.reflect.Method.invoke(Method.java:498)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.BulkLoadTest$.main(BulkLoadTest.scala:65)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at com.wanmi.sbc.dw.spark.app.BulkLoadTest.main(BulkLoadTest.scala)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at java.lang.reflect.Method.invoke(Method.java:498)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:180)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:178)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at java.security.AccessController.doPrivileged(Native Method)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at javax.security.auth.Subject.doAs(Subject.java:422)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:178)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/01/26 14:07:35 INFO LineBufferedStream: stdout: Caused by: java.lang.NoSuchMethodException: org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper.<init>()
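For reference, here is a minimal sketch of the write with the driver option set. The url, table, and credentials below are placeholders, and com.mysql.cj.jdbc.Driver assumes MySQL Connector/J 8.x (use com.mysql.jdbc.Driver for 5.x):

```scala
import java.util.Properties

import org.apache.spark.sql.{SaveMode, SparkSession}

// Hypothetical connection details -- replace with your own.
val url   = "jdbc:mysql://localhost:3306/recommend"
val table = "goods_relation"

val props = new Properties()
props.setProperty("user", "root")
props.setProperty("password", "secret")

val spark = SparkSession.builder().appName("JdbcWriteExample").getOrCreate()
val df = spark.range(10).toDF("id")

df.write
  // TRUNCATE TABLE instead of DROP + CREATE on overwrite.
  .option("truncate", "true")
  // Explicit driver class; without it, Spark's DriverWrapper lookup can fail
  // with the InstantiationException / NoSuchMethodException shown above.
  .option("driver", "com.mysql.cj.jdbc.Driver")
  .mode(SaveMode.Overwrite)
  .jdbc(url, table, props)
```

This is a configuration sketch rather than a standalone program; it needs a running Spark environment and the MySQL driver JAR on the classpath (e.g. via --jars or --driver-class-path).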
