1. What is sbt

I'm still a newbie when it comes to sbt. I picked up Scala in order to work with Spark, and the build tool recommended while learning Scala was sbt (sbt itself is written in Scala, after all). At first it looked to me like just another Maven (not that I had used Maven much either), but after building a couple of projects with it I found it quite powerful; the learning curve is just a bit steep.

Then again, what doesn't come with a learning curve these days? Anyway, the getting-started guide for version 0.13 is here: http://www.scala-sbt.org/0.13/tutorial/zh-cn/index.html

2. assembly is an sbt packaging plugin

Below is an example taken from the getting-started guide:

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
# simple.sbt
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.2"
# Your directory layout should look like this
$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala

# Package a jar containing your application
$ sbt package
...
[info] Packaging {..}/{..}/target/scala-2.10/simple-project_2.10-1.0.jar

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-2.10/simple-project_2.10-1.0.jar
...
Lines with a: 46, Lines with b: 23

So far so good: everything runs fine, because the only library you depend on is Spark itself, which is already present on the Spark master and workers. But if you depend on third-party libraries such as the MySQL JDBC driver, sbt's package command will not include them in the jar.

Your job would then fail on Spark at runtime, and if you have several worker machines you would have to install the same runtime environment (the jar dependencies) on every one of them.

That is where sbt's assembly plugin comes in. Its job is to bundle all the dependency jars into a single fat jar.

It is not a silver bullet, though; things get awkward when files with the same path collide, as shown in the next section.
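
Note that the assembly plugin does not ship with sbt; it has to be declared in project/plugins.sbt first. A minimal sketch for the sbt 0.13 era (the plugin version shown here is an assumption; use whatever release matches your sbt):

// project/plugins.sbt
// adds the sbt-assembly plugin that provides the "assembly" task
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")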

3. How assembly resolves the "SBT Assembly - Deduplicate error & Exclude error"

Let's look at an example of the error first:

[error]  error was encountered during merge
[trace] Stack trace suppressed: run last *:assembly for the full output.
[error] (*:assembly) deduplicate: different file contents found in the following:
[error] /Users/qpzhang/.ivy2/cache/io.netty/netty-handler/jars/netty-handler-4.0..Final.jar:META-INF/io.netty.versions.properties
[error] /Users/qpzhang/.ivy2/cache/io.netty/netty-buffer/jars/netty-buffer-4.0..Final.jar:META-INF/io.netty.versions.properties
[error] /Users/qpzhang/.ivy2/cache/io.netty/netty-common/jars/netty-common-4.0..Final.jar:META-INF/io.netty.versions.properties
[error] /Users/qpzhang/.ivy2/cache/io.netty/netty-transport/jars/netty-transport-4.0..Final.jar:META-INF/io.netty.versions.properties
[error] /Users/qpzhang/.ivy2/cache/io.netty/netty-codec/jars/netty-codec-4.0..Final.jar:META-INF/io.netty.versions.properties
[error] Total time: s, completed -- ::

Roughly, it is saying that there are many duplicate files with identical paths coming from different jars, and assembly cannot decide what to do with them. So what now?

We have to decide manually. assembly provides rules for excluding files and for merging duplicates, and these can be written as settings in build.sbt.

Reference: https://github.com/sbt/sbt-assembly#excluding-jars-and-files

In our case the build file looks like this (note: sbt here is 0.13, the latest version at the time):

qpzhang@qpzhangdeMac-mini:~/scala_code/CassandraTest $cat build.sbt

name := "CassandraTest"

version := "1.0"

scalaVersion := "2.10.4"

// the Spark dependency is marked "provided": the runtime environment already has it,
// so it should not be packaged into the fat jar
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.2" % "provided"

// the spark-cassandra-connector dependency
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.5.0-M2"

// for files whose name ends with .properties, take the first one encountered (MergeStrategy.first)
assemblyMergeStrategy in assembly := {
  case PathList(ps @ _*) if ps.last endsWith ".properties" => MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}

That takes care of it. For other collisions, just adjust the merge strategy accordingly.
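
For example, a slightly broader strategy that also discards duplicate META-INF metadata and concatenates reference.conf files might look like the sketch below (this is not from the original build; adjust the patterns to whatever actually collides in your project):

// a hypothetical, more elaborate merge strategy
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard   // drop duplicate jar metadata
  case "reference.conf" => MergeStrategy.concat                 // merge Typesafe config defaults
  case PathList(ps @ _*) if ps.last endsWith ".properties" => MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}

With a strategy in place, running assembly in the sbt shell succeeds: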

> assembly
[info] Including from cache: slf4j-api-1.7..jar
[info] Including from cache: metrics-core-3.0..jar
[info] Including from cache: netty-codec-4.0..Final.jar
[info] Including from cache: netty-handler-4.0..Final.jar
[info] Including from cache: netty-common-4.0..Final.jar
[info] Including from cache: joda-time-2.3.jar
[info] Including from cache: netty-buffer-4.0..Final.jar
[info] Including from cache: commons-lang3-3.3..jar
[info] Including from cache: jsr166e-1.1..jar
[info] Including from cache: cassandra-clientutil-2.1..jar
[info] Including from cache: joda-convert-1.2.jar
[info] Including from cache: netty-transport-4.0..Final.jar
[info] Including from cache: guava-16.0..jar
[info] Including from cache: spark-cassandra-connector_2.-1.5.-M2.jar
[info] Including from cache: cassandra-driver-core-2.2.-rc3.jar
[info] Including from cache: scala-reflect-2.10..jar
[info] Including from cache: scala-library-2.10..jar
[info] Checking every *.class/*.jar file's SHA-1.
[info] Merging files...
[warn] Merging 'META-INF/INDEX.LIST' with strategy 'discard'
[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[warn] Merging 'META-INF/io.netty.versions.properties' with strategy 'first'
[warn] Merging 'META-INF/maven/com.codahale.metrics/metrics-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.datastax.cassandra/cassandra-driver-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/guava/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.twitter/jsr166e/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-buffer/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-common/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-handler/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-transport/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/joda-time/joda-time/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-lang3/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.joda/joda-convert/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.slf4j/slf4j-api/pom.xml' with strategy 'discard'
[warn] Strategy 'discard' was applied to 15 files
[warn] Strategy 'first' was applied to a file
[info] SHA-1: d2cb403e090e6a3ae36b08c860b258c79120fc90
[info] Packaging /Users/qpzhang/scala_code/CassandraTest/target/scala-2.10/CassandraTest-assembly-1.0.jar ...
[info] Done packaging.
[success] Total time: 19 s, completed 2015-11-26 10:12:22

4. Execution results

qpzhang@qpzhangdeMac-mini:~/project/spark-1.5.-bin-hadoop2. $./bin/spark-submit --class "CassandraTestApp" --master local[] ~/scala_code/CassandraTest/target/scala-2.10/CassandraTest-assembly-1.0.jar
//...........................
// :: INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID , localhost, NODE_LOCAL, bytes)
// :: INFO Executor: Running task 0.0 in stage 0.0 (TID )
// :: INFO Executor: Fetching http://10.60.215.42:57683/jars/CassandraTest-assembly-1.0.jar with timestamp 1448509221160
// :: INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
// :: INFO Utils: Fetching http://10.60.215.42:57683/jars/CassandraTest-assembly-1.0.jar to /private/var/folders/2l/195zcc1n0sn2wjfjwf9hl9d80000gn/T/spark-4030cadf-8489-4540-976e-e98eedf50412/userFiles-63085bda-aa04-4906-9621-c1cedd98c163/fetchFileTemp7487594.tmp
// :: INFO Executor: Adding file:/private/var/folders/2l/195zcc1n0sn2wjfjwf9hl9d80000gn/T/spark-4030cadf---976e-e98eedf50412/userFiles-63085bda-aa04---c1cedd98c163/CassandraTest-assembly-1.0.jar to class loader
// :: INFO Cluster: New Cassandra host localhost/127.0.0.1: added
// :: INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
// :: INFO Executor: Finished task 0.0 in stage 0.0 (TID ). bytes result sent to driver
// :: INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID ) in ms on localhost (/)
// :: INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: ResultStage (collect at CassandraTest.scala:) finished in 2.481 s
// :: INFO DAGScheduler: Job finished: collect at CassandraTest.scala:, took 2.940601 s
Existing Data: CassandraRow{key: 1, value: first row}
Existing Data: CassandraRow{key: 2, value: second row}
Existing Data: CassandraRow{key: 3, value: third row}
//....................
// :: INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: ResultStage (collect at CassandraTest.scala:) finished in 0.032 s
// :: INFO DAGScheduler: Job finished: collect at CassandraTest.scala:, took 0.046502 s
New Data: (4,fourth row)
New Data: (5,fifth row)
Work completed, stopping the Spark context.
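
The CassandraTestApp source itself is not shown in this post. As a rough idea of what a program producing this kind of output might look like with spark-cassandra-connector 1.5, here is a minimal sketch (not the author's original code; the keyspace "test" and table "kv" names are assumptions):

// CassandraTest.scala -- hypothetical sketch, not the original CassandraTestApp
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object CassandraTestApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("CassandraTestApp")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext(conf)

    // read the rows that already exist (keyspace/table names are assumed)
    sc.cassandraTable("test", "kv").collect().foreach(row => println("Existing Data: " + row))

    // write two new rows, then read them back as (key, value) tuples
    sc.parallelize(Seq((4, "fourth row"), (5, "fifth row")))
      .saveToCassandra("test", "kv", SomeColumns("key", "value"))
    sc.cassandraTable[(Int, String)]("test", "kv")
      .select("key", "value")
      .collect()
      .filter(_._1 >= 4)
      .foreach(kv => println("New Data: " + kv))

    println("Work completed, stopping the Spark context.")
    sc.stop()
  }
}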
