Spark RDD Transformation: Simple Examples (Part 2)
aggregateByKey(zeroValue)(seqOp, combOp, [numTasks])
| aggregateByKey(zeroValue)(seqOp, combOp, [numTasks]) | When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral "zero" value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument. |
/**
* Aggregate the values of each key, using given combine functions and a neutral "zero value".
* This function can return a different result type, U, than the type of the values in this RDD,
* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,
* as in scala.TraversableOnce. The former operation is used for merging values within a
* partition, and the latter is used for merging values between partitions. To avoid memory
* allocation, both of these functions are allowed to modify and return their first argument
* instead of creating a new U.
*/
def aggregateByKey[U: ClassTag](zeroValue: U)(seqOp: (U, V) => U,
combOp: (U, U) => U): RDD[(K, U)]
/**
* Aggregate the values of each key, using given combine functions and a neutral "zero value".
* This function can return a different result type, U, than the type of the values in this RDD,
* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,
* as in scala.TraversableOnce. The former operation is used for merging values within a
* partition, and the latter is used for merging values between partitions. To avoid memory
* allocation, both of these functions are allowed to modify and return their first argument
* instead of creating a new U.
*/
def aggregateByKey[U: ClassTag](zeroValue: U, numPartitions: Int)(seqOp: (U, V) => U,
combOp: (U, U) => U): RDD[(K, U)]
/**
* Aggregate the values of each key, using given combine functions and a neutral "zero value".
* This function can return a different result type, U, than the type of the values in this RDD,
* V. Thus, we need one operation for merging a V into a U and one operation for merging two U's,
* as in scala.TraversableOnce. The former operation is used for merging values within a
* partition, and the latter is used for merging values between partitions. To avoid memory
* allocation, both of these functions are allowed to modify and return their first argument
* instead of creating a new U.
*/
def aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) => U,
combOp: (U, U) => U): RDD[(K, U)]
def seq(a:Int,b:Int):Int = {
  println("seq: " + a + "\t" + b)
  math.max(a,b)
}
def comb(a:Int,b:Int):Int = {
  println("comb: " + a + "\t" + b)
  a+b
}
val rdd = sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(2,4),(2,5)))
rdd.aggregateByKey(0)(seq,comb).collect
rdd.aggregateByKey(6)(seq,comb).collect
scala> def seq(a:Int,b:Int):Int={
| println("seq: " + a + "\t" + b)
| math.max(a,b)
| }
seq: (a: Int, b: Int)Int
scala>
scala> def comb(a:Int,b:Int):Int = {
| println("comb: " + a + "\t" + b)
| a+b
| }
comb: (a: Int, b: Int)Int
scala> val rdd = sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(2,4),(2,5)))
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[11] at parallelize at <console>:26
scala> rdd.aggregateByKey(0)(seq,comb).collect
seq: 0 3
seq: 3 2
seq: 3 4
seq: 0 3
seq: 3 4
seq: 4 5
res20: Array[(Int, Int)] = Array((1,4), (2,5))
scala> rdd.aggregateByKey(6)(seq,comb).collect
seq: 6 3
seq: 6 2
seq: 6 4
seq: 6 3
seq: 6 4
seq: 6 5
res21: Array[(Int, Int)] = Array((1,6), (2,6))
But why was comb never executed? Because all of the values for each key happened to land in the same partition, every key produced only a single per-partition partial result, and combOp is invoked only to merge partial results for the same key coming from different partitions.
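To see comb fire, a key's values have to be spread over more than one partition. A minimal sketch under that assumption (local mode, so the println output from the tasks reaches the same console; the 3-partition split is chosen only for illustration):
// Force each key's values across partitions so combOp has partial results to merge.
val rdd3 = sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(2,4),(2,5)), 3)
// Rough slice layout: [(1,3),(1,2)] | [(1,4),(2,3)] | [(2,4),(2,5)], so both keys span two partitions.
rdd3.aggregateByKey(0)(seq,comb).collect
// Expected: "seq: ..." lines per partition followed by "comb: ..." merge lines.
// Since comb adds the per-partition maxima, the result becomes something like
// Array((1,7), (2,8)) (key order may vary) rather than Array((1,4), (2,5)).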
sortByKey([ascending], [numTasks])
| sortByKey([ascending], [numTasks]) | When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument. |
As the comment below explains, the RDD is sorted by key so that each partition contains a sorted range of the elements; calling collect or save on the resulting RDD therefore returns (or writes out) the records in global key order.
/**
* Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling
* `collect` or `save` on the resulting RDD will return or output an ordered list of records
* (in the `save` case, they will be written to multiple `part-X` files in the filesystem, in
* order of the keys).
*/
// TODO: this currently doesn't work on P other than Tuple2!
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length)
: RDD[(K, V)]
val rdd = sc.parallelize(List((3,"sd"),(1,"fd"),(2,"dfh"),(4,"kjh"),(7,"kf"),(5,"nb"),(100,"jd"),(63,"mm"),(42,"kk"),(99,"ll"),(10,"ll"),(11,"ll"),(12,"ll")),1)
val rdd1 = rdd.sortByKey(true,1)
rdd1.collect
val rdd2 = rdd.sortByKey(true,3)
rdd2.foreachPartition(
  x=>{
    while(x.hasNext){
      println(x.next)
    }
    println("============")
  }
)
val rdd2 = rdd.sortByKey(false,4)
rdd2.foreachPartition(
  x=>{
    while(x.hasNext){
      println(x.next)
    }
    println("============")
  }
)
scala> val rdd = sc.parallelize(List((3,"sd"),(1,"fd"),(2,"dfh"),(4,"kjh"),(7,"kf"),(5,"nb"),(100,"jd"),(63,"mm"),(42,"kk"),(99,"ll"),(10,"ll"),(11,"ll"),(12,"ll")),1)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[24] at parallelize at <console>:26
scala> val rdd1 = rdd.sortByKey(true,1)
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[25] at sortByKey at <console>:28
scala> rdd1.collect
res42: Array[(Int, String)] = Array((1,fd), (2,dfh), (3,sd), (4,kjh), (5,nb), (7,kf), (10,ll), (11,ll), (12,ll), (42,kk), (63,mm), (99,ll), (100,jd))
scala> val rdd2 = rdd.sortByKey(true,3)
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[28] at sortByKey at <console>:28
scala> rdd2.foreachPartition(
| x=>{
| while(x.hasNext){
| println(x.next)
| }
| println("============")
| }
| )
(1,fd)
(2,dfh)
(3,sd)
(4,kjh)
(5,nb)
============
(7,kf)
(10,ll)
(11,ll)
(12,ll)
============
(42,kk)
(63,mm)
(99,ll)
(100,jd)
============
scala> val rdd2 = rdd.sortByKey(false,4)
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[34] at sortByKey at <console>:28
scala> rdd2.foreachPartition(
| x=>{
| while(x.hasNext){
| println(x.next)
| }
| println("============")
| }
| )
(100,jd)
(99,ll)
(63,mm)
============
(42,kk)
(12,ll)
(11,ll)
============
(10,ll)
(7,kf)
(5,nb)
============
(4,kjh)
(3,sd)
(2,dfh)
(1,fd)
============
sortBy(func, [ascending], [numTasks])
/**
* Return this RDD sorted by the given key function.
*/
def sortBy[K](
f: (T) => K,
ascending: Boolean = true,
numPartitions: Int = this.partitions.length)
(implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T]
val a = Array(9,2,8,1,5,6,4,7,3)
val rdd = sc.parallelize(a)
rdd.collect
rdd.sortBy(x=>x).collect
rdd.sortBy(x=>x,false,3).collect
scala> val a = Array(9,2,8,1,5,6,4,7,3)
a: Array[Int] = Array(9, 2, 8, 1, 5, 6, 4, 7, 3)
scala> val rdd = sc.parallelize(a)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[35] at parallelize at <console>:28
scala> rdd.collect
res46: Array[Int] = Array(9, 2, 8, 1, 5, 6, 4, 7, 3)
scala> rdd.sortBy(x=>x).collect
res49: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> rdd.sortBy(x=>x,false,3).collect
res50: Array[Int] = Array(9, 8, 7, 6, 5, 4, 3, 2, 1)
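sortBy is most useful when the key function derives the sort key from each element rather than using the element itself. A small sketch with made-up data, sorting (word, count) pairs by their count in descending order:
val wc = sc.parallelize(List(("a",3),("b",1),("c",2)))
// sort by the second field of each tuple, descending
wc.sortBy(_._2, false).collect
// expected: Array((a,3), (c,2), (b,1))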
join(otherDataset, [numTasks])
| join(otherDataset, [numTasks]) | When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through leftOuterJoin, rightOuterJoin, and fullOuterJoin. |
join corresponds to JOIN in SQL; likewise leftOuterJoin corresponds to LEFT OUTER JOIN, rightOuterJoin to RIGHT OUTER JOIN, and fullOuterJoin to FULL OUTER JOIN.
scala> val a = List((1,"a"),(2,"b"),(3,"c"))
a: List[(Int, String)] = List((1,a), (2,b), (3,c))
scala> val rdd1 = sc.parallelize(a)
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[47] at parallelize at <console>:28
scala> val b = List((1,"A"),(2,"B"),(4,"D"))
b: List[(Int, String)] = List((1,A), (2,B), (4,D))
scala> val rdd2 = sc.parallelize(b)
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[48] at parallelize at <console>:28
scala> val rdd = rdd1.join(rdd2)
rdd: org.apache.spark.rdd.RDD[(Int, (String, String))] = MapPartitionsRDD[51] at join at <console>:34
scala> rdd.collect
res51: Array[(Int, (String, String))] = Array((1,(a,A)), (2,(b,B)))
scala> rdd1.leftOuterJoin(rdd2)
res52: org.apache.spark.rdd.RDD[(Int, (String, Option[String]))] = MapPartitionsRDD[54] at leftOuterJoin at <console>:35
scala> rdd1.leftOuterJoin(rdd2).collect
res53: Array[(Int, (String, Option[String]))] = Array((1,(a,Some(A))), (3,(c,None)), (2,(b,Some(B))))
scala> rdd1.rightOuterJoin(rdd2).collect
res54: Array[(Int, (Option[String], String))] = Array((4,(None,D)), (1,(Some(a),A)), (2,(Some(b),B)))
scala> rdd1.fullOuterJoin(rdd2).collect
res55: Array[(Int, (Option[String], Option[String]))] = Array((4,(None,Some(D))), (1,(Some(a),Some(A))), (3,(Some(c),None)), (2,(Some(b),Some(B))))
Besides the single-argument form that takes only otherDataset, join, leftOuterJoin, rightOuterJoin, and fullOuterJoin each also accept the following two parameter lists (a short sketch follows the signatures):
(other: RDD[(K, W)], numPartitions: Int)
(other: RDD[(K, W)], partitioner: Partitioner)
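A minimal sketch of both overloads against the rdd1/rdd2 pair above; the partition counts are arbitrary and chosen only for illustration:
import org.apache.spark.HashPartitioner
// result RDD with an explicit number of partitions
rdd1.join(rdd2, 2).collect
// result RDD partitioned by an explicit Partitioner
rdd1.leftOuterJoin(rdd2, new HashPartitioner(3)).collect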
cogroup(otherDataset, [numTasks])
| cogroup(otherDataset, [numTasks]) | When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable<V>, Iterable<W>)) tuples. This operation is also called groupWith. |
/**
* For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the
* list of values for that key in `this` as well as `other`.
*/
def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
scala> val rdd1 = sc.parallelize(List((1,"a"),(2,"b"),(3,"c"),(1,"z")))
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val rdd2 = sc.parallelize(List((1,"A"),(2,"B"),(2,"C"),(4,"D")))
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[1] at parallelize at <console>:24
scala> val rdd = rdd1.cogroup(rdd2)
rdd: org.apache.spark.rdd.RDD[(Int, (Iterable[String], Iterable[String]))] = MapPartitionsRDD[3] at cogroup at <console>:28
scala> rdd.collect
res0: Array[(Int, (Iterable[String], Iterable[String]))] = Array((4,(CompactBuffer(),CompactBuffer(D))), (1,(CompactBuffer(a, z),CompactBuffer(A))), (3,(CompactBuffer(c),CompactBuffer())), (2,(CompactBuffer(b),CompactBuffer(B, C))))
cartesian(otherDataset)
| cartesian(otherDataset) | When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements). |
Computes the Cartesian product of the elements of the two RDDs.
/**
* Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of
* elements (a, b) where a is in `this` and b is in `other`.
*/
def cartesian[U: ClassTag](other: RDD[U]): RDD[(T, U)]
scala> val rdd1 = sc.parallelize(Array(1,2,3,4,5))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[4] at parallelize at <console>:24
scala> val rdd2 = sc.parallelize(Array("A","B","C"))
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[5] at parallelize at <console>:24
scala> val rdd = rdd1.cartesian(rdd2)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = CartesianRDD[6] at cartesian at <console>:28
scala> rdd.collect
res1: Array[(Int, String)] = Array((1,A), (1,B), (1,C), (2,A), (2,B), (2,C), (3,A), (3,B), (3,C), (4,A), (4,B), (4,C), (5,A), (5,B), (5,C))
pipe(command, [envVars])
| pipe(command, [envVars]) | Pipe each partition of the RDD through a shell command, e.g. a Perl or bash script. RDD elements are written to the process's stdin and lines output to its stdout are returned as an RDD of strings. |
pipe runs an external program once per partition: the elements of that partition are written to the program's standard input, and the lines the program writes to standard output form a new RDD of strings.
/**
* Return an RDD created by piping elements to a forked external process.
*/
def pipe(command: String): RDD[String]
[root@localhost home]# more /home/test.sh
#!/bin/bash
echo "Running shell script"
RESULT=""
while read LINE
do
    if [ -z "${LINE}" ]
    then
        break
    fi
    RESULT=${RESULT}" "${LINE}
done
echo ${RESULT} >> /home/out.txt
echo "========" >> /home/out.txt
val rdd = sc.parallelize(List("ab","cd","ef","gh","ij"),)
rdd.pipe("/home/test.sh").collect
Result:
The rdd has two partitions, so test.sh runs twice; each run prints one "Running shell script" line (which becomes an element of the result RDD), and the partition's elements are appended to /home/out.txt.
scala> val rdd = sc.parallelize(List("ab","cd","ef","gh","ij"),2)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[] at parallelize at <console>:
scala> rdd.pipe("/home/test.sh").collect
res6: Array[String] = Array(Running shell script, Running shell script)
[root@localhost home]# more out.txt
ab cd
========
ef gh ij
========
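pipe also has a variant that accepts environment variables for the forked process, as the [envVars] in the heading suggests. A hedged sketch using the overload that takes a Seq command plus an env Map (the GREETING variable is made up for illustration; test.sh would have to read it itself):
rdd.pipe(Seq("/home/test.sh"), Map("GREETING" -> "hello")).collect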
coalesce(numPartitions)
| coalesce(numPartitions) | Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset. |
Reduces the number of partitions in the RDD; this is useful for running subsequent operations efficiently after filtering away a large fraction of a dataset.
/**
* Return a new RDD that is reduced into `numPartitions` partitions.
*
* This results in a narrow dependency, e.g. if you go from 1000 partitions
* to 100 partitions, there will not be a shuffle, instead each of the 100
* new partitions will claim 10 of the current partitions.
*
* However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
* this may result in your computation taking place on fewer nodes than
* you like (e.g. one node in the case of numPartitions = 1). To avoid this,
* you can pass shuffle = true. This will add a shuffle step, but means the
* current upstream partitions will be executed in parallel (per whatever
* the current partitioning is).
*
* Note: With shuffle = true, you can actually coalesce to a larger number
* of partitions. This is useful if you have a small number of partitions,
* say 100, potentially with a few partitions being abnormally large. Calling
* coalesce(1000, shuffle = true) will result in 1000 partitions with the
* data distributed using a hash partitioner.
*/
def coalesce(numPartitions: Int, shuffle: Boolean = false,
partitionCoalescer: Option[PartitionCoalescer] = Option.empty)
(implicit ord: Ordering[T] = null)
: RDD[T]
scala> val rdd = sc.parallelize(1 to 1000,1000)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[10] at parallelize at <console>:24
scala> val rdd1 = rdd.filter(_%3 == 0)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[11] at filter at <console>:26
scala> rdd1.partitions.length
res7: Int = 1000
scala> rdd1.coalesce(3,false).partitions.length
res9: Int = 3
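As the note in the source comment says, passing shuffle = true also lets coalesce grow the number of partitions. A small sketch (partition counts chosen arbitrarily):
val small = sc.parallelize(1 to 100, 4)
small.coalesce(10).partitions.length        // still 4: without a shuffle, coalesce cannot add partitions
small.coalesce(10, true).partitions.length  // 10: shuffle = true redistributes the data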
repartition(numPartitions)
| repartition(numPartitions) | Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network. |
Internally, this function simply calls coalesce(numPartitions, shuffle = true).
/**
* Return a new RDD that has exactly numPartitions partitions.
* Can increase or decrease the level of parallelism in this RDD. Internally, this uses
* a shuffle to redistribute data.
* If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
* which can avoid performing a shuffle.
*/
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
coalesce(numPartitions, shuffle = true)
}
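A quick illustration of the behavior sketched above, with arbitrary partition counts; for a pure decrease, coalesce avoids the shuffle that repartition always performs:
val r = sc.parallelize(1 to 100, 4)
r.repartition(8).partitions.length   // 8, produced via a full shuffle
r.repartition(2).partitions.length   // 2; coalesce(2) would achieve this without a shuffle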
repartitionAndSortWithinPartitions(partitioner)
| repartitionAndSortWithinPartitions(partitioner) | Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery. |
/**
* Repartition the RDD according to the given partitioner and, within each resulting partition,
* sort records by their keys.
*
* This is more efficient than calling `repartition` and then sorting within each partition
* because it can push the sorting down into the shuffle machinery.
*/
def repartitionAndSortWithinPartitions(partitioner: Partitioner): RDD[(K, V)]
class MyPartitioner(numParts:Int) extends org.apache.spark.Partitioner{
  // total number of partitions this partitioner produces
  override def numPartitions: Int = numParts
  // route each record to partition (key mod numPartitions)
  override def getPartition(key: Any): Int = {
    key.toString.toInt % numPartitions
  }
}
val rdd1 = sc.makeRDD(1 to 10,2)
val rdd2 = sc.makeRDD(1 to 10,2)
val rdd = rdd1.zip(rdd2)
rdd.foreachPartition(
  x=>{
    while(x.hasNext){
      println(x.next)
    }
    println("============")
  }
)
val rdd3 = rdd.repartitionAndSortWithinPartitions(new MyPartitioner(3))
rdd3.foreachPartition(
  x=>{
    while(x.hasNext){
      println(x.next)
    }
    println("============")
  }
)
scala> class MyPartitioner(numParts:Int) extends org.apache.spark.Partitioner{
| override def numPartitions: Int = numParts
| override def getPartition(key: Any): Int = {
| key.toString.toInt%numPartitions
| }
| }
defined class MyPartitioner
scala> val rdd1 = sc.makeRDD(1 to 10,2)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[37] at makeRDD at <console>:24
scala> val rdd2 = sc.makeRDD(1 to 10,2)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[38] at makeRDD at <console>:24
scala> val rdd = rdd1.zip(rdd2)
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ZippedPartitionsRDD2[39] at zip at <console>:28
scala> rdd.foreachPartition(
| x=>{
| while(x.hasNext){
| println(x.next)
| }
| println("============")
| }
| )
(1,1)
(2,2)
(3,3)
(4,4)
(5,5)
============
(6,6)
(7,7)
(8,8)
(9,9)
(10,10)
============
scala> val rdd3 = rdd.repartitionAndSortWithinPartitions(new MyPartitioner(3))
rdd3: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[40] at repartitionAndSortWithinPartitions at <console>:31
scala> rdd3.foreachPartition(
| x=>{
| while(x.hasNext){
| println(x.next)
| }
| println("============")
| }
| )
(3,3)
(6,6)
(9,9)
============
(1,1)
(4,4)
(7,7)
(10,10)
============
(2,2)
(5,5)
(8,8)
============