[root@node1 aas]# pwd
/root/aas
[root@node1 aas]# wget http://archive.apache.org/dist/spark/spark-1.2.1/spark-1.2.1-bin-hadoop2.4.tgz
[root@node1 aas]# tar zxvf spark-1.2.1-bin-hadoop2.4.tgz

Copy conf/spark-env.sh.template to conf/spark-env.sh and add the following settings:

HADOOP_CONF_DIR=/etc/hadoop/conf/
SPARK_EXECUTOR_INSTANCES=
SPARK_EXECUTOR_CORES=
SPARK_EXECUTOR_MEMORY=4G
SPARK_DRIVER_MEMORY=4G
SPARK_YARN_APP_NAME=AAS

Rename conf/log4j.properties.template to conf/log4j.properties and change the log output level to WARN:

log4j.rootCategory=WARN, console

Start spark-shell:

[root@node1 spark-1.2.1-bin-hadoop2.4]# ./bin/spark-shell --master yarn-client
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.1
      /_/

Using Scala version 2.10. (OpenJDK -Bit Server VM, Java 1.7.0_09-icedtea)
Type in expressions to have them evaluated.
Type :help for more information.
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context available as sc.

scala> :help
All commands can be abbreviated, e.g. :he instead of :help.
Those marked with a * have more detailed help, e.g. :help imports.

:cp <path> add a jar or directory to the classpath
:help [command] print this summary or command-specific help
:history [num] show the history (optional num is commands to show)
:h? <string> search the history
:imports [name name ...] show import history, identifying sources of names
:implicits [-v] show the implicits in scope
:javap <path|class> disassemble a file or class name
:load <path> load and interpret a Scala file
:paste enter paste mode: all input up to ctrl-D compiled together
:quit exit the repl
:replay reset execution and replay all previous commands
:reset reset the repl to its initial state, forgetting all session entries
:sh <command line> run a shell command (result is implicitly => List[String])
:silent disable/enable automatic printing of results
:fallback disable/enable advanced repl changes; these fix some issues but may introduce others. This mode will be removed once these fixes stabilize
:type [-v] <expr> display the type of an expression without evaluating it
:warnings show the suppressed warnings from the most recent line which had any

scala>

The sample data mentioned on page 10 of the book cannot be downloaded directly from within China; a proxy is needed. Alternatively, download it from Baidu netdisk: http://pan.baidu.com/s/1pJvjHA7

[root@node1 linkage]# unzip donation.zip
Archive: donation.zip
extracting: block_10.zip
extracting: block_1.zip
extracting: block_2.zip
extracting: block_3.zip
extracting: block_4.zip
extracting: block_5.zip
extracting: block_6.zip
extracting: block_7.zip
extracting: block_8.zip
extracting: block_9.zip
inflating: documentation
inflating: frequencies.csv
[root@node1 linkage]# ll
total
-rw-r--r-- root root Mar block_10.zip
-rw-r--r-- root root Mar block_1.zip
-rw-r--r-- root root Mar block_2.zip
-rw-r--r-- root root Mar block_3.zip
-rw-r--r-- root root Mar block_4.zip
-rw-r--r-- root root Mar block_5.zip
-rw-r--r-- root root Mar block_6.zip
-rw-r--r-- root root Mar block_7.zip
-rw-r--r-- root root Mar block_8.zip
-rw-r--r-- root root Mar block_9.zip
-rwxrw-rw- root root Mar documentation
-rw-r--r-- root root Dec : donation.zip
-rw-r--r-- root root Mar frequencies.csv

Unzip the block files:

[root@node1 linkage]# unzip block_1.zip
Archive: block_1.zip
inflating: block_1.csv
[root@node1 linkage]# unzip block_2.zip
Archive: block_2.zip
inflating: block_2.csv
[root@node1 linkage]# unzip block_3.zip
Archive: block_3.zip
inflating: block_3.csv
[root@node1 linkage]# unzip block_4.zip
Archive: block_4.zip
inflating: block_4.csv
[root@node1 linkage]# unzip block_5.zip
Archive: block_5.zip
inflating: block_5.csv
[root@node1 linkage]# unzip block_6.zip
Archive: block_6.zip
inflating: block_6.csv
[root@node1 linkage]# unzip block_7.zip
Archive: block_7.zip
inflating: block_7.csv
[root@node1 linkage]# unzip block_8.zip
Archive: block_8.zip
inflating: block_8.csv
[root@node1 linkage]# unzip block_9.zip
Archive: block_9.zip
inflating: block_9.csv
[root@node1 linkage]# unzip block_10.zip
Archive: block_10.zip
inflating: block_10.csv
[root@node1 linkage]# ll
total
-rw-r--r-- root root Mar block_10.csv
-rw-r--r-- root root Mar block_10.zip
-rw-r--r-- root root Mar block_1.csv
-rw-r--r-- root root Mar block_1.zip
-rw-r--r-- root root Mar block_2.csv
-rw-r--r-- root root Mar block_2.zip
-rw-r--r-- root root Mar block_3.csv
-rw-r--r-- root root Mar block_3.zip
-rw-r--r-- root root Mar block_4.csv
-rw-r--r-- root root Mar block_4.zip
-rw-r--r-- root root Mar block_5.csv
-rw-r--r-- root root Mar block_5.zip
-rw-r--r-- root root Mar block_6.csv
-rw-r--r-- root root Mar block_6.zip
-rw-r--r-- root root Mar block_7.csv
-rw-r--r-- root root Mar block_7.zip
-rw-r--r-- root root Mar block_8.csv
-rw-r--r-- root root Mar block_8.zip
-rw-r--r-- root root Mar block_9.csv
-rw-r--r-- root root Mar block_9.zip
-rwxrw-rw- root root Mar documentation
-rw-r--r-- root root Dec : donation.zip
-rw-r--r-- root root Mar frequencies.csv

Put the data on HDFS:

[root@node1 linkage]# hdfs dfs -mkdir linkage
[root@node1 linkage]# hdfs dfs -put block_*.csv linkage

Run the example code:

scala> val rawblocks = sc.textFile("/root/aas/ch02/linkage/frequencies.csv")
rawblocks: org.apache.spark.rdd.RDD[String] = /root/aas/ch02/linkage/frequencies.csv MappedRDD[] at textFile at <console>:12

scala> rawblocks.first
res3: String = "id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match"

scala> val head = rawblocks.take(10)
head: Array[String] = Array("id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match", 37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE, 39086,47614,1,?,1,?,1,1,1,1,1,TRUE, 70031,70237,1,?,1,?,1,1,1,1,1,TRUE, 84795,97439,1,?,1,?,1,1,1,1,1,TRUE, 36950,42116,1,?,1,1,1,1,1,1,1,TRUE, 42413,48491,1,?,1,?,1,1,1,1,1,TRUE, 25965,64753,1,?,1,?,1,1,1,1,1,TRUE, 49451,90407,1,?,1,?,1,1,1,1,0,TRUE, 39932,40902,1,?,1,?,1,1,1,1,1,TRUE)

scala> head.length
res4: Int = 10

scala> head.foreach(println)
"id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match"
37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE
39086,47614,1,?,1,?,1,1,1,1,1,TRUE
70031,70237,1,?,1,?,1,1,1,1,1,TRUE
84795,97439,1,?,1,?,1,1,1,1,1,TRUE
36950,42116,1,?,1,1,1,1,1,1,1,TRUE
42413,48491,1,?,1,?,1,1,1,1,1,TRUE
25965,64753,1,?,1,?,1,1,1,1,1,TRUE
49451,90407,1,?,1,?,1,1,1,1,0,TRUE
39932,40902,1,?,1,?,1,1,1,1,1,TRUE

scala> head.foreach(println)
"id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match"
37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE
39086,47614,1,?,1,?,1,1,1,1,1,TRUE
70031,70237,1,?,1,?,1,1,1,1,1,TRUE
84795,97439,1,?,1,?,1,1,1,1,1,TRUE
36950,42116,1,?,1,1,1,1,1,1,1,TRUE
42413,48491,1,?,1,?,1,1,1,1,1,TRUE
25965,64753,1,?,1,?,1,1,1,1,1,TRUE
49451,90407,1,?,1,?,1,1,1,1,0,TRUE
39932,40902,1,?,1,?,1,1,1,1,1,TRUE

scala> def isHeader(line: String) = line.contains("id_1")
isHeader: (line: String)Boolean

scala> head.filter(isHeader).foreach(println)
"id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match"

scala> head.filterNot(isHeader).foreach(println)
37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE
39086,47614,1,?,1,?,1,1,1,1,1,TRUE
70031,70237,1,?,1,?,1,1,1,1,1,TRUE
84795,97439,1,?,1,?,1,1,1,1,1,TRUE
36950,42116,1,?,1,1,1,1,1,1,1,TRUE
42413,48491,1,?,1,?,1,1,1,1,1,TRUE
25965,64753,1,?,1,?,1,1,1,1,1,TRUE
49451,90407,1,?,1,?,1,1,1,1,0,TRUE
39932,40902,1,?,1,?,1,1,1,1,1,TRUE

scala> head.filter(x => !isHeader(x)).length
res8: Int = 9

scala> val noheader = rawblocks.filter(x => !isHeader(x))
noheader: org.apache.spark.rdd.RDD[String] = FilteredRDD[4] at filter at <console>:16

scala> noheader.first
res9: String = 37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE

 

scala> def toDouble(s: String) = {
| if ("?" .equals(s)) Double.NaN else s.toDouble
| }
toDouble: (s: String)Double


scala> def parse(line: String) = {
| val pieces = line.split(',')
| val id1 = pieces(0).toInt
| val id2 = pieces(1).toInt
| val scores = pieces.slice(2, 11).map(toDouble)
| val matched = pieces(11).toBoolean
| (id1, id2, scores, matched)
| }
parse: (line: String)(Int, Int, Array[Double], Boolean)

scala> val line =head(5)
line: String = 36950,42116,1,?,1,1,1,1,1,1,1,TRUE

scala> val tup = parse(line)
tup: (Int, Int, Array[Double], Boolean) = (36950,42116,Array(1.0, NaN, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0),true)

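The redefined parse below returns a MatchData, but this transcript never shows that case class being declared. Following the book, define it in the shell first:

case class MatchData(id1: Int, id2: Int, scores: Array[Double], matched: Boolean)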
scala> def parse(line: String) = {
| val pieces = line.split(',')
| val id1 = pieces(0).toInt
| val id2 = pieces(1).toInt
| val scores = pieces.slice(2, 11).map(toDouble)
| val matched = pieces(11).toBoolean
| MatchData(id1, id2, scores, matched)
| }
parse: (line: String)MatchData

scala> val md = parse(line)
md: MatchData = MatchData(36950,42116,[D@3c935226,true)

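The [D@3c935226 in the output is just the JVM's default toString for a Double array, not a parsing problem. To check the actual values you can print the array yourself, for example (not part of the original transcript):

md.scores.mkString(",")   // prints something like 1.0,NaN,1.0,1.0,1.0,1.0,1.0,1.0,1.0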
scala> val mds = head.filter(x => !isHeader(x)).map(x => parse(x))
mds: Array[MatchData] = Array(MatchData(37291,53113,[D@3bc5ac5,true), MatchData(39086,47614,[D@42eb3d6d,true), MatchData(70031,70237,[D@620de16d,true), MatchData(84795,97439,[D@7d4aed65,true), MatchData(36950,42116,[D@4227c226,true), MatchData(42413,48491,[D@403b6eb8,true), MatchData(25965,64753,[D@7de212f9,true), MatchData(49451,90407,[D@54bda00,true), MatchData(39932,40902,[D@36d538b7,true))

scala> val parsed = noheader.map(line => parse(line))
parsed: org.apache.spark.rdd.RDD[MatchData] = MappedRDD[5] at map at <console>:28

scala> parsed.cache()
res10: parsed.type = MappedRDD[5] at map at <console>:28


Call parsed.count() twice:

scala> parsed.count()
res11: Long =

scala> parsed.count()
res12: Long =

The second call is noticeably faster, because after the first call the parsed RDD is already cached in memory.
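If you want to see the difference concretely, you can time the two counts in the shell. A minimal sketch (the timed helper below is not part of the book's code, and the actual numbers depend entirely on your cluster):

// Run a block of code and return its result together with the elapsed seconds.
def timed[T](body: => T): (T, Double) = {
  val start = System.nanoTime()
  val result = body
  (result, (System.nanoTime() - start) / 1e9)
}

val (c1, t1) = timed(parsed.count())  // first count: parses every block and fills the cache
val (c2, t2) = timed(parsed.count())  // second count: served from the in-memory cache
println(s"first: $t1 s, second: $t2 s")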

Looking at the Storage page http://node1:8088/proxy/application_1448538943757_0008/storage/, the cached data occupies 683.4 MB. Next, change the StorageLevel to MEMORY_ONLY_SER and check the memory usage again.

First call unpersist(); the Storage page then shows no cached data.

Then call parsed.persist(org.apache.spark.storage.StorageLevel.MEMORY_ONLY_SER):

scala> parsed.unpersist()
res13: parsed.type = MappedRDD[] at map at <console>:

scala> import org.apache.spark.storage.StorageLevel
import org.apache.spark.storage.StorageLevel

scala> parsed.persist(StorageLevel.MEMORY_ONLY_SER)
res18: parsed.type = MappedRDD[] at map at <console>:

scala> parsed.count()
res19: Long =

scala> parsed.count()

With MEMORY_ONLY_SER the cache takes 580.1 MB, roughly 100 MB less than the 683.4 MB used with the default MEMORY_ONLY level.

2.8 Aggregation

scala> val grouped = mds.groupBy(md => md.matched)
grouped: scala.collection.immutable.Map[Boolean,Array[MatchData]] = Map(true -> Array(MatchData(,,[D@3bc5ac5,true), MatchData(,,[D@42eb3d6d,true), MatchData(,,[D@620de16d,true), MatchData(,,[D@7d4aed65,true), MatchData(,,[D@4227c226,true), MatchData(,,[D@403b6eb8,true), MatchData(,,[D@7de212f9,true), MatchData(,,[D@54bda00,true), MatchData(,,[D@36d538b7,true)))

scala> grouped.mapValues(x => x.size).foreach(println)
(true,)
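The grouped value above was computed on the small local head array. The same grouping works on the full parsed RDD via RDD.groupBy, but it shuffles every record, which is why the next section switches to countByValue when only counts are needed. A quick sketch, not part of the original transcript:

// Group the whole RDD by match status; fine for illustration, but it shuffles
// every record, so prefer countByValue when you only need the counts.
val groupedRDD = parsed.groupBy(md => md.matched)
groupedRDD.mapValues(_.size).collect().foreach(println)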

2.9 Creating Histograms

scala> val matchCounts = parsed.map(md => md.matched).countByValue()
matchCounts: scala.collection.Map[Boolean,Long] = Map(true -> , false -> )

scala> val matchCountsSeq = matchCounts.toSeq
matchCountsSeq: Seq[(Boolean, Long)] = ArrayBuffer((true,), (false,))

scala> matchCountsSeq.sortBy(_._1).foreach(println)
(false,)
(true,)

scala> matchCountsSeq.sortBy(_._2).reverse.foreach(println)
(false,5728201)
(true,20931)

2.10 Summary Statistics for Continuous Variables

scala> parsed.map(md => md.scores(0)).stats()
res27: org.apache.spark.util.StatCounter = (count: , mean: NaN, stdev: NaN, max: NaN, min: NaN)

scala> import java.lang.Double.isNaN
import java.lang.Double.isNaN

scala> parsed.map(md => md.scores(0)).filter(!isNaN(_)).stats()
res28: org.apache.spark.util.StatCounter = (count: 5748125, mean: 0.712902, stdev: 0.388758, max: 1.000000, min: 0.000000)

2.11 Creating Reusable Code for Computing Summary Statistics

Create a new file at /root/aas/ch02/StatsWithMissing.scala with the following code:

import org.apache.spark.util.StatCounter

class NAStatCounter extends Serializable {
  val stats: StatCounter = new StatCounter()
  var missing: Long = 0

  def add(x: Double): NAStatCounter = {
    if (java.lang.Double.isNaN(x)) {
      missing += 1
    } else {
      stats.merge(x)
    }
    this
  }

  def merge(other: NAStatCounter): NAStatCounter = {
    stats.merge(other.stats)
    missing += other.missing
    this
  }

  override def toString = {
    "stats: " + stats.toString + " NaN: " + missing
  }
}

object NAStatCounter extends Serializable {
  def apply(x: Double) = new NAStatCounter().add(x)
}

Note: the NAStatCounter(d) calls used below can look odd at first. Because the companion object defines an apply method, NAStatCounter(d) is really an implicit call to NAStatCounter.apply(d), which creates a new counter and immediately adds d to it.
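The statsWithMissing helper used in section 2.12 below does not appear anywhere in this transcript. In the book it lives in the same StatsWithMissing.scala file (brought into the shell with :load /root/aas/ch02/StatsWithMissing.scala) and looks roughly like this, computing one NAStatCounter per score column in a single pass over each partition:

import org.apache.spark.rdd.RDD

def statsWithMissing(rdd: RDD[Array[Double]]): Array[NAStatCounter] = {
  val nastats = rdd.mapPartitions((iter: Iterator[Array[Double]]) => {
    // Seed one counter per column from the first row of the partition...
    val nas: Array[NAStatCounter] = iter.next().map(d => NAStatCounter(d))
    // ...then fold the remaining rows of the partition into those counters.
    iter.foreach(arr => {
      nas.zip(arr).foreach { case (n, d) => n.add(d) }
    })
    Iterator(nas)
  })
  // Merge the per-partition counter arrays into one array of column statistics.
  nastats.reduce((n1, n2) => {
    n1.zip(n2).map { case (a, b) => a.merge(b) }
  })
}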

scala> val nasRDD = parsed.map(md => {md.scores.map(d => NAStatCounter(d))})
nasRDD: org.apache.spark.rdd.RDD[Array[NAStatCounter]] = MappedRDD[] at map at <console>:

scala> val nas1 = Array(1.0, Double.NaN).map(d => NAStatCounter(d))
nas1: Array[NAStatCounter] = Array(stats: (count: , mean: 1.000000, stdev: 0.000000, max: 1.000000, min: 1.000000) NaN: , stats: (count: , mean: 0.000000, stdev: NaN, max: -Infinity, min: Infinity) NaN: )

scala> val nas2 = Array(Double.NaN, 2.0).map(d => NAStatCounter(d))
nas2: Array[NAStatCounter] = Array(stats: (count: , mean: 0.000000, stdev: NaN, max: -Infinity, min: Infinity) NaN: , stats: (count: , mean: 2.000000, stdev: 0.000000, max: 2.000000, min: 2.000000) NaN: )

scala> val merged = nas1.zip(nas2).map(p => p._1.merge(p._2))
merged: Array[NAStatCounter] = Array(stats: (count: , mean: 1.000000, stdev: 0.000000, max: 1.000000, min: 1.000000) NaN: , stats: (count: , mean: 2.000000, stdev: 0.000000, max: 2.000000, min: 2.000000) NaN: )

scala> val reduced = nasRDD.reduce((n1, n2) => {
| n1.zip(n2).map { case (a, b) => a.merge(b) }
| })
reduced: Array[NAStatCounter] = Array(stats: (count: , mean: 0.712902, stdev: 0.388758, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.900018, stdev: 0.271316, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.315628, stdev: 0.334234, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.318413, stdev: 0.368492, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.955001, stdev: 0.207301, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.224465, stdev: 0.417230, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.488855, stdev: 0.499876, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.222749, stdev: 0.416091, max: 1.000000, min: ....

scala> reduced.foreach(println)

2.12 Simple Variable Selection and Scoring

scala> val statsm = statsWithMissing(parsed.filter(_.matched).map(_.scores))
statsm: Array[NAStatCounter] = Array(stats: (count: , mean: 0.997316, stdev: 0.036506, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.989890, stdev: 0.082489, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.997015, stdev: 0.043118, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.969370, stdev: 0.153291, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.987292, stdev: 0.112013, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.997085, stdev: 0.053914, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.997945, stdev: 0.045285, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.996129, stdev: 0.062097, max: 1.000000, min: 0.000000) NaN: , stats: (cou...

scala> val statsn = statsWithMissing(parsed.filter(!_.matched).map(_.scores))
statsn: Array[NAStatCounter] = Array(stats: (count: , mean: 0.711863, stdev: 0.389081, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.898847, stdev: 0.272720, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.313138, stdev: 0.332281, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.162955, stdev: 0.192975, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.954883, stdev: 0.207560, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.221643, stdev: 0.415352, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.486995, stdev: 0.499831, max: 1.000000, min: 0.000000) NaN: , stats: (count: , mean: 0.219923, stdev: 0.414194, max: 1.000000, min: 0.00...
scala> statsm.zip(statsn).map { case(m, n) =>
| (m.missing + n.missing, m.stats.mean - n.stats.mean)
| }.foreach(println)
(,0.285452905746686)
(,0.09104268062279908)
(,0.6838772482597568)
(,0.8064147192926266)
(,0.03240818525033473)
(,0.7754423117834042)
(,0.5109496938298719)
(,0.7762059675300523)
(,0.9563812499852178)

scala> def naz(d: Double) = if (Double.NaN.equals(d)) 0.0 else d
naz: (d: Double)Double

scala> case class Scored(md: MatchData, score: Double)
defined class Scored

scala> val ct = parsed.map(md => {
| val score = Array(, , , , ).map(i => naz(md.scores(i))).sum
| Scored(md, score)
| })
ct: org.apache.spark.rdd.RDD[Scored] = MappedRDD[] at map at <console>:

scala> ct.filter(s => s.score >= 4.0).map(s => s.md.matched).countByValue()
res34: scala.collection.Map[Boolean,Long] = Map(true -> , false -> )

scala> ct.filter(s => s.score >= 2.0).map(s => s.md.matched).countByValue()
res35: scala.collection.Map[Boolean,Long] = Map(true -> , false -> )
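The field indices inside Array(, , , , ) above were stripped along with the other numbers on this page. In the book the score sums the five features with large mean differences and few missing values (cmp_lname_c1, cmp_bd, cmp_bm, cmp_by and cmp_plz, i.e. score indices 2, 5, 6, 7 and 8); assuming that choice, the scoring step reads:

// Sum the selected per-field similarity scores, turning missing (NaN) values
// into 0.0 with naz. Indices follow the book's feature choice:
// cmp_lname_c1 (2), cmp_bd (5), cmp_bm (6), cmp_by (7), cmp_plz (8).
val ct = parsed.map(md => {
  val score = Array(2, 5, 6, 7, 8).map(i => naz(md.scores(i))).sum
  Scored(md, score)
})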
