Logs => Flume => Kafka => Spark Streaming => HBase

Log generation (mock access-log producer)

#coding=UTF-8
import random
import time

url_paths = [
    "class/112.html",
    "class/128.html",
    "learn/821",
    "class/145.html",
    "class/146.html",
    "class/131.html",
    "class/130.html",
    "course/list"
]

ip_slices = [132, 156, 124, 10, 29, 167, 143, 187, 30, 46, 55, 63, 72, 87, 98, 168]

http_referers = [
    "http://www.baidu.com/s?wd={query}",
    "http://www.sogou.com/web?query={query}",
    "https://search.yahoo.com/search?p={query}",
    "http://www.bing.com/search?q={query}"
]

search_keyword = ["Spark SQL实战", "Hadoop基础", "Storm实战", "Spark Streaming实战", "大数据面试"]

# The status code values were lost in the original listing; 200/404/500 are assumed here
# (the sample line parsed by StatStreamingApp below shows a 500).
status_codes = ["200", "404", "500"]

def sample_url():
    return random.sample(url_paths, 1)[0]

def sample_ip():
    parts = random.sample(ip_slices, 4)
    return ".".join([str(item) for item in parts])

def sample_status_code():
    return random.sample(status_codes, 1)[0]

def sample_referer():
    # roughly 80% of the requests carry no referer
    if random.uniform(0, 1) > 0.2:
        return "-"

    refer_str = random.sample(http_referers, 1)
    query_str = random.sample(search_keyword, 1)
    return refer_str[0].format(query=query_str[0])

def generate_log(count=3):
    time_str = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    f = open("/home/hadoop/data/project/logs/access.log", "w+")
    while count >= 1:
        # Tab-separated fields, no brackets around the timestamp and no quotes around the
        # referer, so that StatStreamingApp/DateUtils can parse the fields directly.
        query_log = "{ip}\t{local_time}\t\"GET /{url} HTTP/1.1\"\t{status_code}\t{referer}".format(
            ip=sample_ip(), local_time=time_str, url=sample_url(),
            status_code=sample_status_code(), referer=sample_referer())
        print query_log
        f.write(query_log + "\n")
        count = count - 1
    f.close()

if __name__ == '__main__':
    # print sample_ip()
    # print sample_url()
    generate_log(10)

Flume: shipping the log into Kafka

exec-memory-kafka.conf

# exec-memory-kafka

exec-memory-kafka.sources = exec-source
exec-memory-kafka.channels = memory-channel
exec-memory-kafka.sinks = kafka-sink

exec-memory-kafka.sources.exec-source.type = exec
exec-memory-kafka.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-kafka.sources.exec-source.shell = /bin/sh -c
exec-memory-kafka.sources.exec-source.channels = memory-channel

exec-memory-kafka.channels.memory-channel.type = memory

exec-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
exec-memory-kafka.sinks.kafka-sink.topic = streamingtopic
exec-memory-kafka.sinks.kafka-sink.brokerList = hadoop:9092
exec-memory-kafka.sinks.kafka-sink.batchSize = 5
exec-memory-kafka.sinks.kafka-sink.requiredAcks = 1
exec-memory-kafka.sinks.kafka-sink.channel = memory-channel

flume-ng agent \
--name exec-memory-kafka \
--conf $FLUME_HOME/conf \
--conf-file /home/hadoop/data/project/exec-memory-kafka.conf \
-Dflume.root.logger=INFO,console

Start a Kafka console consumer to verify that messages arrive: kafka-console-consumer.sh --zookeeper hadoop:2181 --topic streamingtopic --from-beginning

Start Hadoop (HDFS): start-dfs.sh

Start HBase: start-hbase.sh

Enter the HBase shell with hbase shell, then list the tables with list.
HBase table design:
create 'lin_course_clickcount','info'
create 'lin_course_search_clickcount','info'
Scan a table: scan 'lin_course_clickcount'
Row key design (sketched below):
day_courseid
day_search_courseid
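The row keys are plain string concatenations, so all counters for one day sort together. A small illustrative Scala sketch of the two formats (the values are made up; the real keys are built in StatStreamingApp further below):

// Illustrative only: the key formats that StatStreamingApp writes to HBase.
val day = "20190606"           // yyyyMMdd prefix taken from the parsed log time
val courseId = 131
val host = "www.baidu.com"     // referer host (search engine)

val clickRowKey = day + "_" + courseId                // lin_course_clickcount, e.g. 20190606_131
val searchRowKey = day + "_" + host + "_" + courseId  // lin_course_search_clickcount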

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.lin.spark</groupId>
    <artifactId>SparkStreaming</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <kafka.version>0.9.0.0</kafka.version>
        <spark.version>2.2.0</spark.version>
        <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
        <hbase.version>1.2.0-cdh5.7.0</hbase.version>
    </properties>

    <!-- Add the Cloudera repository for the CDH artifacts -->
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- Kafka dependency -->
        <!--
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>${kafka.version}</version>
        </dependency>
        -->

        <!-- Hadoop dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- HBase dependencies -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>

        <!-- Spark Streaming dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- Spark Streaming + Flume integration dependencies -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume-sink_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.5</version>
        </dependency>

        <!-- Spark SQL dependency -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.module</groupId>
            <artifactId>jackson-module-scala_2.11</artifactId>
            <version>2.6.5</version>
        </dependency>

        <dependency>
            <groupId>net.jpountz.lz4</groupId>
            <artifactId>lz4</artifactId>
            <version>1.3.0</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flume.flume-ng-clients</groupId>
            <artifactId>flume-ng-log4jappender</artifactId>
            <version>1.6.0</version>
        </dependency>
    </dependencies>

    <build>
        <!--
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        -->
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <args>
                        <arg>-target:jvm-1.5</arg>
                    </args>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <configuration>
                    <downloadSources>true</downloadSources>
                    <buildcommands>
                        <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
                    </buildcommands>
                    <additionalProjectnatures>
                        <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
                    </additionalProjectnatures>
                    <classpathContainers>
                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
                        <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
                    </classpathContainers>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <reporting>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                </configuration>
            </plugin>
        </plugins>
    </reporting>

</project>
package com.lin.spark.streaming.project.spark

import com.lin.spark.streaming.project.dao.{CourseClickCountDAO, CourseSearchClickCountDAO}
import com.lin.spark.streaming.project.domain.{ClickLog, CourseClickCount, CourseSearchClickCount}
import com.lin.spark.streaming.project.utils.DateUtils
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable.ListBuffer

/**
  * Created by Administrator on 2019/6/6.
  */
object StatStreamingApp {
  def main(args: Array[String]): Unit = {

    if (args.length != 4) {
      System.err.println("Usage: StatStreamingApp <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }
    // e.g. hadoop:2181 test streamingtopic 2
    val Array(zkQuorum, group, topics, numThreads) = args
    val conf = new SparkConf().setAppName("KafkaUtil").setMaster("local[4]")
    val ssc = new StreamingContext(conf, Seconds(60))

    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap

    // Receiver-based Kafka stream; the message value is the raw log line
    val clickLog = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)

    val cleanData = clickLog.map(line => {
      val infos = line.split("\t")
      // 29.98.156.124  2019-06-06 05:37:01  "GET /class/131.html HTTP/1.1"  500  http://www.baidu.com/s?wd=Storm实战
      // case class ClickLog(ip:String, time:String, courseId:Int, statusCode:Int, referer:String)
      var courseId = 0
      val url = infos(2).split(" ")(1)
      if (url.startsWith("/class")) {
        val urlHTML = url.split("/")(2)
        courseId = urlHTML.substring(0, urlHTML.lastIndexOf(".")).toInt
      }
      ClickLog(infos(0), DateUtils.parseToMinute(infos(1)), courseId, infos(3).toInt, infos(4))
    }).filter(clickLog => clickLog.courseId != 0)

    // Persist course click counts
    cleanData.map(log => {
      (log.time.substring(0, 8) + "_" + log.courseId, 1)
    }).reduceByKey(_ + _).foreachRDD(rdd => {
      rdd.foreachPartition(partitionRecords => {
        val list = new ListBuffer[CourseClickCount]
        partitionRecords.foreach(pair => {
          list.append(CourseClickCount(pair._1, pair._2))
        })
        CourseClickCountDAO.save(list)
      })
    })

    // Persist click counts per search-engine referer
    cleanData.map(log => {
      val referer = log.referer.replaceAll("//", "/")
      val splits = referer.split("/")
      var host = ""
      if (splits.length > 2) {
        host = splits(1)
      }
      (host, log.courseId, log.time)
    }).filter(x => {
      x._1 != ""
    }).map(searchLog => {
      (searchLog._3.substring(0, 8) + "_" + searchLog._1 + "_" + searchLog._2, 1)
    }).reduceByKey(_ + _).foreachRDD(rdd => {
      rdd.foreachPartition(partitionRecords => {
        val list = new ListBuffer[CourseSearchClickCount]
        partitionRecords.foreach(pair => {
          list.append(CourseSearchClickCount(pair._1, pair._2))
        })
        CourseSearchClickCountDAO.save(list)
      })
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
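The job above uses the receiver-based KafkaUtils.createStream, which is why it takes a ZooKeeper quorum and a thread count. The same spark-streaming-kafka-0-8 dependency also provides the direct (receiver-less) API; a sketch of the equivalent stream is below, where ssc and topics come from the surrounding main() and the broker address is assumed from the Flume sink configuration (hadoop:9092):

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Direct stream: reads from the brokers without a receiver; offsets are tracked by Spark.
val kafkaParams = Map[String, String]("metadata.broker.list" -> "hadoop:9092")
val topicsSet = topics.split(",").toSet
val directLines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet).map(_._2)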
package com.lin.spark.streaming.project.utils

import java.util.Date

import org.apache.commons.lang3.time.FastDateFormat

/**
  * Created by Administrator on 2019/6/6.
  */
object DateUtils {

  // Format of the timestamp in the raw log line
  val YYYYMMDDHHMMSS_FORMAT = FastDateFormat.getInstance("yyyy-MM-dd HH:mm:ss")
  // Target format used to build the HBase row keys (yyyyMMddHHmmss)
  val TARGET_FORMAT = FastDateFormat.getInstance("yyyyMMddHHmmss")

  def getTime(time: String) = {
    YYYYMMDDHHMMSS_FORMAT.parse(time).getTime
  }

  def parseToMinute(time: String) = {
    TARGET_FORMAT.format(new Date(getTime(time)))
  }

  def main(args: Array[String]): Unit = {
    println(parseToMinute("2017-10-22 14:46:01"))
  }
}
package com.lin.spark.streaming.project.domain

case class ClickLog(ip: String, time: String, courseId: Int, statusCode: Int, referer: String)
package com.lin.spark.streaming.project.domain

/**
  * Created by Administrator on 2019/6/7.
  */
case class CourseClickCount(day_course: String, click_course: Long)
package com.lin.spark.streaming.project.domain

/**
  * Created by Administrator on 2019/6/7.
  */
case class CourseSearchClickCount(day_search_course: String, click_count: Long)
package com.lin.spark.streaming.project.dao

import com.lin.spark.project.utils.HBaseUtils
import com.lin.spark.streaming.project.domain.CourseClickCount
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.util.Bytes

import scala.collection.mutable.ListBuffer

/**
  * Created by Administrator on 2019/6/7.
  */
object CourseClickCountDAO {

  val tableName = "lin_course_clickcount"
  val cf = "info"
  val qualifier = "click_count"

  def save(list: ListBuffer[CourseClickCount]): Unit = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    for (ele <- list) {
      // Atomic counter increment keyed by day_courseid
      table.incrementColumnValue(Bytes.toBytes(ele.day_course),
        Bytes.toBytes(cf),
        Bytes.toBytes(qualifier),
        ele.click_course)
    }
  }

  def count(day_course: String): Long = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    val get = new Get(Bytes.toBytes(day_course))
    val value = table.get(get).getValue(cf.getBytes, qualifier.getBytes)
    if (value == null) {
      0L
    } else {
      Bytes.toLong(value)
    }
  }

  def main(args: Array[String]): Unit = {
    val list = new ListBuffer[CourseClickCount]
    list.append(CourseClickCount("20190606", 99))
    list.append(CourseClickCount("20190608", 89))
    list.append(CourseClickCount("20190609", 100))
    // save(list)
    println(count("20190609"))
  }
}
package com.lin.spark.streaming.project.dao

import com.lin.spark.project.utils.HBaseUtils
import com.lin.spark.streaming.project.domain.CourseSearchClickCount
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.util.Bytes

import scala.collection.mutable.ListBuffer

/**
  * Created by Administrator on 2019/6/7.
  */
object CourseSearchClickCountDAO {

  val tableName = "lin_course_search_clickcount"
  val cf = "info"
  val qualifier = "click_count"

  def save(list: ListBuffer[CourseSearchClickCount]): Unit = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    for (ele <- list) {
      // Atomic counter increment keyed by day_search_courseid
      table.incrementColumnValue(Bytes.toBytes(ele.day_search_course),
        Bytes.toBytes(cf),
        Bytes.toBytes(qualifier),
        ele.click_count)
    }
  }

  def count(day_course: String): Long = {
    val table = HBaseUtils.getInstance().getTable(tableName)
    val get = new Get(Bytes.toBytes(day_course))
    val value = table.get(get).getValue(cf.getBytes, qualifier.getBytes)
    if (value == null) {
      0L
    } else {
      Bytes.toLong(value)
    }
  }

  def main(args: Array[String]): Unit = {
    val list = new ListBuffer[CourseSearchClickCount]
    list.append(CourseSearchClickCount("20190606_www.baidu.com_99", 99))
    list.append(CourseSearchClickCount("20190608_www.bing.com_89", 89))
    list.append(CourseSearchClickCount("20190609_www.csdn.net_100", 100))
    save(list)
    // println(count("20190609"))
  }
}
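Both DAO objects import com.lin.spark.project.utils.HBaseUtils, which is not listed in this post. Below is a minimal sketch of a helper with the same getInstance()/getTable() surface; the ZooKeeper host and port are assumptions taken from the rest of this setup (hadoop, 2181), and the original class may well look different.

package com.lin.spark.project.utils

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, Table}

// Minimal sketch, not the original class: provides just the calls used by the DAOs above.
object HBaseUtils {

  private val conf = HBaseConfiguration.create()
  // Assumed ZooKeeper settings, matching the hadoop:2181 quorum used elsewhere in this post.
  conf.set("hbase.zookeeper.quorum", "hadoop")
  conf.set("hbase.zookeeper.property.clientPort", "2181")

  // One shared, heavyweight connection for the whole JVM.
  private lazy val connection: Connection = ConnectionFactory.createConnection(conf)

  // Keeps the Java-singleton-style call sites (HBaseUtils.getInstance().getTable(...)) compiling.
  def getInstance(): HBaseUtils.type = this

  def getTable(tableName: String): Table =
    connection.getTable(TableName.valueOf(tableName))
}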

日志=>flume=>kafka=>spark streaming=>hbase的更多相关文章

  1. flume+kafka+spark streaming整合

    1.安装好flume2.安装好kafka3.安装好spark4.流程说明: 日志文件->flume->kafka->spark streaming flume输入:文件 flume输 ...

  2. 基于Kafka+Spark Streaming+HBase实时点击流案例

    背景 Kafka实时记录从数据采集工具Flume或业务系统实时接口收集数据,并作为消息缓冲组件为上游实时计算框架提供可靠数据支撑,Spark 1.3版本后支持两种整合Kafka机制(Receiver- ...

  3. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(十)安装hadoop2.9.0搭建HA

    如何搭建配置centos虚拟机请参考<Kafka:ZK+Kafka+Spark Streaming集群环境搭建(一)VMW安装四台CentOS,并实现本机与它们能交互,虚拟机内部实现可以上网.& ...

  4. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(二十一)NIFI1.7.1安装

    一.nifi基本配置 1. 修改各节点主机名,修改/etc/hosts文件内容. 192.168.0.120 master 192.168.0.121 slave1 192.168.0.122 sla ...

  5. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(十一)定制一个arvo格式文件发送到kafka的topic,通过Structured Streaming读取kafka的数据

    将arvo格式数据发送到kafka的topic 第一步:定制avro schema: { "type": "record", "name": ...

  6. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(九)安装kafka_2.11-1.1.0

    如何搭建配置centos虚拟机请参考<Kafka:ZK+Kafka+Spark Streaming集群环境搭建(一)VMW安装四台CentOS,并实现本机与它们能交互,虚拟机内部实现可以上网.& ...

  7. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(八)安装zookeeper-3.4.12

    如何搭建配置centos虚拟机请参考<Kafka:ZK+Kafka+Spark Streaming集群环境搭建(一)VMW安装四台CentOS,并实现本机与它们能交互,虚拟机内部实现可以上网.& ...

  8. Kafka:ZK+Kafka+Spark Streaming集群环境搭建(三)安装spark2.2.1

    如何搭建配置centos虚拟机请参考<Kafka:ZK+Kafka+Spark Streaming集群环境搭建(一)VMW安装四台CentOS,并实现本机与它们能交互,虚拟机内部实现可以上网.& ...

  9. demo2 Kafka+Spark Streaming+Redis实时计算整合实践 foreachRDD输出到redis

    基于Spark通用计算平台,可以很好地扩展各种计算类型的应用,尤其是Spark提供了内建的计算库支持,像Spark Streaming.Spark SQL.MLlib.GraphX,这些内建库都提供了 ...

随机推荐

  1. java.net.ProtocolException: Exceeded stated content-length of: '13824' bytes

    转自:https://blog.csdn.net/z69183787/article/details/18967927 1. 原因: 因为weblogic会向response中写东西造成的,解决方式是 ...

  2. 115-基于TI TMS320DM6467T Camera Link 机器视觉 智能图像分析平台

    基于TI TMS320DM6467无操作系统Camera Link智能图像分析平台 1.板卡概述 该板卡是我公司推出的一款具有超高可靠性.效率最大化.无操作系统的智能视频处理卡,是机器视觉开发上的首选 ...

  3. MySQL--18 报错总结

    报错1: 报错原因:MySQL的socket文件目录不存在. 解决方法: 创建MySQL的socket文件目录 mkdir /application/mysql-5.6.38/tmp 报错2: 报错原 ...

  4. Nginx配置参数详解参考示例

    user nobody; worker_processes 2; events{ worker_connections 1024; } http{ #设置默认类型为二进制流 default_type ...

  5. mysql中文乱码解决办法

    Windows 在C:\Program Files\MySQL\MySQL Server 5.5\bin目录下 MySQLInstanceConfig.exe执行 重新配置character_set_ ...

  6. $NOIP2018$ 爆踩全场记

    NOIP2018 Day-1 路还很长. 这里就是起点. 这是最简单的一步,但这是最关键的一步. 联赛就在眼前了,一切好像都已经准备好了,一切好像又都没准备好. 相信自己吧,\(mona\),这绝对不 ...

  7. ARC096E Everything on It 容斥原理

    题目传送门 https://atcoder.jp/contests/arc096/tasks/arc096_c 题解 考虑容斥,问题转化为求至少有 \(i\) 个数出现不高于 \(1\) 次. 那么我 ...

  8. BZOJ3331 BZOJ2013 压力

    考前挣扎 圆方树这么早就出现了嘛... 要求每个点必须被经过的次数 所以就是路径上的割点/端点++ 由于圆方树上所有非叶子圆点都是割点 所以就是树上差分就可以辣. 实现的时候出了一点小问题. 就是这里 ...

  9. 为什么集合类没有实现Cloneable和Serializable接口?

    为什么集合类没有实现Cloneable和Serializable接口? 克隆(cloning)或者是序列化(serialization)的语义和含义是跟具体的实现相关的.因此,应该由集合类的具体实现来 ...

  10. 服务器构建CentOS+Jenkins+Git+Maven之爬坑

    ssh端口变更后,git如何访问远端中央代码库 参考来源: http://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin http://blog.csdn ...