1 mk-worker

Like the other daemons, the worker is created via the defserverfn macro.

(defserverfn mk-worker [conf shared-mq-context storm-id assignment-id port worker-id]
  (log-message "Launching worker for " storm-id " on " assignment-id ":" port " with id " worker-id
               " and conf " conf)
  (if-not (local-mode? conf)
    (redirect-stdio-to-slf4j!))
  ;; because in local mode, its not a separate
  ;; process. supervisor will register it in this case
  (when (= :distributed (cluster-mode conf))
    (touch (worker-pid-path conf worker-id (process-pid))))
  (let [worker (worker-data conf shared-mq-context storm-id assignment-id port worker-id) ;; 1.1 build worker-data
        ;; 1.2 worker heartbeat
        heartbeat-fn #(do-heartbeat worker)
        ;; do this here so that the worker process dies if this fails
        ;; it's important that worker heartbeat to supervisor ASAP when launching so that the supervisor knows it's running (and can move on)
        _ (heartbeat-fn)

        ;; heartbeat immediately to nimbus so that it knows that the worker has been started
        _ (do-executor-heartbeats worker)

        executors (atom nil)
        ;; launch heartbeat threads immediately so that slow-loading tasks don't cause the worker to timeout
        ;; to the supervisor
        _ (schedule-recurring (:heartbeat-timer worker) 0 (conf WORKER-HEARTBEAT-FREQUENCY-SECS) heartbeat-fn)
        _ (schedule-recurring (:executor-heartbeat-timer worker) 0 (conf TASK-HEARTBEAT-FREQUENCY-SECS) #(do-executor-heartbeats worker :executors @executors))

        ;; 1.3 refresh the outbound connections
        refresh-connections (mk-refresh-connections worker)
        _ (refresh-connections nil)
        _ (refresh-storm-active worker nil)
        ;; 1.4 create the executors
        _ (reset! executors (dofor [e (:executors worker)] (executor/mk-executor worker e)))
        ;; 1.5 launch the receive thread, which keeps moving data from the server's listening port into each task's receive queue
        receive-thread-shutdown (launch-receive-thread worker) ;; the return value is the thread's close function
        ;; 1.6 define the event handler that drains the transfer queue, and create the transfer-thread
        transfer-tuples (mk-transfer-tuples-handler worker)
        transfer-thread (disruptor/consume-loop* (:transfer-queue worker) transfer-tuples)
        ;; 1.7 define the worker shutdown function and the worker's operational interface
        shutdown* (fn []
                    (log-message "Shutting down worker " storm-id " " assignment-id " " port)
                    (doseq [[_ socket] @(:cached-node+port->socket worker)]
                      ;; this will do best effort flushing since the linger period
                      ;; was set on creation
                      (.close socket))
                    (log-message "Shutting down receive thread")
                    (receive-thread-shutdown)
                    (log-message "Shut down receive thread")
                    (log-message "Terminating messaging context")
                    (log-message "Shutting down executors")
                    (doseq [executor @executors] (.shutdown executor))
                    (log-message "Shut down executors")

                    ;; this is fine because the only time this is shared is when it's a local context,
                    ;; in which case it's a noop
                    (.term ^IContext (:mq-context worker))
                    (log-message "Shutting down transfer thread")
                    (disruptor/halt-with-interrupt! (:transfer-queue worker))

                    (.interrupt transfer-thread)
                    (.join transfer-thread)
                    (log-message "Shut down transfer thread")
                    (cancel-timer (:heartbeat-timer worker))
                    (cancel-timer (:refresh-connections-timer worker))
                    (cancel-timer (:refresh-active-timer worker))
                    (cancel-timer (:executor-heartbeat-timer worker))
                    (cancel-timer (:user-timer worker))

                    (close-resources worker)

                    ;; TODO: here need to invoke the "shutdown" method of WorkerHook

                    (.remove-worker-heartbeat! (:storm-cluster-state worker) storm-id assignment-id port)
                    (log-message "Disconnecting from storm cluster state context")
                    (.disconnect (:storm-cluster-state worker))
                    (.close (:cluster-state worker))
                    (log-message "Shut down worker " storm-id " " assignment-id " " port))
        ret (reify
              Shutdownable
              (shutdown
                [this]
                (shutdown*))
              DaemonCommon
              (waiting? [this]
                (and
                  (timer-waiting? (:heartbeat-timer worker))
                  (timer-waiting? (:refresh-connections-timer worker))
                  (timer-waiting? (:refresh-active-timer worker))
                  (timer-waiting? (:executor-heartbeat-timer worker))
                  (timer-waiting? (:user-timer worker)))))]

    (schedule-recurring (:refresh-connections-timer worker) 0 (conf TASK-REFRESH-POLL-SECS) refresh-connections)
    (schedule-recurring (:refresh-active-timer worker) 0 (conf TASK-REFRESH-POLL-SECS) (partial refresh-storm-active worker))

    (log-message "Worker has topology config " (:storm-conf worker))
    (log-message "Worker " worker-id " for storm " storm-id " on " assignment-id ":" port " has finished loading")
    ret))

 

1.1 Building worker-data

(defn worker-data [conf mq-context storm-id assignment-id port worker-id]
  (let [cluster-state (cluster/mk-distributed-cluster-state conf)
        storm-cluster-state (cluster/mk-storm-cluster-state cluster-state)
        storm-conf (read-supervisor-storm-conf conf storm-id)
        ;; from the assignment, find the executors assigned to this worker, plus a SYSTEM_EXECUTOR
        executors (set (read-worker-executors storm-conf storm-cluster-state storm-id assignment-id port))
        ;; create the disruptor-based buffer queues the worker uses to receive and send messages:
        ;; the disruptor-based transfer-queue
        transfer-queue (disruptor/disruptor-queue (storm-conf TOPOLOGY-TRANSFER-BUFFER-SIZE)
                         :wait-strategy (storm-conf TOPOLOGY-DISRUPTOR-WAIT-STRATEGY))
        ;; create a receive-queue (a disruptor-queue) for each executor, returning an {executor, queue} map
        executor-receive-queue-map (mk-receive-queue-map storm-conf executors)
        ;; an executor may own several tasks, and tasks of the same executor share one queue,
        ;; so convert {e, queue} into {t, queue}
        receive-queue-map (->> executor-receive-queue-map
                               (mapcat (fn [[e queue]] (for [t (executor-id->tasks e)] [t queue])))
                               (into {}))
        ;; read stormcode.ser (the serialized topology object) stored on the supervisor machine
        topology (read-supervisor-topology conf storm-id)]
    ;; recursive-map evaluates every value below and builds a new map from the keys and return values
    (recursive-map
      :conf conf
      :mq-context (if mq-context
                    mq-context
                    (TransportFactory/makeContext storm-conf)) ;; an already-prepared object implementing IContext
      :storm-id storm-id
      :assignment-id assignment-id
      :port port
      :worker-id worker-id
      :cluster-state cluster-state
      :storm-cluster-state storm-cluster-state
      :storm-active-atom (atom false)
      :executors executors
      :task-ids (->> receive-queue-map keys (map int) sort)
      :storm-conf storm-conf
      :topology topology
      :system-topology (system-topology! storm-conf topology)
      :heartbeat-timer (mk-halting-timer)
      :refresh-connections-timer (mk-halting-timer)
      :refresh-active-timer (mk-halting-timer)
      :executor-heartbeat-timer (mk-halting-timer)
      :user-timer (mk-halting-timer)
      :task->component (HashMap. (storm-task-info topology storm-conf)) ;; for optimized access when used in tasks later on
      :component->stream->fields (component->stream->fields (:system-topology <>)) ;; read each stream's Fields from ComponentCommon
      :component->sorted-tasks (->> (:task->component <>) reverse-map (map-val sort))
      :endpoint-socket-lock (mk-rw-lock)
      :cached-node+port->socket (atom {})
      :cached-task->node+port (atom {})
      :transfer-queue transfer-queue
      :executor-receive-queue-map executor-receive-queue-map
      :short-executor-receive-queue-map (map-key first executor-receive-queue-map) ;; simplify the executor id from [first-task, last-task] to just first-task
      :task->short-executor (->> executors ;; map each task to its simplified short-executor
                                 (mapcat (fn [e] (for [t (executor-id->tasks e)] [t (first e)])))
                                 (into {})
                                 (HashMap.))
      :suicide-fn (mk-suicide-fn conf)
      :uptime (uptime-computer)
      :default-shared-resources (mk-default-resources <>)
      :user-shared-resources (mk-user-resources <>)
      :transfer-local-fn (mk-transfer-local-fn <>) ;; delivers received messages into each task's receive queue
      :transfer-fn (mk-transfer-fn <>) ;; puts processed messages onto the outgoing transfer-queue
      )))
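The recursive-map call above evaluates its key/value pairs in order, and a later value can read the entries computed so far through <>. A minimal Python sketch of that evaluation order (recursive_map here is a hypothetical stand-in for the Clojure macro, for illustration only):

```python
# Sketch of recursive-map semantics: keys are evaluated in order, and each
# value function may read the entries computed so far (Clojure's <>).
def recursive_map(*pairs):
    acc = {}
    for key, value in pairs:
        # A callable value receives the partial map, mimicking <> references.
        acc[key] = value(acc) if callable(value) else value
    return acc

worker = recursive_map(
    ("task->component", {1: "spout", 2: "bolt", 3: "bolt"}),
    # Later entries can use earlier ones, like :component->sorted-tasks does.
    ("component->sorted-tasks", lambda m: {
        c: sorted(t for t, comp in m["task->component"].items() if comp == c)
        for c in set(m["task->component"].values())
    }),
)
```

This is why :component->sorted-tasks can be derived from :task->component inside the same map literal.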

1.2 Worker Heartbeat

1.2.1. Establish the worker's local heartbeat

Call do-heartbeat, which writes the worker's heartbeat into the local LocalState store: (.put state LS-WORKER-HEARTBEAT hb false)

1.2.2. Sync the worker heartbeat to ZooKeeper, so that Nimbus knows immediately that the worker has started

Call do-executor-heartbeats, which writes the worker heartbeat into ZooKeeper's workerbeats directory via worker-heartbeat!

1.2.3. Schedule timers to periodically refresh the local and ZooKeeper heartbeats

(schedule-recurring (:heartbeat-timer worker) 0 (conf WORKER-HEARTBEAT-FREQUENCY-SECS) heartbeat-fn)

(schedule-recurring (:executor-heartbeat-timer worker) 0 (conf TASK-HEARTBEAT-FREQUENCY-SECS) #(do-executor-heartbeats worker :executors @executors))
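Both calls boil down to "fire once immediately (delay 0), then every N seconds". A sketch of that recurrence with a simulated clock — SimTimer is a hypothetical helper, not Storm's timer, which runs a real background thread over a priority queue:

```python
# Sketch of schedule-recurring semantics: initial delay, then a fixed period.
import heapq

class SimTimer:
    def __init__(self):
        self.now = 0
        self.queue = []          # entries: (fire-time, seq, period, fn)
        self.seq = 0

    def schedule_recurring(self, delay, period, fn):
        heapq.heappush(self.queue, (self.now + delay, self.seq, period, fn))
        self.seq += 1

    def advance(self, seconds):
        """Advance the simulated clock, firing every due recurring task."""
        end = self.now + seconds
        while self.queue and self.queue[0][0] <= end:
            fire_at, _, period, fn = heapq.heappop(self.queue)
            self.now = fire_at
            fn()
            # re-enqueue for the next period, like a recurring timer does
            heapq.heappush(self.queue, (fire_at + period, self.seq, period, fn))
            self.seq += 1
        self.now = end

beats = []
timer = SimTimer()
# Like (schedule-recurring (:heartbeat-timer worker) 0 freq heartbeat-fn):
timer.schedule_recurring(0, 10, lambda: beats.append(timer.now))
timer.advance(30)   # fires at t = 0, 10, 20, 30
```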

 

1.3 Maintaining and refreshing the worker's outbound connections

mk-refresh-connections defines and returns an anonymous function, but that anonymous function is given the name this — a pattern we have seen before — because the function needs to refer to itself inside its own body.

refresh-connections must also run repeatedly: every time the assignment-info changes, a refresh is required.

So timer.schedule-recurring is unsuitable here, because the trigger is not time-based.

Instead, ZooKeeper's callback mechanism is used.

The supervisor's mk-synchronize-supervisor and the worker's mk-refresh-connections use a similar mechanism:

a. Both need to be triggered whenever the assignment changes, so both rely on a ZooKeeper watcher.

b. Both register themselves as the callback when fetching the assignment, using (fn this []).

c. Because the work is relatively expensive, both run the callback in the background — but mk-synchronize-supervisor uses an event manager, while mk-refresh-connections uses a timer.

The two differ: the timer is backed by a priority queue, so it is more flexible and supports delays, whereas the event manager is a plain FIFO queue.

Also, the event manager wraps its interface with reify and returns a record, which is somewhat more elegant than the timer implementation.

First, if no callback is given, (schedule (:refresh-connections-timer worker) 0 this) is used as the callback.

Then (.assignment-info storm-cluster-state storm-id callback) registers the callback while fetching the assignment info — so when the assignment changes, an "execute this immediately" event is pushed onto refresh-connections-timer.

This guarantees that every time the assignment changes, the timer performs refresh-connections in the background.
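The self-re-registering watch can be sketched as follows; FakeClusterState and the inline callback are hypothetical stand-ins, and the hop through the timer via schedule is elided:

```python
# Sketch of the (fn this ...) callback pattern: fetching the assignment
# registers a one-shot watch; the watch re-runs the refresh, which
# re-registers the watch -- so every assignment change triggers a refresh.
class FakeClusterState:
    """Stand-in for storm-cluster-state with a one-shot assignment watch."""
    def __init__(self):
        self.assignment = {"executor->node+port": {}}
        self.watch = None

    def assignment_info(self, callback):
        self.watch = callback            # registered anew on every read
        return self.assignment

    def change_assignment(self, new):
        self.assignment = new
        watch, self.watch = self.watch, None
        if watch:
            watch()                      # ZooKeeper fires the watch once

refreshes = []

def mk_refresh(state):
    def this(callback=None):
        if callback is None:
            # like (schedule timer 0 this), minus the background timer hop
            callback = lambda: this()
        assignment = state.assignment_info(callback)
        refreshes.append(assignment)
    return this

state = FakeClusterState()
refresh = mk_refresh(state)
refresh()                                # initial refresh registers the watch
state.change_assignment({"executor->node+port": {(1, 1): ["node1", 6700]}})
```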

(defn mk-refresh-connections [worker]
  (let [outbound-tasks (worker-outbound-tasks worker) ;; a. find the component tasks this worker sends data to (to-tasks)
        conf (:conf worker)
        storm-cluster-state (:storm-cluster-state worker)
        storm-id (:storm-id worker)]
    (fn this
      ([]
        (this (fn [& ignored] (schedule (:refresh-connections-timer worker) 0 this)))) ;; schedule adds an event to the timer
      ([callback]
        (let [assignment (.assignment-info storm-cluster-state storm-id callback)
              my-assignment (-> assignment ;; b. get node+port for the to-tasks
                                :executor->node+port
                                to-task->node+port
                                (select-keys outbound-tasks)
                                (#(map-val endpoint->string %)))
              ;; we dont need a connection for the local tasks anymore
              needed-assignment (->> my-assignment ;; c. exclude local tasks
                                     (filter-key (complement (-> worker :task-ids set))))
              needed-connections (-> needed-assignment vals set)
              needed-tasks (-> needed-assignment keys)

              current-connections (set (keys @(:cached-node+port->socket worker)))
              new-connections (set/difference needed-connections current-connections) ;; d. connections to add and to remove
              remove-connections (set/difference current-connections needed-connections)]
          (swap! (:cached-node+port->socket worker) ;; e. create the new connections
                 #(HashMap. (merge (into {} %1) %2))
                 (into {}
                       (dofor [endpoint-str new-connections
                               :let [[node port] (string->endpoint endpoint-str)]]
                         [endpoint-str
                          (.connect
                            ^IContext (:mq-context worker)
                            storm-id
                            ((:node->host assignment) node)
                            port)])))
          (write-locked (:endpoint-socket-lock worker)
            (reset! (:cached-task->node+port worker)
                    (HashMap. my-assignment)))
          (doseq [endpoint remove-connections]
            (.close (get @(:cached-node+port->socket worker) endpoint)))
          (apply swap!
                 (:cached-node+port->socket worker)
                 #(HashMap. (apply dissoc (into {} %1) %&))
                 remove-connections)

          (let [missing-tasks (->> needed-tasks
                                   (filter (complement my-assignment)))]
            (when-not (empty? missing-tasks)
              (log-warn "Missing assignment for following tasks: " (pr-str missing-tasks)))))))))

The steps of refresh-connections:

a. Find the tasks in this worker that send data to other tasks: outbound-tasks.

    worker-outbound-tasks finds the components the current worker's tasks belong to, then finds those components' target components,

    and finally returns all tasks of those target components.

b. Find tasks->node+port for the outbound-tasks: my-assignment.

c. If an outbound task lives in the same worker process, no connection is needed, so such tasks are excluded, leaving needed-assignment

   (:value –> needed-connections, :key –> needed-tasks).

d. Compare against the set of connections already created and cached, yielding new-connections and remove-connections.

e. Call IContext.connect — (.connect ^IContext (:mq-context worker) storm-id ((:node->host assignment) node) port) — to create the new connections, and merge them into :cached-node+port->socket.

f. Update :cached-task->node+port with my-assignment (combined with :cached-node+port->socket, this yields task->socket).

g. Close all remove-connections and delete them from :cached-node+port->socket.
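Steps c and d are plain set arithmetic; a sketch with hypothetical task ids and endpoints:

```python
# Sketch of steps c-d: diff the needed connections against the cached ones
# using set difference, just as refresh-connections does.
local_task_ids = {1, 2}
my_assignment = {3: "node1:6700", 4: "node2:6701", 1: "node0:6702"}

# c. drop tasks local to this worker -- no connection is needed for them
needed_assignment = {t: ep for t, ep in my_assignment.items()
                     if t not in local_task_ids}
needed_connections = set(needed_assignment.values())

# d. compare with the currently cached sockets
cached = {"node2:6701": "<socket>", "node3:6702": "<socket>"}
current_connections = set(cached)
new_connections = needed_connections - current_connections
remove_connections = current_connections - needed_connections
```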

 

1.4 Creating the worker's executors

(executor/mk-executor worker e) — see Storm-源码分析-Topology Submit-Executor.

 

1.5 launch-receive-thread

Launch the receive thread, which keeps moving data from the server's listening port into each task's receive queue.

(defn launch-receive-thread [worker]
  (log-message "Launching receive-thread for " (:assignment-id worker) ":" (:port worker))
  (msg-loader/launch-receive-thread!
    (:mq-context worker)
    (:storm-id worker)
    (:port worker)
    (:transfer-local-fn worker)
    (-> worker :storm-conf (get TOPOLOGY-RECEIVER-BUFFER-SIZE))
    :kill-fn (fn [t] (halt-process! 11))))

1.5.1 mq-context

TransportFactory/makeContext is called to create the context object; depending on configuration, it creates either a local or a ZMQ context.

1.5.2 transfer-local-fn

Returns a fn that routes the tuples of a tuple-batch to the receive queue of the executor that owns each task.

(defn mk-transfer-local-fn [worker]
  (let [short-executor-receive-queue-map (:short-executor-receive-queue-map worker)
        task->short-executor (:task->short-executor worker)
        task-getter (comp #(get task->short-executor %) fast-first)]
    (fn [tuple-batch]
      (let [grouped (fast-group-by task-getter tuple-batch)] ;; group the tuple-batch by executor
        (fast-map-iter [[short-executor pairs] grouped] ;; run the following for every entry in grouped
          (let [q (short-executor-receive-queue-map short-executor)]
            (if q
              (disruptor/publish q pairs) ;; publish the tuple pairs to that executor's receive queue
              (log-warn "Received invalid messages for unknown tasks. Dropping... "))))))))
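The routing logic of the fn above — group (task, tuple) pairs by short-executor, then publish each group to that executor's queue — can be sketched in Python (ids and queues are made up; plain lists stand in for disruptor queues):

```python
# Sketch of mk-transfer-local-fn's routing: group each (task, tuple) pair by
# the task's short-executor, then publish each group to that executor's queue.
from collections import defaultdict

task_to_short_executor = {1: 1, 2: 1, 3: 3}      # tasks 1 and 2 share executor 1
queues = {1: [], 3: []}                           # per-executor receive queues

def transfer_local(tuple_batch):
    grouped = defaultdict(list)                   # fast-group-by, keyed by executor
    for task, tup in tuple_batch:
        grouped[task_to_short_executor.get(task)].append((task, tup))
    for short_executor, pairs in grouped.items():
        q = queues.get(short_executor)
        if q is not None:
            q.extend(pairs)                       # stands in for disruptor/publish
        else:
            print("Received invalid messages for unknown tasks. Dropping...")

transfer_local([(1, "a"), (3, "b"), (2, "c")])
```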

 

(defn fast-group-by [afn alist]
  (let [ret (HashMap.)]
    (fast-list-iter [e alist] ;; macro: e is each element of the list, and the body runs for every element
      (let [key (afn e) ;; use (afn e) as the key
            ^List curr (get-with-default ret key (ArrayList.))] ;; the value is an ArrayList, created on first use
        (.add curr e))) ;; add e to the ArrayList for its key
    ret))

Its purpose is to group the elements of alist using (afn elem) as the key, returning a HashMap so that all elements for a given key can be retrieved.

 

(defmacro fast-map-iter [[bind amap] & body]
  `(let [iter# (map-iter ~amap)] ;; convert the map to its entrySet and get an iterator
     (while (iter-has-next? iter#)
       (let [entry# (iter-next iter#)
             ~bind (convert-entry entry#)]
         ~@body))))

For the example above:

bind = [short-executor pairs]

amap = grouped

Each entry of grouped maps a short-executor to its list of pairs.

It is a stripped-down map-iteration macro, and somewhat hard to follow at first.

1.5.3 msg-loader/launch-receive-thread!

a. Use async-loop to create a thread that runs the loop asynchronously, and start the thread.

   The main logic: bind to the socket port and keep receiving messages.

   Whenever a batch has been received, it is handed to the receive queues via transfer-local-fn.

b. Since async-loop already starts the thread, the thread is running by the time the let completes.

   The return value of this function is interesting: it is actually the thread's close function, and because it closes over the thread, the thread stays alive until it is closed.
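The batching receive loop can be sketched like this; FakeSocket and receive_once are hypothetical stand-ins for the IConnection socket and for one pass of the loop body (a real first recv blocks rather than returning None):

```python
# Sketch of the receive loop: one blocking recv to start a batch, then
# non-blocking recvs until the batch is full or the socket runs dry, at
# which point the whole batch is handed to transfer-local-fn.
class FakeSocket:
    def __init__(self, packets):
        self.packets = list(packets)

    def recv(self, flags):
        # flags=0 means blocking, flags=1 non-blocking; here both just pop,
        # returning None when nothing is pending (the non-blocking case).
        return self.packets.pop(0) if self.packets else None

def receive_once(socket, transfer_local_fn, max_buffer_size):
    """One pass of the loop body; returns False on shutdown (task == -1)."""
    batched = []
    packet = socket.recv(0)                      # blocking receive
    while True:
        task, message = packet if packet else (None, None)
        if task == -1:                           # shutdown notice
            return False
        if packet:
            batched.append((task, message))
        if packet and len(batched) < max_buffer_size:
            packet = socket.recv(1)              # non-blocking receive
        else:
            transfer_local_fn(batched)           # flush the batch
            return True

received = []
sock = FakeSocket([(1, b"x"), (2, b"y")])
ok = receive_once(sock, received.extend, max_buffer_size=8)
```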

(defnk launch-receive-thread!
  [context storm-id port transfer-local-fn max-buffer-size
   :daemon true
   :kill-fn (fn [t] (System/exit 1))
   :priority Thread/NORM_PRIORITY]
  (let [max-buffer-size (int max-buffer-size)
        vthread (async-loop
                  (fn []
                    (let [socket (.bind ^IContext context storm-id port)]
                      (fn []
                        (let [batched (ArrayList.)
                              init (.recv ^IConnection socket 0)] ;; blocking receive
                          (loop [packet init]
                            (let [task (if packet (.task ^TaskMessage packet))
                                  message (if packet (.message ^TaskMessage packet))]
                              (if (= task -1) ;; received the shutdown command
                                (do (log-message "Receiving-thread:[" storm-id ", " port "] received shutdown notice")
                                    (.close socket)
                                    nil)
                                (do
                                  (when packet (.add batched [task message]))
                                  (if (and packet (< (.size batched) max-buffer-size))
                                    (recur (.recv ^IConnection socket 1)) ;; non-blocking receive; the loop ends when no data is pending
                                    (do (transfer-local-fn batched) ;; put the batched data into each task's receive queue
                                        0))))))))))
                  :factory? true
                  :daemon daemon
                  :kill-fn kill-fn
                  :priority priority)]
    (fn [] ;; the thread's close function
      (let [kill-socket (.connect ^IContext context storm-id "localhost" port)] ;; create a local client socket to send the kill command
        (log-message "Shutting down receiving-thread: [" storm-id ", " port "]")
        (.send ^IConnection kill-socket ;; send the kill command, task id -1
               -1
               (byte-array []))
        (log-message "Waiting for receiving-thread:[" storm-id ", " port "] to die")
        (.join vthread) ;; wait for the thread to finish
        (.close ^IConnection kill-socket)
        (log-message "Shutdown receiving-thread: [" storm-id ", " port "]")))))

1.6 mk-transfer-tuples-handler and the transfer thread

Generate the disruptor event handler.

It keeps adding packets to the drainer; when a batch ends, it sends each message in the drainer over the connection of its target task.

(defn mk-transfer-tuples-handler [worker]
  (let [^DisruptorQueue transfer-queue (:transfer-queue worker)
        drainer (ArrayList.)
        node+port->socket (:cached-node+port->socket worker)
        task->node+port (:cached-task->node+port worker)
        endpoint-socket-lock (:endpoint-socket-lock worker)]
    (disruptor/clojure-handler
      (fn [packets _ batch-end?]
        (.addAll drainer packets)
        (when batch-end?
          (read-locked endpoint-socket-lock
            (let [node+port->socket @node+port->socket
                  task->node+port @task->node+port]
              ;; consider doing some automatic batching here (would need to not be serialized at this point to remove per-tuple overhead)
              ;; try using multipart messages ... first sort the tuples by the target node (without changing the local ordering)

              (fast-list-iter [[task ser-tuple] drainer]
                ;; TODO: consider write a batch of tuples here to every target worker
                ;; group by node+port, do multipart send
                (let [node-port (get task->node+port task)]
                  (when node-port
                    (.send ^IConnection (get node+port->socket node-port) task ser-tuple))))))
          (.clear drainer))))))
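The handler's drain-on-batch-end behavior can be sketched as follows (endpoints are made up and the read lock is elided; plain callables stand in for the sockets):

```python
# Sketch of the transfer-tuples handler: accumulate packets into a drainer
# until batch-end, then route every (task, tuple) through task->node+port
# and node+port->socket, and clear the drainer.
drainer = []
task_to_node_port = {1: "node1:6700", 2: "node2:6701"}
sent = []                      # records what each "socket" would send
node_port_to_socket = {
    "node1:6700": lambda task, tup: sent.append(("node1:6700", task, tup)),
    "node2:6701": lambda task, tup: sent.append(("node2:6701", task, tup)),
}

def handle(packets, batch_end):
    drainer.extend(packets)
    if batch_end:
        for task, ser_tuple in drainer:
            node_port = task_to_node_port.get(task)
            if node_port:                      # tuples for unknown tasks are skipped
                node_port_to_socket[node_port](task, ser_tuple)
        drainer.clear()

handle([(1, b"t1")], batch_end=False)          # buffered, nothing sent yet
handle([(2, b"t2")], batch_end=True)           # batch ends: both are sent
```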

 

Summary

In summary, the worker does the following:

1. Adjusts or creates the send connections as the assignment changes

2. Creates the executors' input and output queues

3. Creates the worker's receive and transfer threads

4. Creates the executors according to the assignment

Inter-thread communication uses the disruptor.

Inter-process communication uses ZMQ.
