Abstract: This article assumes you already have a basic understanding of how kube-proxy works. Based on the Kubernetes v1.5 code, it analyzes the layout of the kube-proxy source tree and then walks through the complete flow of the iptables proxy mode, together with a diagram of the internal module logic. I hope it helps you understand kube-proxy in depth.

Introduction to kube-proxy

Please refer to my other post: kube-proxy工作原理 (How kube-proxy Works).

Source tree layout

  cmd/kube-proxy               // entry point for creating and starting kube-proxy
  .
  ├── app
  │   ├── conntrack.go         // interface definitions for the Linux kernel nf_conntrack sysctls; see https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt
  │   ├── options
  │   │   └── options.go       // kube-proxy parameter definitions (ProxyServerConfig) and related methods
  │   ├── server.go            // ProxyServer struct definition, its creation (NewProxyServerDefault) and its Run method
  │   └── server_test.go
  └── proxy.go                 // kube-proxy's main function

  pkg/proxy
  .
  ├── OWNERS
  ├── config
  │   ├── api.go               // sets up the Reflectors and cache.Stores that feed Services and Endpoints to the proxy
  │   ├── api_test.go
  │   ├── config.go            // defines the ServiceUpdate/EndpointsUpdate structs and the ServiceConfigHandler/EndpointsConfigHandler interfaces that handle Service and Endpoints updates
  │   ├── config_test.go
  │   └── doc.go
  ├── doc.go
  ├── healthcheck              // health checks for service listeners and endpoints (add/delete requests)
  │   ├── api.go
  │   ├── doc.go
  │   ├── healthcheck.go
  │   ├── healthcheck_test.go
  │   ├── http.go
  │   ├── listener.go
  │   └── worker.go
  ├── iptables                 // implementation of the iptables proxy mode
  │   ├── proxier.go
  │   └── proxier_test.go
  ├── types.go
  ├── userspace                // implementation of the userspace proxy mode
  │   ├── loadbalancer.go
  │   ├── port_allocator.go
  │   ├── port_allocator_test.go
  │   ├── proxier.go
  │   ├── proxier_test.go
  │   ├── proxysocket.go
  │   ├── rlimit.go
  │   ├── rlimit_windows.go
  │   ├── roundrobin.go
  │   ├── roundrobin_test.go
  │   └── udp_server.go
  └── winuserspace             // implementation of the userspace proxy mode on Windows
      ├── loadbalancer.go
      ├── port_allocator.go
      ├── port_allocator_test.go
      ├── proxier.go
      ├── proxier_test.go
      ├── proxysocket.go
      ├── roundrobin.go
      ├── roundrobin_test.go
      └── udp_server.go

Internal module logic diagram

Source code analysis

main

kube-proxy's main entry point is cmd/kube-proxy/proxy.go:39.

  func main() {
    // Create kube-proxy's default config object
    config := options.NewProxyConfig()
    // Override the defaults with kube-proxy's command-line flags
    config.AddFlags(pflag.CommandLine)

    flag.InitFlags()
    logs.InitLogs()
    defer logs.FlushLogs()
    verflag.PrintAndExitIfRequested()

    // Create the ProxyServer from the config
    s, err := app.NewProxyServerDefault(config)
    if err != nil {
      fmt.Fprintf(os.Stderr, "%v\n", err)
      os.Exit(1)
    }

    // Run starts the actual work of kube-proxy
    if err = s.Run(); err != nil {
      fmt.Fprintf(os.Stderr, "%v\n", err)
      os.Exit(1)
    }
  }

In main, we focus on app.NewProxyServerDefault(config), which creates the ProxyServer, and on the Run method.

Creating the ProxyServer

NewProxyServerDefault creates a new ProxyServer object from the given config. The code is fairly long and the logic relatively involved, so below I only highlight the important parts.

  // cmd/kube-proxy/app/server.go:131
  func NewProxyServerDefault(config *options.ProxyServerConfig) (*ProxyServer, error) {
    ...
    // Create a iptables utils.
    execer := exec.New()
    if runtime.GOOS == "windows" {
      netshInterface = utilnetsh.New(execer)
    } else {
      dbus = utildbus.New()
      iptInterface = utiliptables.New(execer, dbus, protocol)
    }
    ...
    // Set OOM_SCORE_ADJ
    var oomAdjuster *oom.OOMAdjuster
    if config.OOMScoreAdj != nil {
      oomAdjuster = oom.NewOOMAdjuster()
      if err := oomAdjuster.ApplyOOMScoreAdj(0, int(*config.OOMScoreAdj)); err != nil {
        glog.V(2).Info(err)
      }
    }
    ...
    // Create a Kube Client
    ...
    // Create the event broadcaster and event recorder
    hostname := nodeutil.GetHostname(config.HostnameOverride)
    eventBroadcaster := record.NewBroadcaster()
    recorder := eventBroadcaster.NewRecorder(v1.EventSource{Component: "kube-proxy", Host: hostname})

    // Declare proxier and endpointsHandler, which handle service and endpoints update events respectively.
    var proxier proxy.ProxyProvider
    var endpointsHandler proxyconfig.EndpointsConfigHandler

    // Determine the proxy mode from the config
    proxyMode := getProxyMode(string(config.Mode), client.Core().Nodes(), hostname, iptInterface, iptables.LinuxKernelCompatTester{})

    // Case 1: proxy mode is iptables
    if proxyMode == proxyModeIPTables {
      glog.V(0).Info("Using iptables Proxier.")
      if config.IPTablesMasqueradeBit == nil {
        // IPTablesMasqueradeBit must be specified or defaulted.
        return nil, fmt.Errorf("Unable to read IPTablesMasqueradeBit from config")
      }
      // Call iptables.NewProxier (pkg/proxy/iptables/proxier.go:222) to create the proxier and assign it to both
      // proxier and endpointsHandler defined above, i.e. this single proxier handles both service and endpoints events.
      proxierIPTables, err := iptables.NewProxier(iptInterface, utilsysctl.New(), execer, config.IPTablesSyncPeriod.Duration, config.IPTablesMinSyncPeriod.Duration, config.MasqueradeAll, int(*config.IPTablesMasqueradeBit), config.ClusterCIDR, hostname, getNodeIP(client, hostname))
      if err != nil {
        glog.Fatalf("Unable to create proxier: %v", err)
      }
      proxier = proxierIPTables
      endpointsHandler = proxierIPTables
      // No turning back. Remove artifacts that might still exist from the userspace Proxier.
      glog.V(0).Info("Tearing down userspace rules.")
      userspace.CleanupLeftovers(iptInterface)
    } else {
      // Case 2: proxy mode is userspace
      glog.V(0).Info("Using userspace Proxier.")
      // This is a proxy.LoadBalancer which NewProxier needs but has methods we don't need for
      // our config.EndpointsConfigHandler.
      loadBalancer := userspace.NewLoadBalancerRR()
      // set EndpointsConfigHandler to our loadBalancer
      endpointsHandler = loadBalancer

      var proxierUserspace proxy.ProxyProvider
      if runtime.GOOS == "windows" {
        // On Windows, call winuserspace.NewProxier (pkg/proxy/winuserspace/proxier.go:146) to create the proxier.
        proxierUserspace, err = winuserspace.NewProxier(
          loadBalancer,
          net.ParseIP(config.BindAddress),
          netshInterface,
          *utilnet.ParsePortRangeOrDie(config.PortRange),
          // TODO @pires replace below with default values, if applicable
          config.IPTablesSyncPeriod.Duration,
          config.UDPIdleTimeout.Duration,
        )
      } else {
        // On Linux, call userspace.NewProxier (pkg/proxy/userspace/proxier.go:143) to create the proxier.
        proxierUserspace, err = userspace.NewProxier(
          loadBalancer,
          net.ParseIP(config.BindAddress),
          iptInterface,
          *utilnet.ParsePortRangeOrDie(config.PortRange),
          config.IPTablesSyncPeriod.Duration,
          config.IPTablesMinSyncPeriod.Duration,
          config.UDPIdleTimeout.Duration,
        )
      }
      if err != nil {
        glog.Fatalf("Unable to create proxier: %v", err)
      }
      proxier = proxierUserspace
      // Remove artifacts from the pure-iptables Proxier, if not on Windows.
      if runtime.GOOS != "windows" {
        glog.V(0).Info("Tearing down pure-iptables proxy rules.")
        iptables.CleanupLeftovers(iptInterface)
      }
    }

    // Add iptables reload function, if not on Windows.
    if runtime.GOOS != "windows" {
      iptInterface.AddReloadFunc(proxier.Sync)
    }

    // Create configs (i.e. Watches for Services and Endpoints)
    // serviceConfig watches for service updates
    serviceConfig := proxyconfig.NewServiceConfig()
    // Register the proxier with serviceConfig, i.e. add the listener that handles service updates.
    serviceConfig.RegisterHandler(proxier)

    // endpointsConfig watches for endpoints updates
    endpointsConfig := proxyconfig.NewEndpointsConfig()
    // Register endpointsHandler with endpointsConfig, i.e. add the listener that handles endpoints updates.
    endpointsConfig.RegisterHandler(endpointsHandler)

    // NewSourceAPI creates config source that watches for changes to the services and endpoints.
    // It list/watches Services and Endpoints from the apiserver and periodically keeps the serviceStore and endpointsStore up to date.
    proxyconfig.NewSourceAPI(
      client.Core().RESTClient(),
      config.ConfigSyncPeriod,
      serviceConfig.Channel("api"),   // service update channel
      endpointsConfig.Channel("api"), // endpoints update channel
    )
    ...
    // Construct the ProxyServer object from the pieces created above.
    return NewProxyServer(client, config, iptInterface, proxier, eventBroadcaster, recorder, conntracker, proxyMode)
  }

The core logic of NewProxyServerDefault is annotated in the comments above. A few pieces are worth drilling into further: proxyconfig.NewServiceConfig, proxyconfig.NewEndpointsConfig, serviceConfig.RegisterHandler, endpointsConfig.RegisterHandler, and proxyconfig.NewSourceAPI.

proxyconfig.NewServiceConfig

Let's walk through the ServiceConfig code; the EndpointsConfig code is analogous.

  // pkg/proxy/config/config.go:192
  func NewServiceConfig() *ServiceConfig {
    // Create the updates channel
    updates := make(chan struct{}, 1)
    // Build the serviceStore object
    store := &serviceStore{updates: updates, services: make(map[string]map[types.NamespacedName]api.Service)}
    mux := config.NewMux(store)
    // Create the Broadcaster; serviceConfig.RegisterHandler will later register listeners on it.
    bcaster := config.NewBroadcaster()
    // Start a goroutine that immediately begins watching the updates channel
    go watchForUpdates(bcaster, store, updates)
    return &ServiceConfig{mux, bcaster, store}
  }

Let's follow watchForUpdates.

  // pkg/proxy/config/config.go:292
  func watchForUpdates(bcaster *config.Broadcaster, accessor config.Accessor, updates <-chan struct{}) {
    for true {
      <-updates
      bcaster.Notify(accessor.MergedState())
    }
  }

watchForUpdates simply keeps watching the updates channel; whenever a value arrives, the Broadcaster immediately notifies the registered listeners.

The Notify code is shown below. As you can see, it delivers the data to all listeners by calling each listener's OnUpdate method.

  // pkg/util/config/config.go:133
  // Notify notifies all listeners.
  func (b *Broadcaster) Notify(instance interface{}) {
    b.listenerLock.RLock()
    listeners := b.listeners
    b.listenerLock.RUnlock()
    for _, listener := range listeners {
      listener.OnUpdate(instance)
    }
  }

  func (f ListenerFunc) OnUpdate(instance interface{}) {
    f(instance)
  }
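
To make the channel → Broadcaster → listener flow concrete, here is a minimal, self-contained sketch of the same pattern. Note that this is a toy version written only for illustration: the ListenerFunc and Broadcaster types below are simplified stand-ins for kube-proxy's config.ListenerFunc and config.Broadcaster, not the real implementations.

  package main

  import (
    "fmt"
    "time"
  )

  // ListenerFunc adapts a plain function into a listener with an OnUpdate method,
  // mirroring the role of config.ListenerFunc.
  type ListenerFunc func(instance interface{})

  func (f ListenerFunc) OnUpdate(instance interface{}) { f(instance) }

  // Broadcaster fans one update out to every registered listener, like config.Broadcaster.
  type Broadcaster struct {
    listeners []ListenerFunc
  }

  func (b *Broadcaster) Add(l ListenerFunc) { b.listeners = append(b.listeners, l) }

  func (b *Broadcaster) Notify(instance interface{}) {
    for _, l := range b.listeners {
      l.OnUpdate(instance)
    }
  }

  func main() {
    updates := make(chan struct{}, 1)
    b := &Broadcaster{}

    // RegisterHandler equivalent: in kube-proxy this listener would call handler.OnServiceUpdate.
    b.Add(func(instance interface{}) {
      fmt.Println("OnUpdate called with:", instance)
    })

    // watchForUpdates equivalent: block on the channel, then notify with the merged state.
    go func() {
      for range updates {
        b.Notify([]string{"default/svc-a", "default/svc-b"}) // stand-in for accessor.MergedState()
      }
    }()

    updates <- struct{}{} // a change to the backing store "kicks" the channel
    time.Sleep(100 * time.Millisecond)
  }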

serviceConfig.RegisterHandler

As analyzed above, proxyconfig.NewServiceConfig creates the ServiceConfig and starts watching the updates channel; when a value is read from the channel, the Broadcaster notifies the registered listeners. serviceConfig.RegisterHandler is what registers those listeners on the Broadcaster. Its code is as follows.

  // pkg/proxy/config/config.go:205
  func (c *ServiceConfig) RegisterHandler(handler ServiceConfigHandler) {
    // Register a listener on the ServiceConfig's Broadcaster.
    c.bcaster.Add(config.ListenerFunc(func(instance interface{}) {
      glog.V(3).Infof("Calling handler.OnServiceUpdate()")
      handler.OnServiceUpdate(instance.([]api.Service))
    }))
  }

From the analysis of proxyconfig.NewServiceConfig above, we know that when a value is read from the updates channel, the registered ListenerFunc(instance) is eventually invoked. In this case, that means calling:

  glog.V(3).Infof("Calling handler.OnServiceUpdate()")
  handler.OnServiceUpdate(instance.([]api.Service))

That is, handler.OnServiceUpdate gets called. Each proxy mode's proxier provides its own implementation of OnServiceUpdate; taking the iptables mode as an example:

  // pkg/proxy/iptables/proxier.go:428
  func (proxier *Proxier) OnServiceUpdate(allServices []api.Service) {
    ...
    proxier.syncProxyRules()
    proxier.deleteServiceConnections(staleUDPServices.List())
  }

So the key logic ultimately funnels into proxier.syncProxyRules(), which the internal module interaction diagram above also shows. We will discuss proxier.syncProxyRules() in detail later; for now, just know that it synchronizes the services/endpoints cached in the proxy into iptables, generating the corresponding chains and NAT rules.
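
To give a feel for what syncProxyRules generates, below is a hand-written, illustrative sample of the NAT rules it restores for a single ClusterIP service with two endpoints. The service name, IPs and chain-name suffixes are made up for this example; on a real node the KUBE-SVC-*/KUBE-SEP-* suffixes are derived from a hash of the service and endpoint, and the masquerade mark depends on the configured masquerade bit.

  *nat
  :KUBE-SERVICES - [0:0]
  :KUBE-MARK-MASQ - [0:0]
  :KUBE-POSTROUTING - [0:0]
  :KUBE-SVC-EXAMPLE1 - [0:0]
  :KUBE-SEP-EXAMPLE1A - [0:0]
  :KUBE-SEP-EXAMPLE1B - [0:0]
  -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
  -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
  -A KUBE-SERVICES -m comment --comment "default/my-svc:http cluster IP" -m tcp -p tcp -d 10.0.0.10/32 --dport 80 -j KUBE-SVC-EXAMPLE1
  -A KUBE-SVC-EXAMPLE1 -m comment --comment default/my-svc:http -m statistic --mode random --probability 0.50000 -j KUBE-SEP-EXAMPLE1A
  -A KUBE-SVC-EXAMPLE1 -m comment --comment default/my-svc:http -j KUBE-SEP-EXAMPLE1B
  -A KUBE-SEP-EXAMPLE1A -m comment --comment default/my-svc:http -s 10.244.1.5/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-EXAMPLE1A -m comment --comment default/my-svc:http -m tcp -p tcp -j DNAT --to-destination 10.244.1.5:80
  -A KUBE-SEP-EXAMPLE1B -m comment --comment default/my-svc:http -s 10.244.2.7/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-EXAMPLE1B -m comment --comment default/my-svc:http -m tcp -p tcp -j DNAT --to-destination 10.244.2.7:80
  COMMIT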

proxyconfig.NewEndpointsConfig

The endpointsConfig logic is similar to serviceConfig's, so I only list the code here without walking through it again.

  // pkg/proxy/config/config.go:84
  func NewEndpointsConfig() *EndpointsConfig {
    // The updates channel is used to send interrupts to the Endpoints handler.
    // It's buffered because we never want to block for as long as there is a
    // pending interrupt, but don't want to drop them if the handler is doing
    // work.
    updates := make(chan struct{}, 1)
    store := &endpointsStore{updates: updates, endpoints: make(map[string]map[types.NamespacedName]api.Endpoints)}
    mux := config.NewMux(store)
    bcaster := config.NewBroadcaster()
    go watchForUpdates(bcaster, store, updates)
    return &EndpointsConfig{mux, bcaster, store}
  }

endpointsConfig.RegisterHandler

  // pkg/proxy/config/config.go:97
  func (c *EndpointsConfig) RegisterHandler(handler EndpointsConfigHandler) {
    c.bcaster.Add(config.ListenerFunc(func(instance interface{}) {
      glog.V(3).Infof("Calling handler.OnEndpointsUpdate()")
      handler.OnEndpointsUpdate(instance.([]api.Endpoints))
    }))
  }

proxyconfig.NewSourceAPI

proxyconfig.NewSourceAPI is crucial: it provides the data source for the service and endpoints update channels. It does so by periodically listing and watching all Services and Endpoints in kube-apiserver and sending them to the corresponding channels.

The default list period is 15 minutes and can be changed with --config-sync-period. The code is as follows:

  func NewSourceAPI(c cache.Getter, period time.Duration, servicesChan chan<- ServiceUpdate, endpointsChan chan<- EndpointsUpdate) {
    servicesLW := cache.NewListWatchFromClient(c, "services", api.NamespaceAll, fields.Everything())
    cache.NewReflector(servicesLW, &api.Service{}, NewServiceStore(nil, servicesChan), period).Run()

    endpointsLW := cache.NewListWatchFromClient(c, "endpoints", api.NamespaceAll, fields.Everything())
    cache.NewReflector(endpointsLW, &api.Endpoints{}, NewEndpointsStore(nil, endpointsChan), period).Run()
  }

  // NewServiceStore creates an undelta store that expands updates to the store into
  // ServiceUpdate events on the channel. If no store is passed, a default store will
  // be initialized. Allows reuse of a cache store across multiple components.
  func NewServiceStore(store cache.Store, ch chan<- ServiceUpdate) cache.Store {
    fn := func(objs []interface{}) {
      var services []api.Service
      for _, o := range objs {
        services = append(services, *(o.(*api.Service)))
      }
      ch <- ServiceUpdate{Op: SET, Services: services}
    }
    if store == nil {
      store = cache.NewStore(cache.MetaNamespaceKeyFunc)
    }
    return &cache.UndeltaStore{
      Store:    store,
      PushFunc: fn,
    }
  }

  // NewEndpointsStore creates an undelta store that expands updates to the store into
  // EndpointsUpdate events on the channel. If no store is passed, a default store will
  // be initialized. Allows reuse of a cache store across multiple components.
  func NewEndpointsStore(store cache.Store, ch chan<- EndpointsUpdate) cache.Store {
    fn := func(objs []interface{}) {
      var endpoints []api.Endpoints
      for _, o := range objs {
        endpoints = append(endpoints, *(o.(*api.Endpoints)))
      }
      ch <- EndpointsUpdate{Op: SET, Endpoints: endpoints}
    }
    if store == nil {
      store = cache.NewStore(cache.MetaNamespaceKeyFunc)
    }
    return &cache.UndeltaStore{
      Store:    store,
      PushFunc: fn,
    }
  }

The code is straightforward and needs little further explanation.
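
As a small aid, here is a simplified sketch of the consuming side, assuming a ServiceUpdate channel like the one above: every change pushes the full service list, which the store merges before giving the updates channel a non-blocking "kick" (the types and field names below are my own stand-ins, not the exact kube-proxy ones).

  package main

  import "fmt"

  // ServiceUpdate is a simplified stand-in for the real type; field types are illustrative only.
  type ServiceUpdate struct {
    Op       string
    Services []string
  }

  // serviceStoreSketch mimics what serviceStore.Merge does: replace the cached state
  // for a source, then give the updates channel a non-blocking kick.
  type serviceStoreSketch struct {
    services map[string][]string
    updates  chan struct{}
  }

  func (s *serviceStoreSketch) Merge(source string, change ServiceUpdate) {
    s.services[source] = change.Services // SET semantics: the full service list arrives every time
    select {
    case s.updates <- struct{}{}: // wake whoever blocks in watchForUpdates
    default: // an interrupt is already pending; don't block the sender
    }
  }

  func main() {
    ch := make(chan ServiceUpdate, 1)
    store := &serviceStoreSketch{services: map[string][]string{}, updates: make(chan struct{}, 1)}

    // The PushFunc built in NewServiceStore plays the producer role: it sends the full list on every change.
    ch <- ServiceUpdate{Op: "SET", Services: []string{"default/svc-a", "default/svc-b"}}
    close(ch)

    // The Mux's listen goroutine plays the consumer role: it merges every update it reads.
    for update := range ch {
      store.Merge("api", update)
    }
    fmt.Println("merged services:", store.services["api"], "pending kicks:", len(store.updates))
  }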

Run: starting the work

Once the ProxyServer has been created, its Run method is executed to start the actual work. Its main job is to periodically (default every 30s) sync the proxy's services/endpoints into iptables, generating the corresponding chains and NAT rules.

  // cmd/kube-proxy/app/server.go:308
  func (s *ProxyServer) Run() error {
    ...
    // Start up a webserver if requested
    if s.Config.HealthzPort > 0 {
      http.HandleFunc("/proxyMode", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "%s", s.ProxyMode)
      })
      configz.InstallHandler(http.DefaultServeMux)
      go wait.Until(func() {
        err := http.ListenAndServe(s.Config.HealthzBindAddress+":"+strconv.Itoa(int(s.Config.HealthzPort)), nil)
        if err != nil {
          glog.Errorf("Starting health server failed: %v", err)
        }
      }, 5*time.Second, wait.NeverStop)
    }
    ...
    // Just loop forever for now...
    s.Proxier.SyncLoop()
    return nil
  }

The key part of Run is simple: it executes the corresponding proxier's SyncLoop(). Sticking with the iptables mode as our example, let's see how SyncLoop() is implemented:

  // pkg/proxy/iptables/proxier.go:416
  // SyncLoop runs periodic work. This is expected to run as a goroutine or as the main loop of the app. It does not return.
  func (proxier *Proxier) SyncLoop() {
    t := time.NewTicker(proxier.syncPeriod)
    defer t.Stop()
    for {
      <-t.C
      glog.V(6).Infof("Periodic sync")
      proxier.Sync()
    }
  }

SyncLoop sets up a ticker and runs proxier.Sync() once per period, every 30s by default; the interval can be changed with --iptables-sync-period.
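
For illustration, the sync periods discussed so far map to kube-proxy flags roughly as follows (the values shown are just example settings, not recommendations):

  kube-proxy --proxy-mode=iptables \
    --config-sync-period=15m \
    --iptables-sync-period=30s \
    --iptables-min-sync-period=10s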

Now let's follow Sync():

  // pkg/proxy/iptables/proxier.go:409
  // Sync is called to immediately synchronize the proxier state to iptables
  func (proxier *Proxier) Sync() {
    proxier.mu.Lock()
    defer proxier.mu.Unlock()
    proxier.syncProxyRules()
  }

As you can see, it ultimately still calls proxier.syncProxyRules().

The same holds for the ProxyServer creation path analyzed in the previous section: whenever a service/endpoints update is watched, it ends up calling proxier.syncProxyRules().

So let's now look at the code of proxier.syncProxyRules().

proxier.syncProxyRules

The proxier.syncProxyRules below is the iptables-mode implementation; I won't paste the userspace-mode version.

  // pkg/proxy/iptables/proxier.go:791
  // This is where all of the iptables-save/restore calls happen.
  // The only other iptables rules are those that are setup in iptablesInit()
  // assumes proxier.mu is held
  func (proxier *Proxier) syncProxyRules() {
    if proxier.throttle != nil {
      proxier.throttle.Accept()
    }
    start := time.Now()
    defer func() {
      glog.V(4).Infof("syncProxyRules took %v", time.Since(start))
    }()
    // don't sync rules till we've received services and endpoints
    if !proxier.haveReceivedEndpointsUpdate || !proxier.haveReceivedServiceUpdate {
      glog.V(2).Info("Not syncing iptables until Services and Endpoints have been received from master")
      return
    }
    glog.V(3).Infof("Syncing iptables rules")

    // Create and link the kube services chain.
    {
      tablesNeedServicesChain := []utiliptables.Table{utiliptables.TableFilter, utiliptables.TableNAT}
      for _, table := range tablesNeedServicesChain {
        if _, err := proxier.iptables.EnsureChain(table, kubeServicesChain); err != nil {
          glog.Errorf("Failed to ensure that %s chain %s exists: %v", table, kubeServicesChain, err)
          return
        }
      }
      tableChainsNeedJumpServices := []struct {
        table utiliptables.Table
        chain utiliptables.Chain
      }{
        {utiliptables.TableFilter, utiliptables.ChainOutput},
        {utiliptables.TableNAT, utiliptables.ChainOutput},
        {utiliptables.TableNAT, utiliptables.ChainPrerouting},
      }
      comment := "kubernetes service portals"
      args := []string{"-m", "comment", "--comment", comment, "-j", string(kubeServicesChain)}
      for _, tc := range tableChainsNeedJumpServices {
        if _, err := proxier.iptables.EnsureRule(utiliptables.Prepend, tc.table, tc.chain, args...); err != nil {
          glog.Errorf("Failed to ensure that %s chain %s jumps to %s: %v", tc.table, tc.chain, kubeServicesChain, err)
          return
        }
      }
    }

    // Create and link the kube postrouting chain.
    {
      if _, err := proxier.iptables.EnsureChain(utiliptables.TableNAT, kubePostroutingChain); err != nil {
        glog.Errorf("Failed to ensure that %s chain %s exists: %v", utiliptables.TableNAT, kubePostroutingChain, err)
        return
      }
      comment := "kubernetes postrouting rules"
      args := []string{"-m", "comment", "--comment", comment, "-j", string(kubePostroutingChain)}
      if _, err := proxier.iptables.EnsureRule(utiliptables.Prepend, utiliptables.TableNAT, utiliptables.ChainPostrouting, args...); err != nil {
        glog.Errorf("Failed to ensure that %s chain %s jumps to %s: %v", utiliptables.TableNAT, utiliptables.ChainPostrouting, kubePostroutingChain, err)
        return
      }
    }

    // Get iptables-save output so we can check for existing chains and rules.
    // This will be a map of chain name to chain with rules as stored in iptables-save/iptables-restore
    existingFilterChains := make(map[utiliptables.Chain]string)
    iptablesSaveRaw, err := proxier.iptables.Save(utiliptables.TableFilter)
    if err != nil { // if we failed to get any rules
      glog.Errorf("Failed to execute iptables-save, syncing all rules: %v", err)
    } else { // otherwise parse the output
      existingFilterChains = utiliptables.GetChainLines(utiliptables.TableFilter, iptablesSaveRaw)
    }

    existingNATChains := make(map[utiliptables.Chain]string)
    iptablesSaveRaw, err = proxier.iptables.Save(utiliptables.TableNAT)
    if err != nil { // if we failed to get any rules
      glog.Errorf("Failed to execute iptables-save, syncing all rules: %v", err)
    } else { // otherwise parse the output
      existingNATChains = utiliptables.GetChainLines(utiliptables.TableNAT, iptablesSaveRaw)
    }

    filterChains := bytes.NewBuffer(nil)
    filterRules := bytes.NewBuffer(nil)
    natChains := bytes.NewBuffer(nil)
    natRules := bytes.NewBuffer(nil)

    // Write table headers.
    writeLine(filterChains, "*filter")
    writeLine(natChains, "*nat")

    // Make sure we keep stats for the top-level chains, if they existed
    // (which most should have because we created them above).
    if chain, ok := existingFilterChains[kubeServicesChain]; ok {
      writeLine(filterChains, chain)
    } else {
      writeLine(filterChains, utiliptables.MakeChainLine(kubeServicesChain))
    }
    if chain, ok := existingNATChains[kubeServicesChain]; ok {
      writeLine(natChains, chain)
    } else {
      writeLine(natChains, utiliptables.MakeChainLine(kubeServicesChain))
    }
    if chain, ok := existingNATChains[kubeNodePortsChain]; ok {
      writeLine(natChains, chain)
    } else {
      writeLine(natChains, utiliptables.MakeChainLine(kubeNodePortsChain))
    }
    if chain, ok := existingNATChains[kubePostroutingChain]; ok {
      writeLine(natChains, chain)
    } else {
      writeLine(natChains, utiliptables.MakeChainLine(kubePostroutingChain))
    }
    if chain, ok := existingNATChains[KubeMarkMasqChain]; ok {
      writeLine(natChains, chain)
    } else {
      writeLine(natChains, utiliptables.MakeChainLine(KubeMarkMasqChain))
    }

    // Install the kubernetes-specific postrouting rules. We use a whole chain for
    // this so that it is easier to flush and change, for example if the mark
    // value should ever change.
    writeLine(natRules, []string{
      "-A", string(kubePostroutingChain),
      "-m", "comment", "--comment", `"kubernetes service traffic requiring SNAT"`,
      "-m", "mark", "--mark", proxier.masqueradeMark,
      "-j", "MASQUERADE",
    }...)

    // Install the kubernetes-specific masquerade mark rule. We use a whole chain for
    // this so that it is easier to flush and change, for example if the mark
    // value should ever change.
    writeLine(natRules, []string{
      "-A", string(KubeMarkMasqChain),
      "-j", "MARK", "--set-xmark", proxier.masqueradeMark,
    }...)

    // Accumulate NAT chains to keep.
    activeNATChains := map[utiliptables.Chain]bool{} // use a map as a set

    // Accumulate the set of local ports that we will be holding open once this update is complete
    replacementPortsMap := map[localPort]closeable{}

    // Build rules for each service.
    for svcName, svcInfo := range proxier.serviceMap {
      protocol := strings.ToLower(string(svcInfo.protocol))

      // Create the per-service chain, retaining counters if possible.
      svcChain := servicePortChainName(svcName, protocol)
      if chain, ok := existingNATChains[svcChain]; ok {
        writeLine(natChains, chain)
      } else {
        writeLine(natChains, utiliptables.MakeChainLine(svcChain))
      }
      activeNATChains[svcChain] = true

      svcXlbChain := serviceLBChainName(svcName, protocol)
      if svcInfo.onlyNodeLocalEndpoints {
        // Only for services with the externalTraffic annotation set to OnlyLocal
        // create the per-service LB chain, retaining counters if possible.
        if lbChain, ok := existingNATChains[svcXlbChain]; ok {
          writeLine(natChains, lbChain)
        } else {
          writeLine(natChains, utiliptables.MakeChainLine(svcXlbChain))
        }
        activeNATChains[svcXlbChain] = true
      } else if activeNATChains[svcXlbChain] {
        // Cleanup the previously created XLB chain for this service
        delete(activeNATChains, svcXlbChain)
      }

      // Capture the clusterIP.
      args := []string{
        "-A", string(kubeServicesChain),
        "-m", "comment", "--comment", fmt.Sprintf(`"%s cluster IP"`, svcName.String()),
        "-m", protocol, "-p", protocol,
        "-d", fmt.Sprintf("%s/32", svcInfo.clusterIP.String()),
        "--dport", fmt.Sprintf("%d", svcInfo.port),
      }
      if proxier.masqueradeAll {
        writeLine(natRules, append(args, "-j", string(KubeMarkMasqChain))...)
      }
      if len(proxier.clusterCIDR) > 0 {
        writeLine(natRules, append(args, "! -s", proxier.clusterCIDR, "-j", string(KubeMarkMasqChain))...)
      }
      writeLine(natRules, append(args, "-j", string(svcChain))...)

      // Capture externalIPs.
      for _, externalIP := range svcInfo.externalIPs {
        // If the "external" IP happens to be an IP that is local to this
        // machine, hold the local port open so no other process can open it
        // (because the socket might open but it would never work).
        if local, err := isLocalIP(externalIP); err != nil {
          glog.Errorf("can't determine if IP is local, assuming not: %v", err)
        } else if local {
          lp := localPort{
            desc:     "externalIP for " + svcName.String(),
            ip:       externalIP,
            port:     svcInfo.port,
            protocol: protocol,
          }
          if proxier.portsMap[lp] != nil {
            glog.V(4).Infof("Port %s was open before and is still needed", lp.String())
            replacementPortsMap[lp] = proxier.portsMap[lp]
          } else {
            socket, err := proxier.portMapper.OpenLocalPort(&lp)
            if err != nil {
              glog.Errorf("can't open %s, skipping this externalIP: %v", lp.String(), err)
              continue
            }
            replacementPortsMap[lp] = socket
          }
        } // We're holding the port, so it's OK to install iptables rules.

        args := []string{
          "-A", string(kubeServicesChain),
          "-m", "comment", "--comment", fmt.Sprintf(`"%s external IP"`, svcName.String()),
          "-m", protocol, "-p", protocol,
          "-d", fmt.Sprintf("%s/32", externalIP),
          "--dport", fmt.Sprintf("%d", svcInfo.port),
        }
        // We have to SNAT packets to external IPs.
        writeLine(natRules, append(args, "-j", string(KubeMarkMasqChain))...)

        // Allow traffic for external IPs that does not come from a bridge (i.e. not from a container)
        // nor from a local process to be forwarded to the service.
        // This rule roughly translates to "all traffic from off-machine".
        // This is imperfect in the face of network plugins that might not use a bridge, but we can revisit that later.
        externalTrafficOnlyArgs := append(args,
          "-m", "physdev", "!", "--physdev-is-in",
          "-m", "addrtype", "!", "--src-type", "LOCAL")
        writeLine(natRules, append(externalTrafficOnlyArgs, "-j", string(svcChain))...)
        dstLocalOnlyArgs := append(args, "-m", "addrtype", "--dst-type", "LOCAL")
        // Allow traffic bound for external IPs that happen to be recognized as local IPs to stay local.
        // This covers cases like GCE load-balancers which get added to the local routing table.
        writeLine(natRules, append(dstLocalOnlyArgs, "-j", string(svcChain))...)
      }

      // Capture load-balancer ingress.
      for _, ingress := range svcInfo.loadBalancerStatus.Ingress {
        if ingress.IP != "" {
          // create service firewall chain
          fwChain := serviceFirewallChainName(svcName, protocol)
          if chain, ok := existingNATChains[fwChain]; ok {
            writeLine(natChains, chain)
          } else {
            writeLine(natChains, utiliptables.MakeChainLine(fwChain))
          }
          activeNATChains[fwChain] = true

          // The service firewall rules are created based on ServiceSpec.loadBalancerSourceRanges field.
          // This currently works for loadbalancers that preserves source ips.
          // For loadbalancers which direct traffic to service NodePort, the firewall rules will not apply.
          args := []string{
            "-A", string(kubeServicesChain),
            "-m", "comment", "--comment", fmt.Sprintf(`"%s loadbalancer IP"`, svcName.String()),
            "-m", protocol, "-p", protocol,
            "-d", fmt.Sprintf("%s/32", ingress.IP),
            "--dport", fmt.Sprintf("%d", svcInfo.port),
          }
          // jump to service firewall chain
          writeLine(natRules, append(args, "-j", string(fwChain))...)

          args = []string{
            "-A", string(fwChain),
            "-m", "comment", "--comment", fmt.Sprintf(`"%s loadbalancer IP"`, svcName.String()),
          }

          // Each source match rule in the FW chain may jump to either the SVC or the XLB chain
          chosenChain := svcXlbChain
          // If we are proxying globally, we need to masquerade in case we cross nodes.
          // If we are proxying only locally, we can retain the source IP.
          if !svcInfo.onlyNodeLocalEndpoints {
            writeLine(natRules, append(args, "-j", string(KubeMarkMasqChain))...)
            chosenChain = svcChain
          }

          if len(svcInfo.loadBalancerSourceRanges) == 0 {
            // allow all sources, so jump directly to the KUBE-SVC or KUBE-XLB chain
            writeLine(natRules, append(args, "-j", string(chosenChain))...)
          } else {
            // firewall filter based on each source range
            allowFromNode := false
            for _, src := range svcInfo.loadBalancerSourceRanges {
              writeLine(natRules, append(args, "-s", src, "-j", string(chosenChain))...)
              // ignore error because it has been validated
              _, cidr, _ := net.ParseCIDR(src)
              if cidr.Contains(proxier.nodeIP) {
                allowFromNode = true
              }
            }
            // generally, ip route rule was added to intercept request to loadbalancer vip from the
            // loadbalancer's backend hosts. In this case, request will not hit the loadbalancer but loop back directly.
            // Need to add the following rule to allow request on host.
            if allowFromNode {
              writeLine(natRules, append(args, "-s", fmt.Sprintf("%s/32", ingress.IP), "-j", string(chosenChain))...)
            }
          }

          // If the packet was able to reach the end of firewall chain, then it did not get DNATed.
          // It means the packet cannot go thru the firewall, then mark it for DROP
          writeLine(natRules, append(args, "-j", string(KubeMarkDropChain))...)
        }
      }

      // Capture nodeports. If we had more than 2 rules it might be
      // worthwhile to make a new per-service chain for nodeport rules, but
      // with just 2 rules it ends up being a waste and a cognitive burden.
      if svcInfo.nodePort != 0 {
        // Hold the local port open so no other process can open it
        // (because the socket might open but it would never work).
        lp := localPort{
          desc:     "nodePort for " + svcName.String(),
          ip:       "",
          port:     svcInfo.nodePort,
          protocol: protocol,
        }
        if proxier.portsMap[lp] != nil {
          glog.V(4).Infof("Port %s was open before and is still needed", lp.String())
          replacementPortsMap[lp] = proxier.portsMap[lp]
        } else {
          socket, err := proxier.portMapper.OpenLocalPort(&lp)
          if err != nil {
            glog.Errorf("can't open %s, skipping this nodePort: %v", lp.String(), err)
            continue
          }
          if lp.protocol == "udp" {
            proxier.clearUdpConntrackForPort(lp.port)
          }
          replacementPortsMap[lp] = socket
        } // We're holding the port, so it's OK to install iptables rules.

        args := []string{
          "-A", string(kubeNodePortsChain),
          "-m", "comment", "--comment", svcName.String(),
          "-m", protocol, "-p", protocol,
          "--dport", fmt.Sprintf("%d", svcInfo.nodePort),
        }
        if !svcInfo.onlyNodeLocalEndpoints {
          // Nodeports need SNAT, unless they're local.
          writeLine(natRules, append(args, "-j", string(KubeMarkMasqChain))...)
          // Jump to the service chain.
          writeLine(natRules, append(args, "-j", string(svcChain))...)
        } else {
          // TODO: Make all nodePorts jump to the firewall chain.
          // Currently we only create it for loadbalancers (#33586).
          writeLine(natRules, append(args, "-j", string(svcXlbChain))...)
        }
      }

      // If the service has no endpoints then reject packets.
      if len(proxier.endpointsMap[svcName]) == 0 {
        writeLine(filterRules,
          "-A", string(kubeServicesChain),
          "-m", "comment", "--comment", fmt.Sprintf(`"%s has no endpoints"`, svcName.String()),
          "-m", protocol, "-p", protocol,
          "-d", fmt.Sprintf("%s/32", svcInfo.clusterIP.String()),
          "--dport", fmt.Sprintf("%d", svcInfo.port),
          "-j", "REJECT",
        )
        continue
      }

      // Generate the per-endpoint chains. We do this in multiple passes so we
      // can group rules together.
      // These two slices parallel each other - keep in sync
      endpoints := make([]*endpointsInfo, 0)
      endpointChains := make([]utiliptables.Chain, 0)
      for _, ep := range proxier.endpointsMap[svcName] {
        endpoints = append(endpoints, ep)
        endpointChain := servicePortEndpointChainName(svcName, protocol, ep.ip)
        endpointChains = append(endpointChains, endpointChain)

        // Create the endpoint chain, retaining counters if possible.
        if chain, ok := existingNATChains[utiliptables.Chain(endpointChain)]; ok {
          writeLine(natChains, chain)
        } else {
          writeLine(natChains, utiliptables.MakeChainLine(endpointChain))
        }
        activeNATChains[endpointChain] = true
      }

      // First write session affinity rules, if applicable.
      if svcInfo.sessionAffinityType == api.ServiceAffinityClientIP {
        for _, endpointChain := range endpointChains {
          writeLine(natRules,
            "-A", string(svcChain),
            "-m", "comment", "--comment", svcName.String(),
            "-m", "recent", "--name", string(endpointChain),
            "--rcheck", "--seconds", fmt.Sprintf("%d", svcInfo.stickyMaxAgeMinutes*60), "--reap",
            "-j", string(endpointChain))
        }
      }

      // Now write loadbalancing & DNAT rules.
      n := len(endpointChains)
      for i, endpointChain := range endpointChains {
        // Balancing rules in the per-service chain.
        args := []string{
          "-A", string(svcChain),
          "-m", "comment", "--comment", svcName.String(),
        }
        if i < (n - 1) {
          // Each rule is a probabilistic match.
          args = append(args,
            "-m", "statistic",
            "--mode", "random",
            "--probability", fmt.Sprintf("%0.5f", 1.0/float64(n-i)))
        }
        // The final (or only if n == 1) rule is a guaranteed match.
        args = append(args, "-j", string(endpointChain))
        writeLine(natRules, args...)

        // Rules in the per-endpoint chain.
        args = []string{
          "-A", string(endpointChain),
          "-m", "comment", "--comment", svcName.String(),
        }
        // Handle traffic that loops back to the originator with SNAT.
        writeLine(natRules, append(args,
          "-s", fmt.Sprintf("%s/32", strings.Split(endpoints[i].ip, ":")[0]),
          "-j", string(KubeMarkMasqChain))...)
        // Update client-affinity lists.
        if svcInfo.sessionAffinityType == api.ServiceAffinityClientIP {
          args = append(args, "-m", "recent", "--name", string(endpointChain), "--set")
        }
        // DNAT to final destination.
        args = append(args, "-m", protocol, "-p", protocol, "-j", "DNAT", "--to-destination", endpoints[i].ip)
        writeLine(natRules, args...)
      }

      // The logic below this applies only if this service is marked as OnlyLocal
      if !svcInfo.onlyNodeLocalEndpoints {
        continue
      }

      // Now write ingress loadbalancing & DNAT rules only for services that have a localOnly annotation
      // TODO - This logic may be combinable with the block above that creates the svc balancer chain
      localEndpoints := make([]*endpointsInfo, 0)
      localEndpointChains := make([]utiliptables.Chain, 0)
      for i := range endpointChains {
        if endpoints[i].localEndpoint {
          // These slices parallel each other; must be kept in sync
          localEndpoints = append(localEndpoints, endpoints[i])
          localEndpointChains = append(localEndpointChains, endpointChains[i])
        }
      }

      // First rule in the chain redirects all pod -> external vip traffic to the
      // Service's ClusterIP instead. This happens whether or not we have local
      // endpoints; only if clusterCIDR is specified
      if len(proxier.clusterCIDR) > 0 {
        args = []string{
          "-A", string(svcXlbChain),
          "-m", "comment", "--comment",
          fmt.Sprintf(`"Redirect pods trying to reach external loadbalancer VIP to clusterIP"`),
          "-s", proxier.clusterCIDR,
          "-j", string(svcChain),
        }
        writeLine(natRules, args...)
      }

      numLocalEndpoints := len(localEndpointChains)
      if numLocalEndpoints == 0 {
        // Blackhole all traffic since there are no local endpoints
        args := []string{
          "-A", string(svcXlbChain),
          "-m", "comment", "--comment",
          fmt.Sprintf(`"%s has no local endpoints"`, svcName.String()),
          "-j",
          string(KubeMarkDropChain),
        }
        writeLine(natRules, args...)
      } else {
        // Setup probability filter rules only over local endpoints
        for i, endpointChain := range localEndpointChains {
          // Balancing rules in the per-service chain.
          args := []string{
            "-A", string(svcXlbChain),
            "-m", "comment", "--comment",
            fmt.Sprintf(`"Balancing rule %d for %s"`, i, svcName.String()),
          }
          if i < (numLocalEndpoints - 1) {
            // Each rule is a probabilistic match.
            args = append(args,
              "-m", "statistic",
              "--mode", "random",
              "--probability", fmt.Sprintf("%0.5f", 1.0/float64(numLocalEndpoints-i)))
          }
          // The final (or only if n == 1) rule is a guaranteed match.
          args = append(args, "-j", string(endpointChain))
          writeLine(natRules, args...)
        }
      }
    }

    // Delete chains no longer in use.
    for chain := range existingNATChains {
      if !activeNATChains[chain] {
        chainString := string(chain)
        if !strings.HasPrefix(chainString, "KUBE-SVC-") && !strings.HasPrefix(chainString, "KUBE-SEP-") && !strings.HasPrefix(chainString, "KUBE-FW-") && !strings.HasPrefix(chainString, "KUBE-XLB-") {
          // Ignore chains that aren't ours.
          continue
        }
        // We must (as per iptables) write a chain-line for it, which has
        // the nice effect of flushing the chain. Then we can remove the
        // chain.
        writeLine(natChains, existingNATChains[chain])
        writeLine(natRules, "-X", chainString)
      }
    }

    // Finally, tail-call to the nodeports chain. This needs to be after all
    // other service portal rules.
    writeLine(natRules,
      "-A", string(kubeServicesChain),
      "-m", "comment", "--comment", `"kubernetes service nodeports; NOTE: this must be the last rule in this chain"`,
      "-m", "addrtype", "--dst-type", "LOCAL",
      "-j", string(kubeNodePortsChain))

    // Write the end-of-table markers.
    writeLine(filterRules, "COMMIT")
    writeLine(natRules, "COMMIT")

    // Sync rules.
    // NOTE: NoFlushTables is used so we don't flush non-kubernetes chains in the table.
    filterLines := append(filterChains.Bytes(), filterRules.Bytes()...)
    natLines := append(natChains.Bytes(), natRules.Bytes()...)
    lines := append(filterLines, natLines...)

    glog.V(3).Infof("Restoring iptables rules: %s", lines)
    err = proxier.iptables.RestoreAll(lines, utiliptables.NoFlushTables, utiliptables.RestoreCounters)
    if err != nil {
      glog.Errorf("Failed to execute iptables-restore: %v\nRules:\n%s", err, lines)
      // Revert new local ports.
      revertPorts(replacementPortsMap, proxier.portsMap)
      return
    }

    // Close old local ports and save new ones.
    for k, v := range proxier.portsMap {
      if replacementPortsMap[k] == nil {
        v.Close()
      }
    }
    proxier.portsMap = replacementPortsMap
  }

Seeing how long this method is, I originally intended to add more analysis comments, but by the time I finished reading through it I was exhausted.

If you have a Kubernetes environment of your own, pick a node and inspect its iptables; reading the code above against the actual rules makes it much easier to follow.

If you don't have an environment, that's fine: refer to the example in my previous post, kube-proxy工作原理 (How kube-proxy Works).
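
For example, on a node you could dump the kube-proxy-managed NAT chains with something like the following (the chain-name hashes will of course be specific to your cluster):

  iptables-save -t nat | grep -E 'KUBE-(SERVICES|SVC|SEP|NODEPORTS|POSTROUTING|MARK-MASQ)'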

Summary

  • kube-proxy implements two proxy modes on Linux, userspace and iptables, and one proxy mode on Windows, userspace.

  • kube-proxy periodically lists and watches all Service and Endpoints resources from kube-apiserver, passes them through channels to the corresponding Broadcaster, which notifies the listeners registered by the Proxier. The list period defaults to 15 minutes and can be configured with --config-sync-period.
  • The listeners implement the OnServiceUpdate and OnEndpointsUpdate interfaces, which ultimately call proxier.syncProxyRules() to update iptables.
  • In addition, the ProxyServer's Run method periodically calls proxier.syncProxyRules() to update iptables, every 30s by default, configurable with --iptables-sync-period.
