Please read this alongside the source code; this post is only a summary, and the source itself carries detailed comments. Based on go1.12.4.

http.Client represents an HTTP client. It handles client-side concerns such as cookies, redirects, and timeouts, and it contains a Transport, which is of the RoundTripper interface type.

    type Client struct {
        // Transport specifies the mechanism by which individual
        // HTTP requests are made.
        // If nil, DefaultTransport is used.
        Transport RoundTripper
        ...
    }
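As a quick orientation before digging into Transport, here is a minimal sketch (my own, not from the net/http source; the URL is a placeholder) of a client that only customizes the timeout and redirect policy and leaves everything below that layer to the default Transport:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // covers dialing, redirects, and reading the body
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                if len(via) >= 3 {
                    return http.ErrUseLastResponse // stop following redirects, keep the last response
                }
                return nil
            },
            // Transport is nil, so http.DefaultTransport is used.
        }
        resp, err := client.Get("https://example.com") // placeholder URL
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body))
    }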

RoundTripper defines how a single HTTP request is executed: given a Request, return a Response. An implementation must be safe for concurrent use, since multiple goroutines may call it at the same time:

    type RoundTripper interface {
        RoundTrip(*Request) (*Response, error)
    }
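Any type implementing this single method can be plugged into a Client. As an illustration (my own sketch, not part of net/http; loggingTransport is a hypothetical name), a RoundTripper that times each request and then delegates to an underlying transport could look like this:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingTransport wraps another RoundTripper and logs each request.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (lt *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        resp, err := lt.next.RoundTrip(req) // delegate the actual work
        log.Printf("%s %s -> err=%v (%v)", req.Method, req.URL, err, time.Since(start))
        return resp, err
    }

    func main() {
        client := &http.Client{
            Transport: &loggingTransport{next: http.DefaultTransport},
        }
        _, _ = client.Get("https://example.com") // placeholder URL; errors ignored in this sketch
    }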

If you do not explicitly set a RoundTripper on http.Client, the default DefaultTransport is used. A Transport carries state across requests: it caches TCP connections so that clients can reuse them instead of dialing a new one every time. It has to support both http and https, as well as HTTP/1.1 and HTTP/2. DefaultTransport supports HTTP/2 out of the box; a custom Transport needs it enabled explicitly, for example via ConfigureTransport.
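For a custom Transport, the documented way to wire up HTTP/2 is golang.org/x/net/http2.ConfigureTransport. A short sketch, assuming that external package is available:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"

        "golang.org/x/net/http2"
    )

    func main() {
        tr := &http.Transport{
            TLSClientConfig: &tls.Config{},
        }
        // A hand-built Transport does not speak HTTP/2 by default;
        // ConfigureTransport registers the "h2" entry in TLSNextProto.
        if err := http2.ConfigureTransport(tr); err != nil {
            log.Fatal(err)
        }
        client := &http.Client{Transport: tr}
        _ = client // use as usual; HTTPS requests may now negotiate HTTP/2 via ALPN
    }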

Transport has to satisfy the interface; its exported RoundTrip method simply delegates to the unexported roundTrip shown below:


    // roundTrip implements a RoundTripper over HTTP.
    func (t *Transport) roundTrip(req *Request) (*Response, error) {
        ...
        for {
            select {
            case <-ctx.Done():
                req.closeBody()
                return nil, ctx.Err()
            default:
            }

            // treq gets modified by roundTrip, so we need to recreate for each retry.
            treq := &transportRequest{Request: req, trace: trace}
            cm, err := t.connectMethodForRequest(treq)
            if err != nil {
                req.closeBody()
                return nil, err
            }

            // Obtain a connection.
            // Get the cached or newly-created connection to either the
            // host (for http or https), the http proxy, or the http proxy
            // pre-CONNECTed to https server. In any case, we'll be ready
            // to send it requests.
            pconn, err := t.getConn(treq, cm)
            if err != nil {
                t.setReqCanceler(req, nil)
                req.closeBody()
                return nil, err
            }

            var resp *Response
            if pconn.alt != nil {
                // HTTP/2 path.
                t.decHostConnCount(cm.key()) // don't count cached http2 conns toward conns per host
                t.setReqCanceler(req, nil)   // not cancelable with CancelRequest
                resp, err = pconn.alt.RoundTrip(req)
            } else {
                // Call this pconn's roundTrip method to obtain the response.
                resp, err = pconn.roundTrip(treq)
            }
            if err == nil {
                return resp, nil
            }
            if !pconn.shouldRetryRequest(req, err) {
                // Issue 16465: return underlying net.Conn.Read error from peek,
                // as we've historically done.
                if e, ok := err.(transportReadFromServerError); ok {
                    err = e.err
                }
                return nil, err
            }
            testHookRoundTripRetried()

            // Rewind the body if we're able to.
            if req.GetBody != nil {
                newReq := *req
                var err error
                newReq.Body, err = req.GetBody()
                if err != nil {
                    return nil, err
                }
                req = &newReq
            }
        }
    }

roundTrip essentially uses getConn to obtain a persistConn and then calls that connection's roundTrip method to produce the response. getConn is implemented as follows:

    // getConn dials and creates a new persistConn to the target as
    // specified in the connectMethod. This includes doing a proxy CONNECT
    // and/or setting up TLS. If this doesn't return an error, the persistConn
    // is ready to write requests to.
    func (t *Transport) getConn(treq *transportRequest, cm connectMethod) (*persistConn, error) {
        req := treq.Request
        trace := treq.trace
        ctx := req.Context()
        if trace != nil && trace.GetConn != nil {
            trace.GetConn(cm.addr())
        }

        // First try to take an idle connection from the idleConn pool.
        if pc, idleSince := t.getIdleConn(cm); pc != nil {
            if trace != nil && trace.GotConn != nil {
                trace.GotConn(pc.gotIdleConnTrace(idleSince))
            }
            // set request canceler to some non-nil function so we
            // can detect whether it was cleared between now and when
            // we enter roundTrip
            t.setReqCanceler(req, func(error) {})
            return pc, nil
        }

        type dialRes struct {
            pc  *persistConn
            err error
        }
        dialc := make(chan dialRes) // the dial result is delivered asynchronously on this channel
        cmKey := cm.key()           // key identifying a connection

        // Copy these hooks so we don't race on the postPendingDial in
        // the goroutine we launch. Issue 11136.
        testHookPrePendingDial := testHookPrePendingDial
        testHookPostPendingDial := testHookPostPendingDial

        handlePendingDial := func() {
            testHookPrePendingDial()
            go func() {
                if v := <-dialc; v.err == nil {
                    t.putOrCloseIdleConn(v.pc)
                } else {
                    t.decHostConnCount(cmKey)
                }
                testHookPostPendingDial()
            }()
        }

        cancelc := make(chan error, 1)
        t.setReqCanceler(req, func(err error) { cancelc <- err })

        // Bump the per-host connection count, while also watching for an idle
        // connection or a cancellation.
        if t.MaxConnsPerHost > 0 {
            select {
            case <-t.incHostConnCount(cmKey):
                // count below conn per host limit; proceed
            case pc := <-t.getIdleConnCh(cm):
                if trace != nil && trace.GotConn != nil {
                    trace.GotConn(httptrace.GotConnInfo{Conn: pc.conn, Reused: pc.isReused()})
                }
                return pc, nil
            case <-req.Cancel:
                return nil, errRequestCanceledConn
            case <-req.Context().Done():
                return nil, req.Context().Err()
            case err := <-cancelc:
                if err == errRequestCanceled {
                    err = errRequestCanceledConn
                }
                return nil, err
            }
        }

        // Kick off the dial asynchronously.
        go func() {
            pc, err := t.dialConn(ctx, cm)
            dialc <- dialRes{pc, err}
        }()

        // Wait for one of several events:
        // 1. our own dial finishes;
        // 2. another request finishes and an idle connection becomes reusable;
        // 3. the request is canceled.
        // Whichever of 1 or 2 happens first wins and is returned.
        // In every case other than "our own dial succeeded", handlePendingDial is
        // invoked to decide what to do with the dialed connection once it completes.
        idleConnCh := t.getIdleConnCh(cm)
        select {
        case v := <-dialc: // our own dial finished and sent its result here
            // Our dial finished.
            if v.pc != nil {
                if trace != nil && trace.GotConn != nil && v.pc.alt == nil {
                    trace.GotConn(httptrace.GotConnInfo{Conn: v.pc.conn})
                }
                return v.pc, nil
            }
            // Our dial failed. See why to return a nicer error
            // value.
            t.decHostConnCount(cmKey)
            select {
            case <-req.Cancel:
                // It was an error due to cancelation, so prioritize that
                // error value. (Issue 16049)
                return nil, errRequestCanceledConn
            case <-req.Context().Done():
                return nil, req.Context().Err()
            case err := <-cancelc:
                if err == errRequestCanceled {
                    err = errRequestCanceledConn
                }
                return nil, err
            default:
                // It wasn't an error due to cancelation, so
                // return the original error message:
                return nil, v.err
            }
        case pc := <-idleConnCh: // an idle connection became available; return it directly
            // Another request finished first and its net.Conn
            // became available before our dial. Or somebody
            // else's dial that they didn't use.
            // But our dial is still going, so give it away
            // when it finishes:
            handlePendingDial()
            if trace != nil && trace.GotConn != nil {
                trace.GotConn(httptrace.GotConnInfo{Conn: pc.conn, Reused: pc.isReused()})
            }
            return pc, nil
        case <-req.Cancel:
            handlePendingDial()
            return nil, errRequestCanceledConn
        case <-req.Context().Done():
            handlePendingDial()
            return nil, req.Context().Err()
        case err := <-cancelc:
            handlePendingDial()
            if err == errRequestCanceled {
                err = errRequestCanceledConn
            }
            return nil, err
        }
    }

getConn first tries to take a connection from the idle pool. If none is available it dials a new one, and while the dial is in flight it will still reuse an idle connection if one shows up in the pool first.
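The core idea can be boiled down to a small pattern: start the dial in a goroutine, then select on whichever comes back first. A stripped-down sketch of that pattern (my own illustration with placeholder parameters, not the real net/http internals):

    package sketch

    import (
        "context"
        "net"
    )

    // getConnSketch illustrates the "race a fresh dial against the idle pool"
    // pattern behind Transport.getConn. dial and idleCh are placeholders.
    func getConnSketch(ctx context.Context, dial func(context.Context) (net.Conn, error),
        idleCh <-chan net.Conn) (net.Conn, error) {

        type dialRes struct {
            c   net.Conn
            err error
        }
        dialc := make(chan dialRes, 1) // buffered so the dial goroutine never blocks

        go func() {
            c, err := dial(ctx)
            dialc <- dialRes{c, err}
        }()

        // If somebody else wins, the pending dial result still has to be consumed,
        // mirroring handlePendingDial in the real code.
        consumePendingDial := func() {
            go func() {
                if v := <-dialc; v.err == nil {
                    v.c.Close() // a real pool would try to keep it for reuse instead
                }
            }()
        }

        select {
        case v := <-dialc: // our own dial finished first
            return v.c, v.err
        case c := <-idleCh: // another request freed a connection before our dial finished
            consumePendingDial()
            return c, nil
        case <-ctx.Done(): // the request was canceled
            consumePendingDial()
            return nil, ctx.Err()
        }
    }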

Next, let's look at how dialConn actually establishes a connection:

    func (t *Transport) dialConn(ctx context.Context, cm connectMethod) (*persistConn, error) {
        // Note the various channels initialized here.
        pconn := &persistConn{
            t:             t,
            cacheKey:      cm.key(),
            reqch:         make(chan requestAndChan, 1), // used to hand requests to readLoop
            writech:       make(chan writeRequest, 1),   // used to hand requests to writeLoop
            closech:       make(chan struct{}),          // signals that the connection has been closed
            writeErrCh:    make(chan error, 1),          // writeLoop reports write errors to roundTrip here
            writeLoopDone: make(chan struct{}),          // closed when writeLoop exits
        }
        trace := httptrace.ContextClientTrace(ctx)
        wrapErr := func(err error) error {
            if cm.proxyURL != nil {
                // Return a typed error, per Issue 16997
                return &net.OpError{Op: "proxyconnect", Net: "tcp", Err: err}
            }
            return err
        }

        conn, err := t.dial(ctx, "tcp", cm.addr())
        if err != nil {
            return nil, wrapErr(err)
        }
        pconn.conn = conn

        // Wrap the connection so that closing it updates the per-host connection count.
        if t.MaxConnsPerHost > 0 {
            pconn.conn = &connCloseListener{Conn: pconn.conn, t: t, cmKey: pconn.cacheKey}
        }

        // Wrap the conn for buffered reads/writes and start the read and write goroutines.
        pconn.br = bufio.NewReader(pconn)
        pconn.bw = bufio.NewWriter(persistConnWriter{pconn})
        go pconn.readLoop()
        go pconn.writeLoop()
        return pconn, nil
    }

As you can see, dialConn first calls the dial function to obtain a conn, wraps it into a pconn, starts readLoop and writeLoop, and then returns that pconn.
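The important point is that every persistConn owns exactly two goroutines for its whole lifetime. A minimal sketch of that wiring (placeholder types and a toy protocol, not the real persistConn):

    package sketch

    import (
        "bufio"
        "net"
    )

    // pconnSketch shows the shape of a persistConn-style wrapper: one reader
    // goroutine and one writer goroutine live for the lifetime of the connection.
    type pconnSketch struct {
        conn    net.Conn
        br      *bufio.Reader
        bw      *bufio.Writer
        writech chan []byte   // payloads for the write loop (stand-in for writeRequest)
        closech chan struct{} // closed once to stop the write loop
    }

    func newPconnSketch(conn net.Conn) *pconnSketch {
        pc := &pconnSketch{
            conn:    conn,
            br:      bufio.NewReader(conn),
            bw:      bufio.NewWriter(conn),
            writech: make(chan []byte, 1),
            closech: make(chan struct{}),
        }
        go pc.readLoop()
        go pc.writeLoop()
        return pc
    }

    func (pc *pconnSketch) writeLoop() {
        for {
            select {
            case p := <-pc.writech:
                if _, err := pc.bw.Write(p); err == nil {
                    pc.bw.Flush() // flush each request to the wire
                }
            case <-pc.closech:
                return
            }
        }
    }

    func (pc *pconnSketch) readLoop() {
        for {
            // Stand-in for readResponse: read one line per "response".
            if _, err := pc.br.ReadString('\n'); err != nil {
                close(pc.closech) // stop the write loop and exit
                return
            }
        }
    }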

Taking readLoop as the example, let's see how a response is read from a pc:

    func (pc *persistConn) readLoop() {
        // The default exit reason is an error; when the goroutine exits, the deferred
        // function closes the connection and removes it from the idle pool. Under
        // normal operation this goroutine never exits: it loops forever and talks to
        // the other goroutines via channels to handle requests.
        closeErr := errReadLoopExiting // default value, if not changed below
        defer func() {
            pc.close(closeErr)
            pc.t.removeIdleConn(pc)
        }()

        // Try to put this connection back into the idle pool.
        tryPutIdleConn := func(trace *httptrace.ClientTrace) bool {
            if err := pc.t.tryPutIdleConn(pc); err != nil {
                closeErr = err
                if trace != nil && trace.PutIdleConn != nil && err != errKeepAlivesDisabled {
                    trace.PutIdleConn(err)
                }
                return false
            }
            if trace != nil && trace.PutIdleConn != nil {
                trace.PutIdleConn(nil)
            }
            return true
        }

        // Used to enforce ordering: return the connection to the pool first,
        // then let the caller finish reading response.Body.
        // eofc is used to block caller goroutines reading from Response.Body
        // at EOF until this goroutines has (potentially) added the connection
        // back to the idle pool.
        eofc := make(chan struct{})
        defer close(eofc) // unblock reader on errors

        // Read this once, before loop starts. (to avoid races in tests)
        testHookMu.Lock()
        testHookReadLoopBeforeNextRead := testHookReadLoopBeforeNextRead
        testHookMu.Unlock()

        alive := true
        for alive {
            pc.readLimit = pc.maxHeaderResponseSize()
            _, err := pc.br.Peek(1)

            pc.mu.Lock()
            if pc.numExpectedResponses == 0 {
                pc.readLoopPeekFailLocked(err)
                pc.mu.Unlock()
                return
            }
            pc.mu.Unlock()

            // Receive the next request to handle.
            rc := <-pc.reqch
            trace := httptrace.ContextClientTrace(rc.req.Context())

            var resp *Response
            if err == nil {
                // Read the response.
                resp, err = pc.readResponse(rc, trace)
            } else {
                err = transportReadFromServerError{err}
                closeErr = err
            }

            if err != nil {
                if pc.readLimit <= 0 {
                    err = fmt.Errorf("net/http: server response headers exceeded %d bytes; aborted", pc.maxHeaderResponseSize())
                }
                select {
                case rc.ch <- responseAndError{err: err}:
                case <-rc.callerGone:
                    return
                }
                return
            }
            pc.readLimit = maxInt64 // effectively no limit for response bodies

            pc.mu.Lock()
            pc.numExpectedResponses--
            pc.mu.Unlock()

            bodyWritable := resp.bodyIsWritable()
            hasBody := rc.req.Method != "HEAD" && resp.ContentLength != 0

            if resp.Close || rc.req.Close || resp.StatusCode <= 199 || bodyWritable {
                // Don't do keep-alive on error if either party requested a close
                // or we get an unexpected informational (1xx) response.
                // StatusCode 100 is already handled above.
                alive = false
            }

            if !hasBody || bodyWritable {
                pc.t.setReqCanceler(rc.req, nil)

                // Put the idle conn back into the pool before we send the response
                // so if they process it quickly and make another request, they'll
                // get this same conn. But we use the unbuffered channel 'rc'
                // to guarantee that persistConn.roundTrip got out of its select
                // potentially waiting for this persistConn to close.
                // but after
                alive = alive &&
                    !pc.sawEOF &&
                    pc.wroteRequest() &&
                    tryPutIdleConn(trace)

                if bodyWritable {
                    closeErr = errCallerOwnsConn
                }

                select {
                case rc.ch <- responseAndError{res: resp}:
                case <-rc.callerGone:
                    return
                }

                // Now that they've read from the unbuffered channel, they're safely
                // out of the select that also waits on this goroutine to die, so
                // we're allowed to exit now if needed (if alive is false)
                testHookReadLoopBeforeNextRead()
                continue
            }

            // bodyEOFSignal implements the io.ReadCloser interface and guarantees the
            // ordering above: the connection is only handed back once the caller has
            // read the response body to EOF (or closed it early).
            waitForBodyRead := make(chan bool, 2)
            body := &bodyEOFSignal{
                body: resp.Body,
                earlyCloseFn: func() error {
                    waitForBodyRead <- false
                    <-eofc // will be closed by deferred call at the end of the function
                    return nil
                },
                fn: func(err error) error {
                    isEOF := err == io.EOF
                    waitForBodyRead <- isEOF
                    if isEOF {
                        <-eofc // see comment above eofc declaration
                    } else if err != nil {
                        if cerr := pc.canceled(); cerr != nil {
                            return cerr
                        }
                    }
                    return err
                },
            }

            resp.Body = body
            if rc.addedGzip && strings.EqualFold(resp.Header.Get("Content-Encoding"), "gzip") {
                resp.Body = &gzipReader{body: body}
                resp.Header.Del("Content-Encoding")
                resp.Header.Del("Content-Length")
                resp.ContentLength = -1
                resp.Uncompressed = true
            }

            select {
            // Send the wrapped response back.
            case rc.ch <- responseAndError{res: resp}:
            case <-rc.callerGone:
                return
            }

            // Before looping back to the top of this function and peeking on
            // the bufio.Reader, wait for the caller goroutine to finish
            // reading the response body. (or for cancelation or death)
            select {
            case bodyEOF := <-waitForBodyRead:
                pc.t.setReqCanceler(rc.req, nil) // before pc might return to idle pool
                alive = alive &&
                    bodyEOF &&
                    !pc.sawEOF &&
                    pc.wroteRequest() &&
                    tryPutIdleConn(trace)
                if bodyEOF {
                    eofc <- struct{}{} // all the checks above passed; unblock the body reader
                }
            case <-rc.req.Cancel:
                alive = false
                pc.t.CancelRequest(rc.req)
            case <-rc.req.Context().Done():
                alive = false
                pc.t.cancelRequest(rc.req, rc.req.Context().Err())
            case <-pc.closech:
                alive = false
            }

            testHookReadLoopBeforeNextRead()
        }
    }
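A practical consequence of the bodyEOFSignal/tryPutIdleConn dance: the connection only goes back into the idle pool if the caller reads the response body to EOF; closing it early throws the connection away. So on the client side it pays to drain and close resp.Body even when you don't care about the contents. A small usage sketch (my own, with a placeholder URL):

    package main

    import (
        "io"
        "io/ioutil"
        "log"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com/ping") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        // Drain the body so readLoop observes EOF and can run tryPutIdleConn,
        // then Close so bodyEOFSignal releases the connection for reuse.
        io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
    }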

In the readLoop above, the request to handle is received from a channel, the response is read via readResponse, and the result is sent back over another channel. So where is that request sent from?

Back in Transport.roundTrip, where we started: after getConn returns a pconn, pconn.roundTrip is called, and that is where the request is sent. Let's take a look:

    func (pc *persistConn) roundTrip(req *transportRequest) (resp *Response, err error) {
        testHookEnterRoundTrip()
        if !pc.t.replaceReqCanceler(req.Request, pc.cancelRequest) {
            pc.t.putOrCloseIdleConn(pc)
            return nil, errRequestCanceled
        }
        pc.mu.Lock()
        pc.numExpectedResponses++
        headerFn := pc.mutateHeaderFunc
        pc.mu.Unlock()
        if headerFn != nil {
            headerFn(req.extraHeaders())
        }
        ...
        var continueCh chan struct{}
        if req.ProtoAtLeast(1, 1) && req.Body != nil && req.expectsContinue() {
            continueCh = make(chan struct{}, 1)
        }

        if pc.t.DisableKeepAlives && !req.wantsClose() {
            req.extraHeaders().Set("Connection", "close")
        }

        gone := make(chan struct{})
        defer close(gone)

        defer func() {
            if err != nil {
                pc.t.setReqCanceler(req.Request, nil)
            }
        }()

        const debugRoundTrip = false

        // Send the request via writech.
        // Write the request concurrently with waiting for a response,
        // in case the server decides to reply before reading our full
        // request body.
        startBytesWritten := pc.nwrite
        writeErrCh := make(chan error, 1)
        pc.writech <- writeRequest{req, writeErrCh, continueCh}

        resc := make(chan responseAndError)
        // Hand the request currently being processed to readLoop, which pulls it
        // off the channel and calls readResponse for it.
        // requestAndChan.ch is the channel the response comes back on.
        pc.reqch <- requestAndChan{
            req:        req.Request,
            ch:         resc,
            addedGzip:  requestedGzip,
            continueCh: continueCh,
            callerGone: gone,
        }

        var respHeaderTimer <-chan time.Time
        cancelChan := req.Request.Cancel
        ctxDoneChan := req.Context().Done()
        for {
            testHookWaitResLoop()
            select {
            case err := <-writeErrCh: // writeLoop reported an error (or nil)
                if debugRoundTrip {
                    req.logf("writeErrCh resv: %T/%#v", err, err)
                }
                if err != nil {
                    pc.close(fmt.Errorf("write error: %v", err))
                    return nil, pc.mapRoundTripError(req, startBytesWritten, err)
                }
                if d := pc.t.ResponseHeaderTimeout; d > 0 {
                    if debugRoundTrip {
                        req.logf("starting timer for %v", d)
                    }
                    timer := time.NewTimer(d)
                    defer timer.Stop() // prevent leaks
                    respHeaderTimer = timer.C
                }
            case <-pc.closech:
                if debugRoundTrip {
                    req.logf("closech recv: %T %#v", pc.closed, pc.closed)
                }
                return nil, pc.mapRoundTripError(req, startBytesWritten, pc.closed)
            case <-respHeaderTimer:
                if debugRoundTrip {
                    req.logf("timeout waiting for response headers.")
                }
                pc.close(errTimeout)
                return nil, errTimeout
            case re := <-resc: // readLoop sends the result back on the resc channel
                if (re.res == nil) == (re.err == nil) {
                    panic(fmt.Sprintf("internal error: exactly one of res or err should be set; nil=%v", re.res == nil))
                }
                if debugRoundTrip {
                    req.logf("resc recv: %p, %T/%#v", re.res, re.err, re.err)
                }
                if re.err != nil {
                    return nil, pc.mapRoundTripError(req, startBytesWritten, re.err)
                }
                return re.res, nil
            case <-cancelChan:
                pc.t.CancelRequest(req.Request)
                cancelChan = nil
            case <-ctxDoneChan:
                pc.t.cancelRequest(req.Request, req.Context().Err())
                cancelChan = nil
                ctxDoneChan = nil
            }
        }
    }

This function wraps the request, sends it over channels to readLoop and writeLoop respectively, and then selects on the various channels, handling each kind of event accordingly.
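The hand-off boils down to: push a write job to the write loop, tell the read loop where to deliver the next response, then wait for whichever outcome fires first. A compressed sketch of that shape (placeholder types, not the real writeRequest/requestAndChan):

    package sketch

    import (
        "context"
        "errors"
        "time"
    )

    // result is a placeholder for net/http's responseAndError pair.
    type result struct {
        body []byte
        err  error
    }

    // roundTripSketch mirrors the select-driven hand-off in persistConn.roundTrip.
    // writech and reqch stand in for the real writech/reqch channels.
    func roundTripSketch(ctx context.Context, writech chan<- []byte, reqch chan<- chan result,
        payload []byte, headerTimeout time.Duration) (result, error) {

        resc := make(chan result, 1)
        writech <- payload // hand the serialized request to the write loop
        reqch <- resc      // tell the read loop where to send the next response

        var timer <-chan time.Time
        if headerTimeout > 0 {
            t := time.NewTimer(headerTimeout)
            defer t.Stop()
            timer = t.C
        }

        select {
        case r := <-resc: // the read loop delivered a response (or an error)
            return r, r.err
        case <-timer: // mirrors ResponseHeaderTimeout
            return result{}, errors.New("timeout waiting for response headers")
        case <-ctx.Done(): // request canceled
            return result{}, ctx.Err()
        }
    }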

With the overall flow covered, let's review two important structures: persistConn, and the fields of Transport.

    // persistConn wraps a connection, usually a persistent one
    // (but may be used for non-keep-alive requests as well)
    type persistConn struct {
        // alt optionally specifies the TLS NextProto RoundTripper.
        // This is used for HTTP/2 today and future protocols later.
        // If it's non-nil, the rest of the fields are unused.
        alt RoundTripper

        t        *Transport
        cacheKey connectMethodKey // key of this connection; also the key into the idleConn map
        conn     net.Conn         // the wrapped conn object
        tlsState *tls.ConnectionState
        br       *bufio.Reader // from conn; bufio.Reader wrapping the conn
        bw       *bufio.Writer // to conn; bufio.Writer wrapping the conn
        nwrite   int64         // bytes written
        reqch    chan requestAndChan // written by roundTrip; read by readLoop: roundTrip hands each request to readLoop on this channel
        writech  chan writeRequest   // written by roundTrip; read by writeLoop: writeLoop takes write requests from it and performs the writes
        closech  chan struct{}       // closed when conn closed; signals connection shutdown
        isProxy  bool
        sawEOF   bool  // whether we've seen EOF from conn; owned by readLoop
        readLimit int64 // bytes allowed to be read; owned by readLoop
        // writeErrCh passes the request write error (usually nil)
        // from the writeLoop goroutine to the readLoop which passes
        // it off to the res.Body reader, which then uses it to decide
        // whether or not a connection can be reused. Issue 7569.
        writeErrCh chan error

        writeLoopDone chan struct{} // closed when write loop ends

        // Both guarded by Transport.idleMu:
        idleAt    time.Time   // time it last become idle
        idleTimer *time.Timer // holding an AfterFunc to close it

        mu                   sync.Mutex // guards following fields
        numExpectedResponses int        // number of responses currently expected on this connection
        closed               error      // set non-nil when conn is closed, before closech is closed
        canceledErr          error      // set non-nil if conn is canceled
        broken               bool       // an error has happened on this connection; marked broken so it's not reused.
        reused               bool       // whether conn has had successful request/response and is being reused.
        // mutateHeaderFunc is an optional func to modify extra
        // headers on each outbound request before it's written. (the
        // original Request given to RoundTrip is not modified)
        mutateHeaderFunc func(Header)
    }
    type Transport struct {
        idleMu     sync.Mutex // guards the idle connection pool below
        wantIdle   bool       // user has requested to close all idle conns
        idleConn   map[connectMethodKey][]*persistConn    // most recently used at end; the idle connection pool
        idleConnCh map[connectMethodKey]chan *persistConn // used to hand idle connections between goroutines: when the pool is empty but requests are still waiting, a connection that becomes idle is passed along on this channel
        idleLRU    connLRU

        reqMu       sync.Mutex
        reqCanceler map[*Request]func(error)

        altMu    sync.Mutex   // guards changing altProto only
        altProto atomic.Value // of nil or map[string]RoundTripper, key is URI scheme

        connCountMu          sync.Mutex
        connPerHostCount     map[connectMethodKey]int
        connPerHostAvailable map[connectMethodKey]chan struct{}

        // Proxy specifies a function to return a proxy for a given
        // Request. If the function returns a non-nil error, the
        // request is aborted with the provided error.
        //
        // The proxy type is determined by the URL scheme. "http",
        // "https", and "socks5" are supported. If the scheme is empty,
        // "http" is assumed.
        //
        // If Proxy is nil or returns a nil *URL, no proxy is used.
        Proxy func(*Request) (*url.URL, error)

        // DialContext specifies the dial function for creating unencrypted TCP connections.
        // If DialContext is nil (and the deprecated Dial below is also nil),
        // then the transport dials using package net.
        //
        // DialContext runs concurrently with calls to RoundTrip.
        // A RoundTrip call that initiates a dial may end up using
        // a connection dialed previously when the earlier connection
        // becomes idle before the later DialContext completes.
        DialContext func(ctx context.Context, network, addr string) (net.Conn, error) // used when dialing a new connection

        // Dial specifies the dial function for creating unencrypted TCP connections.
        //
        // Dial runs concurrently with calls to RoundTrip.
        // A RoundTrip call that initiates a dial may end up using
        // a connection dialed previously when the earlier connection
        // becomes idle before the later Dial completes.
        //
        // Deprecated: Use DialContext instead, which allows the transport
        // to cancel dials as soon as they are no longer needed.
        // If both are set, DialContext takes priority.
        Dial func(network, addr string) (net.Conn, error)

        // DialTLS specifies an optional dial function for creating
        // TLS connections for non-proxied HTTPS requests.
        //
        // If DialTLS is nil, Dial and TLSClientConfig are used.
        //
        // If DialTLS is set, the Dial hook is not used for HTTPS
        // requests and the TLSClientConfig and TLSHandshakeTimeout
        // are ignored. The returned net.Conn is assumed to already be
        // past the TLS handshake.
        DialTLS func(network, addr string) (net.Conn, error)

        // TLSClientConfig specifies the TLS configuration to use with
        // tls.Client.
        // If nil, the default configuration is used.
        // If non-nil, HTTP/2 support may not be enabled by default.
        TLSClientConfig *tls.Config

        // TLSHandshakeTimeout specifies the maximum amount of time waiting to
        // wait for a TLS handshake. Zero means no timeout.
        TLSHandshakeTimeout time.Duration

        // DisableKeepAlives, if true, disables HTTP keep-alives and
        // will only use the connection to the server for a single
        // HTTP request.
        //
        // This is unrelated to the similarly named TCP keep-alives.
        DisableKeepAlives bool

        // DisableCompression, if true, prevents the Transport from
        // requesting compression with an "Accept-Encoding: gzip"
        // request header when the Request contains no existing
        // Accept-Encoding value. If the Transport requests gzip on
        // its own and gets a gzipped response, it's transparently
        // decoded in the Response.Body. However, if the user
        // explicitly requested gzip it is not automatically
        // uncompressed.
        DisableCompression bool

        // MaxIdleConns controls the maximum number of idle (keep-alive)
        // connections across all hosts. Zero means no limit.
        MaxIdleConns int

        // MaxIdleConnsPerHost, if non-zero, controls the maximum idle
        // (keep-alive) connections to keep per-host. If zero,
        // DefaultMaxIdleConnsPerHost is used.
        MaxIdleConnsPerHost int

        // MaxConnsPerHost optionally limits the total number of
        // connections per host, including connections in the dialing,
        // active, and idle states. On limit violation, dials will block.
        //
        // Zero means no limit.
        //
        // For HTTP/2, this currently only controls the number of new
        // connections being created at a time, instead of the total
        // number. In practice, hosts using HTTP/2 only have about one
        // idle connection, though.
        MaxConnsPerHost int

        // IdleConnTimeout is the maximum amount of time an idle
        // (keep-alive) connection will remain idle before closing
        // itself.
        // Zero means no limit.
        IdleConnTimeout time.Duration

        // ResponseHeaderTimeout, if non-zero, specifies the amount of
        // time to wait for a server's response headers after fully
        // writing the request (including its body, if any). This
        // time does not include the time to read the response body.
        ResponseHeaderTimeout time.Duration

        // ExpectContinueTimeout, if non-zero, specifies the amount of
        // time to wait for a server's first response headers after fully
        // writing the request headers if the request has an
        // "Expect: 100-continue" header. Zero means no timeout and
        // causes the body to be sent immediately, without
        // waiting for the server to approve.
        // This time does not include the time to send the request header.
        ExpectContinueTimeout time.Duration

        // TLSNextProto specifies how the Transport switches to an
        // alternate protocol (such as HTTP/2) after a TLS NPN/ALPN
        // protocol negotiation. If Transport dials an TLS connection
        // with a non-empty protocol name and TLSNextProto contains a
        // map entry for that key (such as "h2"), then the func is
        // called with the request's authority (such as "example.com"
        // or "example.com:1234") and the TLS connection. The function
        // must return a RoundTripper that then handles the request.
        // If TLSNextProto is not nil, HTTP/2 support is not enabled
        // automatically.
        TLSNextProto map[string]func(authority string, c *tls.Conn) RoundTripper

        // ProxyConnectHeader optionally specifies headers to send to
        // proxies during CONNECT requests.
        ProxyConnectHeader Header

        // MaxResponseHeaderBytes specifies a limit on how many
        // response bytes are allowed in the server's response
        // header.
        //
        // Zero means to use a default limit.
        MaxResponseHeaderBytes int64

        // nextProtoOnce guards initialization of TLSNextProto and
        // h2transport (via onceSetNextProtoDefaults)
        nextProtoOnce sync.Once
        h2transport   h2Transport // non-nil if http2 wired up
    }

That is the whole flow, and it is actually quite clear. To wrap up:

Transport is responsible for establishing connections and maintains an idle connection pool, idleConn map[connectMethodKey][]*persistConn. Each entry is a persistConn: a concrete connection instance that carries the connection's context and runs two goroutines, readLoop and writeLoop. Whenever Transport.roundTrip is called, an idle persistConn is picked from the pool and its roundTrip method is invoked, which sends the write and read requests through channels to writeLoop and readLoop, and then selects on the various channels: connection closed, request timeout, writeLoop error, readLoop delivering a result, and so on. The request is written out in writeLoop; the response is read in readLoop and handed back to roundTrip over a channel, after which the connection puts itself back into idleConn and waits for the next request.
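To tie this back to everyday use, here is a sketch of a client whose Transport is tuned around the fields discussed above (the values are illustrative only, not recommendations from the source):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        tr := &http.Transport{
            MaxIdleConns:          100,              // idle conns across all hosts
            MaxIdleConnsPerHost:   10,               // idle conns kept per host (default is 2)
            MaxConnsPerHost:       20,               // total conns per host; dials block at the limit
            IdleConnTimeout:       90 * time.Second, // how long an idle conn may sit in idleConn
            ResponseHeaderTimeout: 5 * time.Second,  // time allowed for response headers after the write
        }
        client := &http.Client{Transport: tr, Timeout: 30 * time.Second}
        _ = client // reuse this client: each request goes through getConn / roundTrip as described above
    }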

Recommended reading

Go HTTP Client 持久连接 (persistent connections in the Go HTTP client)
