In the previous nginx article we walked through how the HTTP forward-proxy flow works: the event machinery accepting connections, header parsing, body parsing, then running through the various phase checkers, plus a detailed look at how the forward proxy itself is implemented. That already gave us a fairly deep view of nginx. The core matters, of course, but it is nginx's extension modules that really draw people in, and those are practically endless. That is both a blessing and a curse: a blessing because there is a module for almost everything, a curse because we cannot possibly dig into every one of them.

  Personally, I think nginx has at least two must-have capabilities: acting as an HTTP server (the forward-proxy role covered earlier) and acting as an HTTP reverse proxy (forwarding to backend services). Having worked out how the former is implemented, it is time to climb the second mountain.

0. Reverse proxy in plain words

  A reverse proxy does not serve content itself; it only plays the middleman. When a request comes in, it forwards that request to a target server according to known rules, and once the work is done it relays the result back to the client. From the client's point of view, nginx is the target server. What does this buy us? Quite a lot; to name a few benefits: it hides the differences between many backend servers, so callers do not have to worry about backend switching destabilising their own work; it hides the firewall restrictions of the internal network, since callers only need connectivity to nginx; and it gives a single, convenient place to manage and switch backends.

  Reverse proxying sounds great, and at first glance it is nothing more than forwarding, hardly a difficult job. But is that really the case?

  To judge how hard it is, consider what such a proxy server has to deliver:

    1. It must be able to forward to arbitrary target servers;
    2. It must handle any HTTP request (not just GET/POST);
    3. It must preserve information about the request's origin (such as the client IP);
    4. It must be high-performance and support high concurrency;

  The first few items look basic enough, and the last one sounds vague but reasonable. Yet building a genuinely high-performance, highly concurrent system by yourself is not that simple, and here it is a hard requirement: the proxy serves as the single entry point, and if it cannot keep up, no amount of capability downstream will help.

  Also, a general-purpose proxy server gets reconfigured all the time, so supporting dynamic configuration changes is another problem. We would usually back the configuration with a database, but introducing a database brings plenty of unknowns, and other configuration backends may not be as convenient.

  In short, good reverse-proxy servers are rare, and that is not without reason.

1. nginx reverse proxy configuration

  To enable reverse proxying, you only need a proxy_pass directive inside an http server block. (You can of course configure many different proxies and server blocks, keyed by URI prefix.)

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8085;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location /tohello {
            # forward the request to another server
            proxy_pass http://localhost:8081/hello;
            # preserve information about the original client
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
    # any number of additional server blocks can follow
}

  The configuration really is that simple; the core is just a handful of directives: listen for the port, server_name for the host name, root for the document root, and proxy_pass for the forwarding target. That simplicity is clearly one of the reasons nginx succeeded.

  The scenario for this article: when I request http://localhost:8085/tohello/getUsers?pageNum=1&pageSize=2, what I actually want is the service behind nginx. How does nginx make that happen?
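  Conceptually, the forwarding step replaces the matched location prefix (/tohello) with the URI part of proxy_pass (/hello) and rebuilds the request line for the upstream server. Below is a minimal plain-C sketch of that string manipulation; rewrite_uri and the hard-coded values are purely illustrative, this is not nginx's actual ngx_http_proxy_create_request code.

#include <stdio.h>
#include <string.h>

/* Replace the matched location prefix ("/tohello") with the URI part of
 * proxy_pass ("/hello") and keep the rest of the original request URI. */
static void rewrite_uri(const char *request_uri, const char *location,
                        const char *upstream_uri, char *out, size_t out_len)
{
    size_t loc_len = strlen(location);

    if (strncmp(request_uri, location, loc_len) == 0) {
        snprintf(out, out_len, "%s%s", upstream_uri, request_uri + loc_len);
    } else {
        snprintf(out, out_len, "%s", request_uri);
    }
}

int main(void)
{
    char uri[256];

    rewrite_uri("/tohello/getUsers?pageNum=1&pageSize=2",
                "/tohello", "/hello", uri, sizeof(uri));

    /* prints the request line we expect nginx to send to localhost:8081 */
    printf("GET %s HTTP/1.1\r\nHost: localhost:8081\r\n\r\n", uri);
    return 0;
}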

2. Registering the proxy module

  Essentially all of the reverse-proxy functionality is gathered in proxy_module. Compared with the static-file module, though, proxy is considerably more complex, and even its registration is more involved than static_module's. It looks like this:

// http/modules/ngx_http_proxy_module.c
ngx_module_t  ngx_http_proxy_module;

// all supported directives; see the full table in the next code block
static ngx_command_t  ngx_http_proxy_commands[] = {

    { ngx_string("proxy_pass"),
      NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,
      ngx_http_proxy_pass,
      NGX_HTTP_LOC_CONF_OFFSET,
      0,
      NULL },

    { ngx_string("proxy_set_header"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
      ngx_conf_set_keyval_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, headers_source),
      NULL },

    ...

      ngx_null_command
};

static ngx_http_module_t  ngx_http_proxy_module_ctx = {
    ngx_http_proxy_add_variables,          /* preconfiguration */
    NULL,                                  /* postconfiguration */

    ngx_http_proxy_create_main_conf,       /* create main configuration */
    NULL,                                  /* init main configuration */

    NULL,                                  /* create server configuration */
    NULL,                                  /* merge server configuration */

    ngx_http_proxy_create_loc_conf,        /* create location configuration */
    ngx_http_proxy_merge_loc_conf          /* merge location configuration */
};

// the module definition exposed to the nginx core
ngx_module_t  ngx_http_proxy_module = {
    NGX_MODULE_V1,
    &ngx_http_proxy_module_ctx,            /* module context */
    ngx_http_proxy_commands,               /* module directives */
    NGX_HTTP_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    NULL,                                  /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    NULL,                                  /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};

  The full directive table is reproduced below.

// all supported directive definitions
static ngx_command_t ngx_http_proxy_commands[] = { { ngx_string("proxy_pass"),
NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF|NGX_CONF_TAKE1,
ngx_http_proxy_pass,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_redirect"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,
ngx_http_proxy_redirect,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_cookie_domain"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,
ngx_http_proxy_cookie_domain,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_cookie_path"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,
ngx_http_proxy_cookie_path,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_store"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_proxy_store,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_store_access"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE123,
ngx_conf_set_access_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.store_access),
NULL }, { ngx_string("proxy_buffering"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.buffering),
NULL }, { ngx_string("proxy_request_buffering"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.request_buffering),
NULL }, { ngx_string("proxy_ignore_client_abort"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_client_abort),
NULL }, { ngx_string("proxy_bind"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12,
ngx_http_upstream_bind_set_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.local),
NULL }, { ngx_string("proxy_socket_keepalive"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.socket_keepalive),
NULL }, { ngx_string("proxy_connect_timeout"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.connect_timeout),
NULL }, { ngx_string("proxy_send_timeout"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.send_timeout),
NULL }, { ngx_string("proxy_send_lowat"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.send_lowat),
&ngx_http_proxy_lowat_post }, { ngx_string("proxy_intercept_errors"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.intercept_errors),
NULL }, { ngx_string("proxy_set_header"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
ngx_conf_set_keyval_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, headers_source),
NULL }, { ngx_string("proxy_headers_hash_max_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, headers_hash_max_size),
NULL }, { ngx_string("proxy_headers_hash_bucket_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, headers_hash_bucket_size),
NULL }, { ngx_string("proxy_set_body"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, body_source),
NULL }, { ngx_string("proxy_method"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_set_complex_value_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, method),
NULL }, { ngx_string("proxy_pass_request_headers"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_headers),
NULL }, { ngx_string("proxy_pass_request_body"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_request_body),
NULL }, { ngx_string("proxy_buffer_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.buffer_size),
NULL }, { ngx_string("proxy_read_timeout"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.read_timeout),
NULL }, { ngx_string("proxy_buffers"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2,
ngx_conf_set_bufs_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.bufs),
NULL }, { ngx_string("proxy_busy_buffers_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.busy_buffers_size_conf),
NULL }, { ngx_string("proxy_force_ranges"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.force_ranges),
NULL }, { ngx_string("proxy_limit_rate"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.limit_rate),
NULL },

#if (NGX_HTTP_CACHE)

{ ngx_string("proxy_cache"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_proxy_cache,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_cache_key"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_proxy_cache_key,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL }, { ngx_string("proxy_cache_path"),
NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,
ngx_http_file_cache_set_slot,
NGX_HTTP_MAIN_CONF_OFFSET,
offsetof(ngx_http_proxy_main_conf_t, caches),
&ngx_http_proxy_module }, { ngx_string("proxy_cache_bypass"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_http_set_predicate_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_bypass),
NULL }, { ngx_string("proxy_no_cache"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_http_set_predicate_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.no_cache),
NULL }, { ngx_string("proxy_cache_valid"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_http_file_cache_valid_set_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_valid),
NULL }, { ngx_string("proxy_cache_min_uses"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_min_uses),
NULL }, { ngx_string("proxy_cache_max_range_offset"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_off_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_max_range_offset),
NULL }, { ngx_string("proxy_cache_use_stale"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_conf_set_bitmask_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_use_stale),
&ngx_http_proxy_next_upstream_masks }, { ngx_string("proxy_cache_methods"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_conf_set_bitmask_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_methods),
&ngx_http_upstream_cache_method_mask }, { ngx_string("proxy_cache_lock"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock),
NULL }, { ngx_string("proxy_cache_lock_timeout"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_timeout),
NULL }, { ngx_string("proxy_cache_lock_age"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_age),
NULL }, { ngx_string("proxy_cache_revalidate"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_revalidate),
NULL }, { ngx_string("proxy_cache_convert_head"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_convert_head),
NULL }, { ngx_string("proxy_cache_background_update"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_background_update),
NULL },

#endif

{ ngx_string("proxy_temp_path"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1234,
ngx_conf_set_path_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_path),
NULL }, { ngx_string("proxy_max_temp_file_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.max_temp_file_size_conf),
NULL }, { ngx_string("proxy_temp_file_write_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.temp_file_write_size_conf),
NULL }, { ngx_string("proxy_next_upstream"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_conf_set_bitmask_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream),
&ngx_http_proxy_next_upstream_masks }, { ngx_string("proxy_next_upstream_tries"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_tries),
NULL }, { ngx_string("proxy_next_upstream_timeout"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_msec_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.next_upstream_timeout),
NULL }, { ngx_string("proxy_pass_header"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_array_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.pass_headers),
NULL }, { ngx_string("proxy_hide_header"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_array_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.hide_headers),
NULL }, { ngx_string("proxy_ignore_headers"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_conf_set_bitmask_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_headers),
&ngx_http_upstream_ignore_headers_masks }, { ngx_string("proxy_http_version"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_enum_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, http_version),
&ngx_http_proxy_http_version },

#if (NGX_HTTP_SSL)

{ ngx_string("proxy_ssl_session_reuse"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
NULL }, { ngx_string("proxy_ssl_protocols"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE,
ngx_conf_set_bitmask_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols),
&ngx_http_proxy_ssl_protocols }, { ngx_string("proxy_ssl_ciphers"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers),
NULL }, { ngx_string("proxy_ssl_name"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_set_complex_value_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_name),
NULL }, { ngx_string("proxy_ssl_server_name"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_server_name),
NULL }, { ngx_string("proxy_ssl_verify"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),
NULL }, { ngx_string("proxy_ssl_verify_depth"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_verify_depth),
NULL }, { ngx_string("proxy_ssl_trusted_certificate"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_trusted_certificate),
NULL }, { ngx_string("proxy_ssl_crl"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_crl),
NULL }, { ngx_string("proxy_ssl_certificate"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate),
NULL }, { ngx_string("proxy_ssl_certificate_key"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_conf_set_str_slot,
NGX_HTTP_LOC_CONF_OFFSET,
offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_key),
NULL }, { ngx_string("proxy_ssl_password_file"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_proxy_ssl_password_file,
NGX_HTTP_LOC_CONF_OFFSET,
0,
NULL },

#endif

ngx_null_command
};

  The proxy module exposes a very large number of directives, i.e. a huge configuration surface: different proxying modes, custom variables, caching, cookies, SSL, upstream tuning and so on. That is what makes this module so complex.

  We are not going to cover all of it (we could not), only the broad strokes: how the request headers are set and how the request is forwarded.
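  One thing worth pausing on before moving to the handler itself is the registration pattern: every ngx_command_t entry pairs a directive name with a setter callback and an offsetof() into the module's configuration struct, so a generic parser can fill in arbitrary fields without knowing what they mean. The following toy imitation of that pattern uses hypothetical names (loc_conf_t, set_number_slot) and is only a sketch of the idea, not nginx code.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A toy location config, standing in for ngx_http_proxy_loc_conf_t. */
typedef struct {
    long connect_timeout;   /* milliseconds */
    long buffer_size;       /* bytes */
} loc_conf_t;

/* One directive: name, setter callback and the field offset it writes to,
 * mirroring the { ngx_string(...), flags, handler, offset } layout. */
typedef struct {
    const char *name;
    void      (*set)(loc_conf_t *conf, size_t offset, const char *value);
    size_t      offset;
} command_t;

static void set_number_slot(loc_conf_t *conf, size_t offset, const char *value)
{
    /* Write into the struct through the recorded offset, the way generic
     * setters such as ngx_conf_set_msec_slot / ngx_conf_set_size_slot do. */
    *(long *) ((char *) conf + offset) = strtol(value, NULL, 10);
}

static const command_t commands[] = {
    { "proxy_connect_timeout", set_number_slot, offsetof(loc_conf_t, connect_timeout) },
    { "proxy_buffer_size",     set_number_slot, offsetof(loc_conf_t, buffer_size) },
    { NULL, NULL, 0 }
};

int main(void)
{
    loc_conf_t conf = { 0, 0 };

    /* Pretend the parser just met "proxy_connect_timeout 60000;". */
    for (const command_t *cmd = commands; cmd->name; cmd++) {
        if (strcmp(cmd->name, "proxy_connect_timeout") == 0) {
            cmd->set(&conf, cmd->offset, "60000");
        }
    }

    printf("connect_timeout = %ld ms\n", conf.connect_timeout);
    return 0;
}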

3. Core proxy implementation

  Proxy handling is one branch of content processing, so it is still driven by the content phase (ngx_http_core_content_phase). The difference is that it is invoked through r->content_handler rather than as an ordinary phase handler.

// http/ngx_http_core_module.c
ngx_int_t
ngx_http_core_content_phase(ngx_http_request_t *r,
    ngx_http_phase_handler_t *ph)
{
    size_t     root;
    ngx_int_t  rc;
    ngx_str_t  path;

    if (r->content_handler) {
        r->write_event_handler = ngx_http_request_empty_handler;
        // call the content handler, which performs the proxying
        ngx_http_finalize_request(r, r->content_handler(r));
        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "content phase: %ui", r->phase_handler);

    rc = ph->handler(r);

    if (rc != NGX_DECLINED) {
        ngx_http_finalize_request(r, rc);
        return NGX_OK;
    }

    /* rc == NGX_DECLINED */

    ph++;

    if (ph->checker) {
        r->phase_handler++;
        return NGX_AGAIN;
    }

    /* no content handler was found */

    if (r->uri.data[r->uri.len - 1] == '/') {

        if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) {
            ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                          "directory index of \"%s\" is forbidden", path.data);
        }

        ngx_http_finalize_request(r, NGX_HTTP_FORBIDDEN);
        return NGX_OK;
    }

    ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no handler found");

    ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND);
    return NGX_OK;
}

  With the directives above registered, parsing a proxy_pass directive (ngx_http_proxy_pass) installs ngx_http_proxy_handler as the location's content handler, which is where proxying begins.

// http/modules/ngx_http_proxy_module.c
// entry point of the proxy feature
static ngx_int_t
ngx_http_proxy_handler(ngx_http_request_t *r)
{
    ngx_int_t                    rc;
    ngx_http_upstream_t         *u;
    ngx_http_proxy_ctx_t        *ctx;
    ngx_http_proxy_loc_conf_t   *plcf;
#if (NGX_HTTP_CACHE)
    ngx_http_proxy_main_conf_t  *pmcf;
#endif

    // create the upstream, i.e. prepare the forwarding stream
    if (ngx_http_upstream_create(r) != NGX_OK) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_proxy_ctx_t));
    if (ctx == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    // store the freshly created context in r->ctx
    // r->ctx[ngx_http_proxy_module.ctx_index] = ctx;
    ngx_http_set_ctx(r, ctx, ngx_http_proxy_module);

    plcf = ngx_http_get_module_loc_conf(r, ngx_http_proxy_module);

    // fill in the upstream information
    u = r->upstream;

    if (plcf->proxy_lengths == NULL) {
        // {key_start = {len = 21, data = 0x8000b4110 "http://localhost:8081/hello"},
        //  schema = {len = 7, data = 0x8000b4110 "http://localhost:8081/hello"},
        //  host_header = {len = 14, data = 0x8000b4117 "localhost:8081/hello"},
        //  port = {len = 4, data = 0x8000b4121 "8081/hello"},
        //  uri = {len = 6, data = 0x8000b4125 "/hello"}}
        ctx->vars = plcf->vars;
        u->schema = plcf->vars.schema;
#if (NGX_HTTP_SSL)
        u->ssl = (plcf->upstream.ssl != NULL);
#endif

    } else {
        if (ngx_http_proxy_eval(r, ctx, plcf) != NGX_OK) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }
    }

    u->output.tag = (ngx_buf_tag_t) &ngx_http_proxy_module;

    u->conf = &plcf->upstream;

#if (NGX_HTTP_CACHE)
    pmcf = ngx_http_get_module_main_conf(r, ngx_http_proxy_module);

    u->caches = &pmcf->caches;
    u->create_key = ngx_http_proxy_create_key;
#endif

    u->create_request = ngx_http_proxy_create_request;
    u->reinit_request = ngx_http_proxy_reinit_request;
    u->process_header = ngx_http_proxy_process_status_line;
    u->abort_request = ngx_http_proxy_abort_request;
    u->finalize_request = ngx_http_proxy_finalize_request;
    r->state = 0;

    // redirect rewriting
    if (plcf->redirects) {
        u->rewrite_redirect = ngx_http_proxy_rewrite_redirect;
    }

    if (plcf->cookie_domains || plcf->cookie_paths) {
        u->rewrite_cookie = ngx_http_proxy_rewrite_cookie;
    }

    u->buffering = plcf->upstream.buffering;

    u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t));
    if (u->pipe == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    u->pipe->input_filter = ngx_http_proxy_copy_filter;
    u->pipe->input_ctx = r;

    u->input_filter_init = ngx_http_proxy_input_filter_init;
    u->input_filter = ngx_http_proxy_non_buffered_copy_filter;
    u->input_filter_ctx = r;

    u->accel = 1;

    if (!plcf->upstream.request_buffering
        && plcf->body_values == NULL && plcf->upstream.pass_request_body
        && (!r->headers_in.chunked
            || plcf->http_version == NGX_HTTP_VERSION_11))
    {
        r->request_body_no_buffering = 1;
    }

    // important: read the client request body;
    // ngx_http_upstream_init is passed in as the post handler
    rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return NGX_DONE;
}

  The whole flow looks more like preparation than forwarding: nothing here actually sends anything to the target yet, and it is clearly tied very closely to upstream, so understanding proxy probably means understanding upstream as well. The ngx_http_upstream_create() call used above is itself very simple, as shown next; the real work is pushed into ngx_http_read_client_request_body.

// http/ngx_http_upstream.c
ngx_int_t
ngx_http_upstream_create(ngx_http_request_t *r)
{
    ngx_http_upstream_t  *u;

    u = r->upstream;

    if (u && u->cleanup) {
        r->main->count++;
        ngx_http_upstream_cleanup(r);
    }

    u = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_t));
    if (u == NULL) {
        return NGX_ERROR;
    }

    r->upstream = u;

    u->peer.log = r->connection->log;
    u->peer.log_error = NGX_ERROR_ERR;

#if (NGX_HTTP_CACHE)
    r->cache = NULL;
#endif

    // the content length is initialised to -1 so later code can tell
    // "no body / unknown length" apart from a real length
    u->headers_in.content_length_n = -1;
    u->headers_in.last_modified_time = -1;

    return NGX_OK;
}

  To be continued in the next section.
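  It is worth pausing on the design here: ngx_http_proxy_handler never forwards anything itself, it only fills in a table of callbacks (create_request, process_header, finalize_request and friends) that the generic upstream engine will drive later; fastcgi, uwsgi and the other upstream-based modules do the same. The sketch below imitates that callback-table idea with toy types and print statements; it is an illustration of the pattern, not nginx's real structures.

#include <stdio.h>

/* Toy stand-in for ngx_http_upstream_t: the generic engine only knows these
 * callbacks; each protocol module (proxy, fastcgi, ...) plugs its own in. */
typedef struct {
    int  (*create_request)(void *ctx);     /* build the request for the upstream    */
    int  (*process_header)(void *ctx);     /* parse the upstream status line/header */
    void (*finalize_request)(void *ctx, int rc);
    void  *ctx;
} upstream_t;

/* "proxy module" side: concrete implementations of the callbacks. */
static int proxy_create_request(void *ctx)
{
    (void) ctx;
    printf("proxy: build \"GET /hello HTTP/1.1\" for the upstream\n");
    return 0;
}

static int proxy_process_header(void *ctx)
{
    (void) ctx;
    printf("proxy: parse \"HTTP/1.1 200 OK\" from the upstream\n");
    return 0;
}

static void proxy_finalize(void *ctx, int rc)
{
    (void) ctx;
    printf("proxy: finalize, rc = %d\n", rc);
}

/* Generic engine: it never mentions "proxy" anywhere, it just drives callbacks. */
static void upstream_run(upstream_t *u)
{
    if (u->create_request(u->ctx) != 0) { u->finalize_request(u->ctx, -1); return; }
    /* ... connect, send, wait for readability (omitted) ... */
    if (u->process_header(u->ctx) != 0) { u->finalize_request(u->ctx, -1); return; }
    u->finalize_request(u->ctx, 0);
}

int main(void)
{
    upstream_t u = { proxy_create_request, proxy_process_header, proxy_finalize, NULL };
    upstream_run(&u);
    return 0;
}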

4. Request forwarding in detail

  Once control passes to the generic HTTP machinery above, things change shape. The rough flow is: ngx_http_read_client_request_body -> ngx_http_upstream_init -> ngx_http_upstream_init_request -> ngx_http_upstream_connect -> ngx_http_upstream_send_request -> ngx_http_upstream_send_request_body -> ngx_handle_write_event -> write the data to the target server -> wait asynchronously for the target server's response -> respond to the client.

// http/ngx_http_request_body.c
// generic routine that reads the request body and then continues processing
ngx_int_t
ngx_http_read_client_request_body(ngx_http_request_t *r,
ngx_http_client_body_handler_pt post_handler)
{
size_t preread;
ssize_t size;
ngx_int_t rc;
ngx_buf_t *b;
ngx_chain_t out;
ngx_http_request_body_t *rb;
ngx_http_core_loc_conf_t *clcf; r->main->count++; if (r != r->main || r->request_body || r->discard_body) {
r->request_body_no_buffering = 0;
post_handler(r);
return NGX_OK;
} if (ngx_http_test_expect(r) != NGX_OK) {
rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
goto done;
} rb = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t));
if (rb == NULL) {
rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
goto done;
} /*
* set by ngx_pcalloc():
*
* rb->bufs = NULL;
* rb->buf = NULL;
* rb->free = NULL;
* rb->busy = NULL;
* rb->chunked = NULL;
*/ rb->rest = -1;
rb->post_handler = post_handler; r->request_body = rb; if (r->headers_in.content_length_n < 0 && !r->headers_in.chunked) {
r->request_body_no_buffering = 0;
// content_length_n is still -1 here for our proxied GET, so we go straight
// through this branch, i.e. hand over to upstream processing
post_handler(r);
return NGX_OK;
}
...
} // http/ngx_http_upstream.c
void
ngx_http_upstream_init(ngx_http_request_t *r)
{
ngx_connection_t *c; c = r->connection; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http init upstream, client timer: %d", c->read->timer_set); #if (NGX_HTTP_V2)
if (r->stream) {
ngx_http_upstream_init_request(r);
return;
}
#endif if (c->read->timer_set) {
ngx_del_timer(c->read);
} if (ngx_event_flags & NGX_USE_CLEAR_EVENT) { if (!c->write->active) {
if (ngx_add_event(c->write, NGX_WRITE_EVENT, NGX_CLEAR_EVENT)
== NGX_ERROR)
{
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
}
}
}
ngx_http_upstream_init_request(r);
} // http/ngx_http_upstream.c
static void
ngx_http_upstream_init_request(ngx_http_request_t *r)
{
ngx_str_t *host;
ngx_uint_t i;
ngx_resolver_ctx_t *ctx, temp;
ngx_http_cleanup_t *cln;
ngx_http_upstream_t *u;
ngx_http_core_loc_conf_t *clcf;
ngx_http_upstream_srv_conf_t *uscf, **uscfp;
ngx_http_upstream_main_conf_t *umcf; if (r->aio) {
return;
} u = r->upstream; #if (NGX_HTTP_CACHE) if (u->conf->cache) {
ngx_int_t rc; rc = ngx_http_upstream_cache(r, u); if (rc == NGX_BUSY) {
r->write_event_handler = ngx_http_upstream_init_request;
return;
} r->write_event_handler = ngx_http_request_empty_handler; if (rc == NGX_ERROR) {
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (rc == NGX_OK) {
rc = ngx_http_upstream_cache_send(r, u); if (rc == NGX_DONE) {
return;
} if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
rc = NGX_DECLINED;
r->cached = 0;
u->buffer.start = NULL;
u->cache_status = NGX_HTTP_CACHE_MISS;
u->request_sent = 1;
}
} if (rc != NGX_DECLINED) {
ngx_http_finalize_request(r, rc);
return;
}
} #endif u->store = u->conf->store; if (!u->store && !r->post_action && !u->conf->ignore_client_abort) {
r->read_event_handler = ngx_http_upstream_rd_check_broken_connection;
r->write_event_handler = ngx_http_upstream_wr_check_broken_connection;
} if (r->request_body) {
u->request_bufs = r->request_body->bufs;
}
// build the proxied request; create_request here is ngx_http_proxy_create_request
// {len = 3, data = 0x8000a9700 "GET /tohello/getUsers?pageNum=1&pageSize=2 HTTP/1.1\r\nHost"}
if (u->create_request(r) != NGX_OK) {
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (ngx_http_upstream_set_local(r, u, u->conf->local) != NGX_OK) {
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (u->conf->socket_keepalive) {
u->peer.so_keepalive = 1;
} clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); u->output.alignment = clcf->directio_alignment;
u->output.pool = r->pool;
u->output.bufs.num = 1;
u->output.bufs.size = clcf->client_body_buffer_size; if (u->output.output_filter == NULL) {
u->output.output_filter = ngx_chain_writer;
u->output.filter_ctx = &u->writer;
} u->writer.pool = r->pool; if (r->upstream_states == NULL) { r->upstream_states = ngx_array_create(r->pool, 1,
sizeof(ngx_http_upstream_state_t));
if (r->upstream_states == NULL) {
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} } else { u->state = ngx_array_push(r->upstream_states);
if (u->state == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t));
}
// register a cleanup handler
cln = ngx_http_cleanup_add(r, 0);
if (cln == NULL) {
ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} cln->handler = ngx_http_upstream_cleanup;
cln->data = r;
u->cleanup = &cln->handler; if (u->resolved == NULL) { uscf = u->conf->upstream; } else { #if (NGX_HTTP_SSL)
u->ssl_name = u->resolved->host;
#endif host = &u->resolved->host; umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); uscfp = umcf->upstreams.elts; for (i = 0; i < umcf->upstreams.nelts; i++) { uscf = uscfp[i]; if (uscf->host.len == host->len
&& ((uscf->port == 0 && u->resolved->no_port)
|| uscf->port == u->resolved->port)
&& ngx_strncasecmp(uscf->host.data, host->data, host->len) == 0)
{
goto found;
}
} if (u->resolved->sockaddr) { if (u->resolved->port == 0
&& u->resolved->sockaddr->sa_family != AF_UNIX)
{
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
"no port in upstream \"%V\"", host);
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (ngx_http_upstream_create_round_robin_peer(r, u->resolved)
!= NGX_OK)
{
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} ngx_http_upstream_connect(r, u); return;
} if (u->resolved->port == 0) {
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
"no port in upstream \"%V\"", host);
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} temp.name = *host; ctx = ngx_resolve_start(clcf->resolver, &temp);
if (ctx == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (ctx == NGX_NO_RESOLVER) {
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
"no resolver defined to resolve %V", host); ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);
return;
} ctx->name = *host;
ctx->handler = ngx_http_upstream_resolve_handler;
ctx->data = r;
ctx->timeout = clcf->resolver_timeout; u->resolved->ctx = ctx; if (ngx_resolve_name(ctx) != NGX_OK) {
u->resolved->ctx = NULL;
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} return;
} found: if (uscf == NULL) {
ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
"no upstream configuration");
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} u->upstream = uscf; #if (NGX_HTTP_SSL)
u->ssl_name = uscf->host;
#endif
// initialise the peer data
// by default: ngx_http_upstream_init_round_robin_peer
if (uscf->peer.init(r, uscf) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} u->peer.start_time = ngx_current_msec; if (u->conf->next_upstream_tries
&& u->peer.tries > u->conf->next_upstream_tries)
{
u->peer.tries = u->conf->next_upstream_tries;
}
// connect to the upstream; the default balancer is round-robin
ngx_http_upstream_connect(r, u);
} // http/ngx_http_upstream.c
static void
ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
ngx_int_t rc;
ngx_connection_t *c; r->connection->log->action = "connecting to upstream"; if (u->state && u->state->response_time == (ngx_msec_t) -1) {
u->state->response_time = ngx_current_msec - u->start_time;
} u->state = ngx_array_push(r->upstream_states);
if (u->state == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); u->start_time = ngx_current_msec; u->state->response_time = (ngx_msec_t) -1;
u->state->connect_time = (ngx_msec_t) -1;
u->state->header_time = (ngx_msec_t) -1;
// initiate the non-blocking connection to the peer socket
rc = ngx_event_connect_peer(&u->peer); ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"http upstream connect: %i", rc); if (rc == NGX_ERROR) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} u->state->peer = u->peer.name; if (rc == NGX_BUSY) {
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no live upstreams");
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_NOLIVE);
return;
} if (rc == NGX_DECLINED) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
return;
} /* rc == NGX_OK || rc == NGX_AGAIN || rc == NGX_DONE */ c = u->peer.connection; c->requests++; c->data = r;
// both the read and the write event handler are set to ngx_http_upstream_handler
c->write->handler = ngx_http_upstream_handler;
c->read->handler = ngx_http_upstream_handler; u->write_event_handler = ngx_http_upstream_send_request_handler;
u->read_event_handler = ngx_http_upstream_process_header; c->sendfile &= r->connection->sendfile;
u->output.sendfile = c->sendfile; if (r->connection->tcp_nopush == NGX_TCP_NOPUSH_DISABLED) {
c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
} if (c->pool == NULL) { /* we need separate pool here to be able to cache SSL connections */ c->pool = ngx_create_pool(128, r->connection->log);
if (c->pool == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
}
} c->log = r->connection->log;
c->pool->log = c->log;
c->read->log = c->log;
c->write->log = c->log; /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */ u->writer.out = NULL;
u->writer.last = &u->writer.out;
u->writer.connection = c;
u->writer.limit = 0; if (u->request_sent) {
if (ngx_http_upstream_reinit(r, u) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
}
} if (r->request_body
&& r->request_body->buf
&& r->request_body->temp_file
&& r == r->main)
{
/*
* the r->request_body->buf can be reused for one request only,
* the subrequests should allocate their own temporary bufs
*/ u->output.free = ngx_alloc_chain_link(r->pool);
if (u->output.free == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} u->output.free->buf = r->request_body->buf;
u->output.free->next = NULL;
u->output.allocated = 1; r->request_body->buf->pos = r->request_body->buf->start;
r->request_body->buf->last = r->request_body->buf->start;
r->request_body->buf->tag = u->output.tag;
} u->request_sent = 0;
u->request_body_sent = 0;
u->request_body_blocked = 0;
// not finished yet: wait for the next event notification
if (rc == NGX_AGAIN) {
ngx_add_timer(c->write, u->conf->connect_timeout);
return;
} #if (NGX_HTTP_SSL) if (u->ssl && c->ssl == NULL) {
ngx_http_upstream_ssl_init_connection(r, u, c);
return;
} #endif
// send the request to the target server
ngx_http_upstream_send_request(r, u, 1);
} // http/ngx_http_upstream.c
static void
ngx_http_upstream_send_request(ngx_http_request_t *r, ngx_http_upstream_t *u,
ngx_uint_t do_write)
{
ngx_int_t rc;
ngx_connection_t *c; c = u->peer.connection; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http upstream send request"); if (u->state->connect_time == (ngx_msec_t) -1) {
u->state->connect_time = ngx_current_msec - u->start_time;
}
// verify the connection is actually usable before sending
if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
return;
} c->log->action = "sending request to upstream";
// send the request data
rc = ngx_http_upstream_send_request_body(r, u, do_write); if (rc == NGX_ERROR) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
return;
} if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
ngx_http_upstream_finalize_request(r, u, rc);
return;
} if (rc == NGX_AGAIN) {
if (!c->write->ready || u->request_body_blocked) {
ngx_add_timer(c->write, u->conf->send_timeout); } else if (c->write->timer_set) {
ngx_del_timer(c->write);
} if (ngx_handle_write_event(c->write, u->conf->send_lowat) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (c->write->ready && c->tcp_nopush == NGX_TCP_NOPUSH_SET) {
if (ngx_tcp_push(c->fd) == -1) {
ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,
ngx_tcp_push_n " failed");
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;
} return;
} /* rc == NGX_OK */ if (c->write->timer_set) {
ngx_del_timer(c->write);
} if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) {
if (ngx_tcp_push(c->fd) == -1) {
ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,
ngx_tcp_push_n " failed");
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;
} if (!u->conf->preserve_output) {
u->write_event_handler = ngx_http_upstream_dummy_handler;
}
// re-register the write event
if (ngx_handle_write_event(c->write, 0) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (!u->request_body_sent) {
u->request_body_sent = 1; if (u->header_sent) {
return;
}
// arm the read timer so a silent upstream eventually times out
ngx_add_timer(c->read, u->conf->read_timeout);
// if the response is already readable, process it now;
// otherwise wait asynchronously for the read event
if (c->read->ready) {
ngx_http_upstream_process_header(r, u);
return;
}
}
}

  That is the broad flow, and everything so far has run synchronously. Writing the data to the target server and waiting for its response, however, follow the asynchronous, non-blocking path: the event registrations made above mean processing resumes only when the corresponding events become ready.
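  The NGX_AGAIN branch in ngx_http_upstream_connect is just the standard non-blocking connect() pattern: the socket is made non-blocking, connect() returns immediately with EINPROGRESS, and the caller arms a timer and waits for the write-readiness event instead of blocking. A minimal POSIX sketch of that pattern follows; the 127.0.0.1:8081 backend address is only an example.

#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Make the socket non-blocking before connecting. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8081);                 /* example upstream port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        if (errno == EINPROGRESS) {
            /* Equivalent of rc == NGX_AGAIN: the connection is underway.
             * nginx registers the fd for write readiness and arms
             * connect_timeout instead of blocking here. */
            printf("connect in progress, wait for write readiness\n");
        } else {
            perror("connect");                   /* NGX_ERROR / NGX_DECLINED path */
        }
    } else {
        printf("connected immediately\n");       /* rc == NGX_OK path */
    }

    close(fd);
    return 0;
}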

5. The handler for the subsequent asynchronous events

  By the end of the first pass through the proxy code, all the context and the target server have been prepared. But the target server or the network may be slow, so the rest is handled asynchronously along a different path, with ngx_http_upstream_handler as the event handler.

// http/ngx_http_upstream.c
static void
ngx_http_upstream_handler(ngx_event_t *ev)
{
ngx_connection_t *c;
ngx_http_request_t *r;
ngx_http_upstream_t *u; c = ev->data;
r = c->data; u = r->upstream;
c = r->connection; ngx_http_set_log_request(c->log, r); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http upstream request: \"%V?%V\"", &r->uri, &r->args); if (ev->delayed && ev->timedout) {
ev->delayed = 0;
ev->timedout = 0;
}
// dispatch on write readiness vs. read readiness
if (ev->write) {
u->write_event_handler(r, u); } else {
u->read_event_handler(r, u);
} ngx_http_run_posted_requests(c);
} // receive the target server's response and process it
// http/ngx_http_upstream.c
static void
ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
ssize_t n;
ngx_int_t rc;
ngx_connection_t *c; c = u->peer.connection; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http upstream process header"); c->log->action = "reading response header from upstream";
// timeout handling
if (c->read->timedout) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_TIMEOUT);
return;
} if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
return;
} if (u->buffer.start == NULL) {
u->buffer.start = ngx_palloc(r->pool, u->conf->buffer_size);
if (u->buffer.start == NULL) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} u->buffer.pos = u->buffer.start;
u->buffer.last = u->buffer.start;
u->buffer.end = u->buffer.start + u->conf->buffer_size;
u->buffer.temporary = 1; u->buffer.tag = u->output.tag; if (ngx_list_init(&u->headers_in.headers, r->pool, 8,
sizeof(ngx_table_elt_t))
!= NGX_OK)
{
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} if (ngx_list_init(&u->headers_in.trailers, r->pool, 2,
sizeof(ngx_table_elt_t))
!= NGX_OK)
{
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} #if (NGX_HTTP_CACHE) if (r->cache) {
u->buffer.pos += r->cache->header_start;
u->buffer.last = u->buffer.pos;
}
#endif
} for ( ;; ) {
// read the data returned by the target server in a loop
n = c->recv(c, u->buffer.last, u->buffer.end - u->buffer.last); if (n == NGX_AGAIN) {
#if 0
ngx_add_timer(rev, u->read_timeout);
#endif if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} return;
} if (n == 0) {
ngx_log_error(NGX_LOG_ERR, c->log, 0,
"upstream prematurely closed connection");
} if (n == NGX_ERROR || n == 0) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
return;
} u->state->bytes_received += n; u->buffer.last += n; #if 0
u->valid_header_in = 0; u->peer.cached = 0;
#endif
// process the response header;
// process_header was set earlier to ngx_http_proxy_process_status_line
rc = u->process_header(r); if (rc == NGX_AGAIN) { if (u->buffer.last == u->buffer.end) {
ngx_log_error(NGX_LOG_ERR, c->log, 0,
"upstream sent too big header"); ngx_http_upstream_next(r, u,
NGX_HTTP_UPSTREAM_FT_INVALID_HEADER);
return;
} continue;
} break;
} if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_INVALID_HEADER);
return;
} if (rc == NGX_ERROR) {
ngx_http_upstream_finalize_request(r, u,
NGX_HTTP_INTERNAL_SERVER_ERROR);
return;
} /* rc == NGX_OK */ u->state->header_time = ngx_current_msec - u->start_time; if (u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) { if (ngx_http_upstream_test_next(r, u) == NGX_OK) {
return;
} if (ngx_http_upstream_intercept_errors(r, u) == NGX_OK) {
return;
}
}
// post-process the header fields
if (ngx_http_upstream_process_headers(r, u) != NGX_OK) {
return;
}
// send the response to the client
ngx_http_upstream_send_response(r, u);
}

  For example, when the target server becomes writable the kernel delivers an I/O event, which triggers the write and sends the data to the target server. After that a read event is registered, so once the target has produced its response nginx receives another readiness notification and continues processing the request. At that point the job is simply to write the target server's response back to the client (possibly adding some header fields of our own).
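  Underneath, this is nothing more than a readiness loop. The sketch below shows the shape of such a loop using Linux epoll for illustration (nginx hides the actual mechanism behind its event modules, and epoll is only one of them); it is a compilable fragment rather than a complete program, and upstream_fd is assumed to be an already connected non-blocking socket.

#include <stdio.h>
#include <sys/epoll.h>

/* Dispatch ready events to read/write handlers, the way
 * ngx_http_upstream_handler forwards to u->read_event_handler and
 * u->write_event_handler. */
static void on_upstream_writable(int fd) { printf("fd %d writable: flush the buffered request\n", fd); }
static void on_upstream_readable(int fd) { printf("fd %d readable: read and relay the response\n", fd); }

void event_loop(int upstream_fd)
{
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN | EPOLLOUT | EPOLLET,
                              .data.fd = upstream_fd };

    epoll_ctl(ep, EPOLL_CTL_ADD, upstream_fd, &ev);

    for ( ;; ) {
        struct epoll_event ready[16];
        int n = epoll_wait(ep, ready, 16, -1);

        for (int i = 0; i < n; i++) {
            if (ready[i].events & EPOLLOUT) {
                on_upstream_writable(ready[i].data.fd);   /* send the request */
            }
            if (ready[i].events & (EPOLLIN | EPOLLHUP | EPOLLERR)) {
                on_upstream_readable(ready[i].data.fd);   /* process the response */
            }
        }
    }
}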

// http/ngx_http_upstream.c
// send the response data to the client
static void
ngx_http_upstream_send_response(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
ssize_t n;
ngx_int_t rc;
ngx_event_pipe_t *p;
ngx_connection_t *c;
ngx_http_core_loc_conf_t *clcf;
// send the response header downstream
rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->post_action) {
ngx_http_upstream_finalize_request(r, u, rc);
return;
} u->header_sent = 1; if (u->upgrade) { #if (NGX_HTTP_CACHE) if (r->cache) {
ngx_http_file_cache_free(r->cache, u->pipe->temp_file);
} #endif ngx_http_upstream_upgrade(r, u);
return;
} c = r->connection; if (r->header_only) { if (!u->buffering) {
ngx_http_upstream_finalize_request(r, u, rc);
return;
} if (!u->cacheable && !u->store) {
ngx_http_upstream_finalize_request(r, u, rc);
return;
} u->pipe->downstream_error = 1;
} if (r->request_body && r->request_body->temp_file
&& r == r->main && !r->preserve_body
&& !u->conf->preserve_output)
{
ngx_pool_run_cleanup_file(r->pool, r->request_body->temp_file->file.fd);
r->request_body->temp_file->file.fd = NGX_INVALID_FILE;
} clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); if (!u->buffering) { #if (NGX_HTTP_CACHE) if (r->cache) {
ngx_http_file_cache_free(r->cache, u->pipe->temp_file);
} #endif if (u->input_filter == NULL) {
u->input_filter_init = ngx_http_upstream_non_buffered_filter_init;
u->input_filter = ngx_http_upstream_non_buffered_filter;
u->input_filter_ctx = r;
} u->read_event_handler = ngx_http_upstream_process_non_buffered_upstream;
r->write_event_handler =
ngx_http_upstream_process_non_buffered_downstream; r->limit_rate = 0;
r->limit_rate_set = 1; if (u->input_filter_init(u->input_filter_ctx) == NGX_ERROR) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} if (clcf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} n = u->buffer.last - u->buffer.pos; if (n) {
u->buffer.last = u->buffer.pos; u->state->response_length += n; if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} ngx_http_upstream_process_non_buffered_downstream(r); } else {
u->buffer.pos = u->buffer.start;
u->buffer.last = u->buffer.start; if (ngx_http_send_special(r, NGX_HTTP_FLUSH) == NGX_ERROR) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} if (u->peer.connection->read->ready || u->length == 0) {
ngx_http_upstream_process_non_buffered_upstream(r, u);
}
} return;
} /* TODO: preallocate event_pipe bufs, look "Content-Length" */ #if (NGX_HTTP_CACHE) if (r->cache && r->cache->file.fd != NGX_INVALID_FILE) {
ngx_pool_run_cleanup_file(r->pool, r->cache->file.fd);
r->cache->file.fd = NGX_INVALID_FILE;
} switch (ngx_http_test_predicates(r, u->conf->no_cache)) { case NGX_ERROR:
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return; case NGX_DECLINED:
u->cacheable = 0;
break; default: /* NGX_OK */ if (u->cache_status == NGX_HTTP_CACHE_BYPASS) { /* create cache if previously bypassed */ if (ngx_http_file_cache_create(r) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
}
} break;
} if (u->cacheable) {
time_t now, valid; now = ngx_time(); valid = r->cache->valid_sec; if (valid == 0) {
valid = ngx_http_file_cache_valid(u->conf->cache_valid,
u->headers_in.status_n);
if (valid) {
r->cache->valid_sec = now + valid;
}
} if (valid) {
r->cache->date = now;
r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); if (u->headers_in.status_n == NGX_HTTP_OK
|| u->headers_in.status_n == NGX_HTTP_PARTIAL_CONTENT)
{
r->cache->last_modified = u->headers_in.last_modified_time; if (u->headers_in.etag) {
r->cache->etag = u->headers_in.etag->value; } else {
ngx_str_null(&r->cache->etag);
} } else {
r->cache->last_modified = -1;
ngx_str_null(&r->cache->etag);
} if (ngx_http_file_cache_set_header(r, u->buffer.start) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} } else {
u->cacheable = 0;
}
} ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http cacheable: %d", u->cacheable); if (u->cacheable == 0 && r->cache) {
ngx_http_file_cache_free(r->cache, u->pipe->temp_file);
} if (r->header_only && !u->cacheable && !u->store) {
ngx_http_upstream_finalize_request(r, u, 0);
return;
} #endif p = u->pipe; p->output_filter = ngx_http_upstream_output_filter;
p->output_ctx = r;
p->tag = u->output.tag;
p->bufs = u->conf->bufs;
p->busy_size = u->conf->busy_buffers_size;
p->upstream = u->peer.connection;
p->downstream = c;
p->pool = r->pool;
p->log = c->log;
p->limit_rate = u->conf->limit_rate;
p->start_sec = ngx_time(); p->cacheable = u->cacheable || u->store; p->temp_file = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
if (p->temp_file == NULL) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} p->temp_file->file.fd = NGX_INVALID_FILE;
p->temp_file->file.log = c->log;
p->temp_file->path = u->conf->temp_path;
p->temp_file->pool = r->pool; if (p->cacheable) {
p->temp_file->persistent = 1; #if (NGX_HTTP_CACHE)
if (r->cache && !r->cache->file_cache->use_temp_path) {
p->temp_file->path = r->cache->file_cache->path;
p->temp_file->file.name = r->cache->file.name;
}
#endif } else {
p->temp_file->log_level = NGX_LOG_WARN;
p->temp_file->warn = "an upstream response is buffered "
"to a temporary file";
} p->max_temp_file_size = u->conf->max_temp_file_size;
p->temp_file_write_size = u->conf->temp_file_write_size; #if (NGX_THREADS)
if (clcf->aio == NGX_HTTP_AIO_THREADS && clcf->aio_write) {
p->thread_handler = ngx_http_upstream_thread_handler;
p->thread_ctx = r;
}
#endif p->preread_bufs = ngx_alloc_chain_link(r->pool);
if (p->preread_bufs == NULL) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} p->preread_bufs->buf = &u->buffer;
p->preread_bufs->next = NULL;
u->buffer.recycled = 1; p->preread_size = u->buffer.last - u->buffer.pos; if (u->cacheable) { p->buf_to_file = ngx_calloc_buf(r->pool);
if (p->buf_to_file == NULL) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} p->buf_to_file->start = u->buffer.start;
p->buf_to_file->pos = u->buffer.start;
p->buf_to_file->last = u->buffer.pos;
p->buf_to_file->temporary = 1;
} if (ngx_event_flags & NGX_USE_IOCP_EVENT) {
/* the posted aio operation may corrupt a shadow buffer */
p->single_buf = 1;
} /* TODO: p->free_bufs = 0 if use ngx_create_chain_of_bufs() */
p->free_bufs = 1; /*
* event_pipe would do u->buffer.last += p->preread_size
* as though these bytes were read
*/
u->buffer.last = u->buffer.pos; if (u->conf->cyclic_temp_file) { /*
* we need to disable the use of sendfile() if we use cyclic temp file
* because the writing a new data may interfere with sendfile()
* that uses the same kernel file pages (at least on FreeBSD)
*/ p->cyclic_temp_file = 1;
c->sendfile = 0; } else {
p->cyclic_temp_file = 0;
} p->read_timeout = u->conf->read_timeout;
p->send_timeout = clcf->send_timeout;
p->send_lowat = clcf->send_lowat; p->length = -1; if (u->input_filter_init
&& u->input_filter_init(p->input_ctx) != NGX_OK)
{
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
} u->read_event_handler = ngx_http_upstream_process_upstream;
r->write_event_handler = ngx_http_upstream_process_downstream; ngx_http_upstream_process_upstream(r, u);
} // read upstream data while it is being relayed to the client
// http/ngx_http_upstream.c
static void
ngx_http_upstream_process_upstream(ngx_http_request_t *r,
ngx_http_upstream_t *u)
{
ngx_event_t *rev;
ngx_event_pipe_t *p;
ngx_connection_t *c; c = u->peer.connection;
p = u->pipe;
rev = c->read; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http upstream process upstream"); c->log->action = "reading upstream"; if (rev->timedout) { p->upstream_error = 1;
ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); } else { if (rev->delayed) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http upstream delayed"); if (ngx_handle_read_event(rev, 0) != NGX_OK) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
} return;
}
// read the response through the event pipe: output is not produced in one shot,
// it is streamed out continuously
if (ngx_event_pipe(p, 0) == NGX_ABORT) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
}
} ngx_http_upstream_process_request(r, u);
} // http/ngx_http_upstream.c
// finish up the response to the client
static void
ngx_http_upstream_process_request(ngx_http_request_t *r,
ngx_http_upstream_t *u)
{
ngx_temp_file_t *tf;
ngx_event_pipe_t *p; p = u->pipe; #if (NGX_THREADS) if (p->writing && !p->aio) { /*
* make sure to call ngx_event_pipe()
* if there is an incomplete aio write
*/ if (ngx_event_pipe(p, 1) == NGX_ABORT) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
}
} if (p->writing) {
return;
} #endif if (u->peer.connection) { if (u->store) { if (p->upstream_eof || p->upstream_done) { tf = p->temp_file; if (u->headers_in.status_n == NGX_HTTP_OK
&& (p->upstream_done || p->length == -1)
&& (u->headers_in.content_length_n == -1
|| u->headers_in.content_length_n == tf->offset))
{
ngx_http_upstream_store(r, u);
}
}
} #if (NGX_HTTP_CACHE) if (u->cacheable) { if (p->upstream_done) {
ngx_http_file_cache_update(r, p->temp_file); } else if (p->upstream_eof) { tf = p->temp_file; if (p->length == -1
&& (u->headers_in.content_length_n == -1
|| u->headers_in.content_length_n
== tf->offset - (off_t) r->cache->body_start))
{
ngx_http_file_cache_update(r, tf); } else {
ngx_http_file_cache_free(r->cache, tf);
} } else if (p->upstream_error) {
ngx_http_file_cache_free(r->cache, p->temp_file);
}
} #endif if (p->upstream_done || p->upstream_eof || p->upstream_error) {
ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"http upstream exit: %p", p->out);
// output complete: close the connection;
// if output is not complete, processing resumes on the next readiness event
// instead of blocking until the target server has fully responded,
// which is exactly where non-blocking I/O pays off
if (p->upstream_done
|| (p->upstream_eof && p->length == -1))
{
// close the connection
ngx_http_upstream_finalize_request(r, u, 0);
return;
} if (p->upstream_eof) {
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
"upstream prematurely closed connection");
} ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);
return;
}
} if (p->downstream_error) {
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"http upstream downstream error"); if (!u->cacheable && !u->store && u->peer.connection) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
}
}
}

  A new term here: the response is piped to the client, i.e. streamed out chunk by chunk rather than buffered and written in one go.
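  The event pipe is essentially a bounded-buffer relay: read whatever the upstream currently has, write as much as the client will currently accept, and go back to the event loop whenever either side would block. The simplified, blocking relay below illustrates only that core idea; the real ngx_event_pipe additionally juggles buffer chains, temporary files and rate limiting.

#include <errno.h>
#include <unistd.h>

/* Relay bytes from the upstream socket to the client socket through a
 * fixed-size buffer, never holding the whole response in memory.
 * Simplified: the real ngx_event_pipe is non-blocking and returns to the
 * event loop instead of looping here. */
int relay(int upstream_fd, int client_fd)
{
    char buf[8192];

    for ( ;; ) {
        ssize_t n = read(upstream_fd, buf, sizeof(buf));

        if (n == 0) {
            return 0;                       /* upstream_eof: response complete */
        }
        if (n < 0) {
            if (errno == EINTR) continue;
            return -1;                      /* upstream_error */
        }

        for (ssize_t off = 0; off < n; ) {  /* flush this chunk downstream */
            ssize_t w = write(client_fd, buf + off, (size_t) (n - off));
            if (w < 0) {
                if (errno == EINTR) continue;
                return -1;                  /* downstream_error */
            }
            off += w;
        }
    }
}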

  So, after all this, have we actually explained nginx's proxying clearly? (Exercise for the reader: at what point do the request headers get replaced?)

    1. Parse the client URL and body;
    2. Connect to the target server;
    3. Build the headers and send them;
    4. Build the body and send it;
    5. Wait asynchronously for the server's response;
    6. Read the server's response;
    7. Pipe the data back to the client;

  The overall picture holds up. Because nginx relies heavily on non-blocking I/O, that is, a great deal of asynchronous processing, it achieves its formidable performance, and years of production use have given everyone plenty of confidence in it.
