The ELK + Redis installation was covered in an earlier post; here we only cover collecting Nginx logs without changing the existing log format.

1. Log format configuration on the Nginx side:

    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /usr/local/nginx/logs/access.log access;

2. logstash-agent configuration on the Nginx host:

    [root@localhost conf]# cat logstash_agent.conf
    input {
        file {
            path => [ "/usr/local/nginx/logs/access.log" ]
            type => "nginx_access"
        }
    }
    output {
        redis {
            data_type => "list"
            key => "nginx_access_log"
            host => "192.168.100.70"
            port => "6379"
        }
    }
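With `data_type => "list"`, the redis output pushes each event onto the named key with RPUSH, and the indexer later pops from the same list. As an illustrative sketch only (not the plugin's actual code), here is the RESP wire command such a push amounts to, built in Python with the key name taken from the config above:

```python
def build_rpush(key: str, value: str) -> bytes:
    """Build the RESP (REdis Serialization Protocol) bytes for `RPUSH key value`,
    which is what a list-type producer like Logstash's redis output sends."""
    parts = ["RPUSH", key, value]
    out = [f"*{len(parts)}\r\n".encode()]          # array header: number of arguments
    for p in parts:
        data = p.encode("utf-8")
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string per argument
    return b"".join(out)

cmd = build_rpush("nginx_access_log", '{"message":"..."}')
print(cmd[:30])
```

The indexer side is the mirror image: a blocking pop (BLPOP/LPOP) on the same key, which is why agent and indexer must agree on `key` and `data_type`.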

3. logstash_indexer configuration:

    [root@elk-node1 conf]# cat logstash_indexer.conf
    input {
        redis {
            data_type => "list"
            key => "nginx_access_log"
            host => "192.168.100.70"
            port => "6379"
        }
    }

    filter {
        grok {
            patterns_dir => "./patterns"
            match => { "message" => "%{NGINXACCESS}" }
        }
        geoip {
            source => "clientip"
            target => "geoip"
            #database => "/usr/local/logstash/GeoLite2-City.mmdb"
            database => "/usr/local/src/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            convert => [ "response", "integer" ]
            convert => [ "bytes", "integer" ]
        }
        mutate { remove_field => ["message"] }
        date {
            match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
        }
        mutate {
            remove_field => "timestamp"
        }
    }

    output {
        #stdout { codec => rubydebug }
        elasticsearch {
            hosts => "192.168.100.71"
            #protocol => "http"
            index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
        }
    }
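The date filter parses the bracketed Nginx timestamp into the event's `@timestamp`, after which the raw `timestamp` field can be dropped. Logstash's `dd/MMM/yyyy:HH:mm:ss Z` corresponds to `strptime`'s `%d/%b/%Y:%H:%M:%S %z`; a quick sketch (the sample timestamp is made up):

```python
from datetime import datetime, timezone

# Logstash date pattern "dd/MMM/yyyy:HH:mm:ss Z"  ->  strptime "%d/%b/%Y:%H:%M:%S %z"
ts = datetime.strptime("07/Mar/2017:18:25:43 +0800", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())                            # local time with offset
print(ts.astimezone(timezone.utc).isoformat())   # what ends up in @timestamp (UTC)
```

Note the ordering in the config above: date must run while `timestamp` still exists, so the `remove_field => "timestamp"` mutate comes after it.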

4. Create the patterns file that logstash uses to parse the Nginx log:

    mkdir -pv /usr/local/logstash/patterns

    [root@elk-node1 ]# vim /usr/local/logstash/patterns/nginx
    NGUSERNAME [a-zA-Z\.\@\-\+_%]+
    NGUSER %{NGUSERNAME}
    NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}

    # These patterns must match Nginx's log_format exactly.
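Grok patterns compile down to named-group regular expressions. A rough Python approximation of the NGINXACCESS pattern above, matched against a fabricated sample line, shows which fields it extracts (this regex is a simplified stand-in, not grok's exact expansion):

```python
import re

# Simplified Python equivalent of the NGINXACCESS grok pattern above.
NGINXACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) "(?P<referrer>[^"]*)" '
    r'"(?P<agent>[^"]*)" (?P<http_x_forwarded_for>\S+)'
)

# Fabricated sample line in the access format defined at the top of this post.
line = ('192.168.100.1 - - [07/Mar/2017:18:25:43 +0800] "GET /index.html HTTP/1.1" '
        '200 612 "-" "curl/7.29.0" "-"')
m = NGINXACCESS.match(line)
print(m.group("clientip"), m.group("response"), m.group("request"))
```

If grok fails to match, the event is tagged `_grokparsefailure`, which is the first thing to check when fields come out empty in Kibana.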

What if we also want the Nginx response time in the log? Add $request_time to the log format so new entries include it:

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $request_time';

Then extend the grok pattern with one more field to match:

[root@elk-node1 patterns]# cat nginx

    NGUSERNAME [a-zA-Z\.\@\-\+_%]+
    NGUSER %{NGUSERNAME}
    NGINXACCESS %{IPORHOST:clientip} - %{NGUSER:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes:float}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for} %{NUMBER:request_time:float}
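The `:float` suffix in `%{NUMBER:request_time:float}` makes grok emit the field as a number instead of a string, so Elasticsearch can aggregate on it. The same parse-and-cast step, sketched in Python against a fabricated line tail:

```python
import re

# Tail of the extended pattern: status, bytes, and the trailing request_time,
# mirroring the :float casts in %{NUMBER:bytes:float} and %{NUMBER:request_time:float}.
TAIL = re.compile(r'(?P<response>\d+) (?P<bytes>\d+|-) .* (?P<request_time>[\d.]+)$')

m = TAIL.search('... 200 612 "-" "curl/7.29.0" "-" 0.005')
event = m.groupdict()
event["request_time"] = float(event["request_time"])  # grok's :float cast
if event["bytes"] != "-":
    event["bytes"] = float(event["bytes"])
print(event["request_time"], event["bytes"])
```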

Appendix: a logstash.conf from my production environment at the time (a logstash-5.2.2 conf file):

    input {
        redis {
            data_type => "list"
            key => "uc01-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
        redis {
            data_type => "list"
            key => "uc02-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
        redis {
            data_type => "list"
            key => "p-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
        redis {
            data_type => "list"
            key => "https-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
        redis {
            data_type => "list"
            key => "rms01-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
        redis {
            data_type => "list"
            key => "rms02-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "juzi1@#$%QW"
        }
    }

    filter {
        if [path] =~ "nginx" {
            grok {
                patterns_dir => "./patterns"
                match => { "message" => "%{NGINXACCESS}" }
            }

            mutate {
                remove_field => ["message"]
            }

            # Parse the timestamp into @timestamp before removing it.
            date {
                match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
            }
            mutate {
                remove_field => "timestamp"
            }

            geoip {
                source => "clientip"
                target => "geoip"
                database => "/usr/local/GeoLite2-City.mmdb"
                add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
            }

            mutate {
                convert => [ "[geoip][coordinates]", "float" ]
            }
        }
        else {
            drop {}
        }
    }

    output {
        if [type] == "uc01-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-uc01-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
        if [type] == "uc02-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-uc02-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
        if [type] == "p-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-p-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
        if [type] == "https-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-api-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
        if [type] == "rms01-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-rms01-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
        if [type] == "rms02-nginx-access" {
            elasticsearch {
                hosts => [ "192.168.100.70:9200", "192.168.100.71:9200" ]
                index => "logstash-rms02-log-%{+YYYY.MM.dd}"
                user => "logstash_internal"
                password => "changeme"
            }
        }
    }
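The output section routes each event by its `type` field (set by each agent's file input) to a per-service daily index. That routing logic, sketched as a Python lookup (the mapping mirrors three of the conditionals above; the function name is my own):

```python
from datetime import date

# Mirrors the if [type] == "..." conditionals above: event type -> index prefix.
TYPE_TO_INDEX = {
    "uc01-nginx-access": "logstash-uc01-log",
    "uc02-nginx-access": "logstash-uc02-log",
    "https-nginx-access": "logstash-api-log",
}

def index_for(event_type: str, day: date) -> str:
    """Return the daily index name, emulating "prefix-%{+YYYY.MM.dd}"."""
    base = TYPE_TO_INDEX.get(event_type)
    if base is None:          # no matching conditional: the event is not indexed
        return ""
    return f"{base}-{day:%Y.%m.%d}"

print(index_for("uc01-nginx-access", date(2017, 3, 7)))
```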

logstash_indexer.conf

    [root@localhost ~]$ cd /usr/local/logstash-5.2./etc
    [root@localhost etc]$ cat logstash_agentd.conf
    input {
        file {
            type => "web-nginx-access"
            path => "/usr/local/nginx/logs/access.log"
        }
    }

    output {
        #file {
        #    path => "/tmp/%{+YYYY-MM-dd}.messages.gz"
        #    gzip => true
        #}

        redis {
            data_type => "list"
            key => "web01-nginx-access-logs"
            host => "192.168.100.71"
            port => ""
            db => ""
            password => "@#$%QW"
        }
    }

logstash_agentd.conf
