Collecting Nginx logs with an ELK filter
The ELK + Redis installation was covered in an earlier post; this one only covers collecting Nginx logs without changing their format.
1. The log format on the Nginx side:

```
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
access_log /usr/local/nginx/logs/access.log access;
```
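To make the field order concrete, here is a short Python sketch that assembles one example line in exactly the order the `log_format` above defines. Every value below is an invented illustration, not real traffic:

```python
# Assemble one example access-log line in the order defined by log_format
# "access" above. All field values here are made-up examples.
fields = {
    "remote_addr": "192.168.100.1",
    "remote_user": "-",
    "time_local": "07/Feb/2018:10:21:43 +0800",
    "request": "GET /index.html HTTP/1.1",
    "status": "200",
    "body_bytes_sent": "612",
    "http_referer": "-",
    "http_user_agent": "curl/7.29.0",
    "http_x_forwarded_for": "-",
}

line = ('{remote_addr} - {remote_user} [{time_local}] "{request}" '
        '{status} {body_bytes_sent} "{http_referer}" '
        '"{http_user_agent}" "{http_x_forwarded_for}"').format(**fields)
print(line)
```

The grok pattern defined later in this post has to capture these fields in this same order.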
2. The logstash agent configuration on the Nginx side:

```
[root@localhost conf]# cat logstash_agent.conf
input {
  file {
    path => [ "/usr/local/nginx/logs/access.log" ]
    type => "nginx_access"
  }
}
output {
  redis {
    data_type => "list"
    key => "nginx_access_log"
    host => "192.168.100.70"
    port => "6379"
  }
}
```
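With `data_type => "list"`, the agent pushes each event onto one end of a Redis list and the indexer pops from the other, so Redis acts as a FIFO buffer between them. A minimal stdlib sketch of that queue semantics, using `collections.deque` as a stand-in for the real Redis connection:

```python
from collections import deque
import json

# Stand-in for the Redis list "nginx_access_log": the agent pushes JSON
# events onto one end, the indexer pops them from the other (FIFO).
nginx_access_log = deque()

def agent_push(event):      # logstash-agent side (RPUSH onto the list)
    nginx_access_log.append(json.dumps(event))

def indexer_pop():          # logstash-indexer side (BLPOP off the list)
    return json.loads(nginx_access_log.popleft())

agent_push({"type": "nginx_access", "message": "GET /a 200"})
agent_push({"type": "nginx_access", "message": "GET /b 404"})

first = indexer_pop()       # events come out in arrival order
```

If the indexer falls behind, events simply accumulate in the list until it catches up, which is exactly why Redis sits between the agent and the indexer.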
3. The logstash indexer configuration:

```
[root@elk-node1 conf]# cat logstash_indexer.conf
input {
  redis {
    data_type => "list"
    key => "nginx_access_log"
    host => "192.168.100.70"
    port => "6379"
  }
}
filter {
  grok {
    patterns_dir => "./patterns"
    match => { "message" => "%{NGINXACCESS}" }
  }
  geoip {
    source => "clientip"
    target => "geoip"
    #database => "/usr/local/logstash/GeoLite2-City.mmdb"
    database => "/usr/local/src/GeoLiteCity.dat"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "response","integer" ]
    convert => [ "bytes","integer" ]
  }
  mutate { remove_field => ["message"] }
  date {
    match => [ "timestamp","dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  mutate {
    remove_field => "timestamp"
  }
}
output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => "192.168.100.71"
    #protocol => "http"
    index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
  }
}
```
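The date filter's `dd/MMM/yyyy:HH:mm:ss Z` pattern corresponds to strptime's `%d/%b/%Y:%H:%M:%S %z`, so the Nginx `$time_local` value parses cleanly into a timezone-aware timestamp. A quick Python check of that mapping (the timestamp value is a made-up example):

```python
from datetime import datetime

# Logstash/Joda "dd/MMM/yyyy:HH:mm:ss Z" is "%d/%b/%Y:%H:%M:%S %z"
# in Python strptime notation.
ts = "07/Feb/2018:10:21:43 +0800"
parsed = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
```

After this parse the date filter writes the result into `@timestamp`, which is why the raw `timestamp` field can then be safely removed.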
4. Create the pattern file that Logstash uses to parse the Nginx log:

```
mkdir -pv /usr/local/logstash/patterns
[root@elk-node1 ]# vim /usr/local/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
# This pattern must stay consistent with Nginx's log_format.
```
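To sanity-check the pattern shape before deploying it, the same structure can be approximated with a plain Python regex. This is a simplified stand-in for the grok pattern above (grok's `IPORHOST`, `QS`, etc. are replaced with generic groups), not grok itself:

```python
import re

# Simplified regex mirror of the NGINXACCESS grok pattern; each named
# group corresponds to one grok capture. The sample line is invented.
NGINXACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) "(?P<referrer>[^"]*)" '
    r'"(?P<agent>[^"]*)" (?P<http_x_forwarded_for>\S+)'
)

line = ('192.168.100.1 - - [07/Feb/2018:10:21:43 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"')
m = NGINXACCESS.match(line)
fields = m.groupdict()
```

If the regex fails to match a real line from access.log, the grok pattern will likely `_grokparsefailure` on it too, so this is a cheap pre-flight test.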
What if we also want the Nginx response time in the log? Add `$request_time` to the format:

```
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" $request_time';
```
Then adjust the grok pattern to match, adding one more capture at the end:

```
[root@elk-node1 patterns]# cat nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NGUSER:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes:float}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for} %{NUMBER:request_time:float}
```
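The `:float` suffix in `%{NUMBER:request_time:float}` makes grok emit a numeric field rather than a string, so Kibana can aggregate on it directly. The equivalent cast, sketched in Python on the tail of an example line (values invented):

```python
# grok's %{NUMBER:request_time:float} captures the trailing field and
# casts it to a number; the same idea on the tail of an example line.
tail = '"curl/7.29.0" "-" 0.005'            # ...agent, XFF, request_time
request_time = float(tail.rsplit(" ", 1)[-1])
```

Without the `:float` suffix the field would arrive in Elasticsearch as a string and percentile/average aggregations on response time would not work.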
Attached below is a logstash.conf from my own production environment at the time (a logstash-5.2.2 conf file):
```
input {
  redis {
    data_type => "list"
    key => "uc01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "uc02-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "p-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "https-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "rms01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
  redis {
    data_type => "list"
    key => "rms02-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "juzi1@#$%QW"
  }
}
```
```
filter {
  if [path] =~ "nginx" {
    grok {
      patterns_dir => "./patterns"
      match => { "message" => "%{NGINXACCESS}" }
    }
    mutate {
      remove_field => ["message"]
    }
    # Parse timestamp into @timestamp BEFORE removing it; if the field is
    # removed first, the date filter has nothing to match.
    date {
      match => [ "timestamp","dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    mutate {
      remove_field => "timestamp"
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/usr/local/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
  else {
    drop {}
  }
}
```
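The two `add_field` calls plus the `mutate` convert build `[geoip][coordinates]` as a `[longitude, latitude]` pair of floats, which is the order Kibana's tile map expects. What those three steps amount to, sketched in Python (coordinate values invented for illustration):

```python
# What the geoip add_field + mutate convert steps produce: a
# [longitude, latitude] float pair under [geoip][coordinates].
event = {"geoip": {"longitude": "116.3", "latitude": "39.9"}}

coords = []
coords.append(event["geoip"]["longitude"])    # first add_field
coords.append(event["geoip"]["latitude"])     # second add_field
event["geoip"]["coordinates"] = [float(c) for c in coords]  # convert => float
```

Note the longitude-first ordering: swapping the two `add_field` lines would plot every client mirrored across the map.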
```
output {
  if [type] == "uc01-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-uc01-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "uc02-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-uc02-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "p-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-p-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "https-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-api-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "rms01-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-rms01-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
  if [type] == "rms02-nginx-access" {
    elasticsearch {
      hosts => [ "192.168.100.70:9200","192.168.100.71:9200" ]
      index => "logstash-rms02-log-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "changeme"
    }
  }
}
```
logstash_indexer.conf
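The output section above fans events out to per-service indices keyed on the `type` field, with a `%{+YYYY.MM.dd}` daily suffix. The routing logic amounts to a simple lookup plus a date format, sketched in Python:

```python
from datetime import date

# Sketch of the output routing: event type -> index prefix, plus the
# %{+YYYY.MM.dd} daily suffix Logstash appends.
INDEX_PREFIX = {
    "uc01-nginx-access": "logstash-uc01-log",
    "uc02-nginx-access": "logstash-uc02-log",
    "p-nginx-access": "logstash-p-log",
    "https-nginx-access": "logstash-api-log",
    "rms01-nginx-access": "logstash-rms01-log",
    "rms02-nginx-access": "logstash-rms02-log",
}

def index_for(event_type, day):
    """Return the daily Elasticsearch index name for an event type."""
    return "%s-%s" % (INDEX_PREFIX[event_type], day.strftime("%Y.%m.%d"))

name = index_for("https-nginx-access", date(2018, 2, 7))
```

One index family per service keeps retention and access control manageable: old daily indices can be dropped per service without touching the others.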
```
[root@localhost ~]$ cd /usr/local/logstash-5.2./etc
[root@localhost etc]$ cat logstash_agentd.conf
input {
  file {
    type => "web-nginx-access"
    path => "/usr/local/nginx/logs/access.log"
  }
}
output {
  #file {
  #  path => "/tmp/%{+YYYY-MM-dd}.messages.gz"
  #  gzip => true
  #}
  redis {
    data_type => "list"
    key => "web01-nginx-access-logs"
    host => "192.168.100.71"
    port => ""
    db => ""
    password => "@#$%QW"
  }
}
```
logstash_agentd.conf