1. Configure filebeat_nginx.yml

filebeat.modules:
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]

#----------------------------------Kafka output--------------------------------#
output.kafka:
  version: "1.0.1"
  enabled: true
  hosts: ['xxx:9092', 'xxx:9092', 'xxx:9092']
  topic: 'temp'
  required_acks: 1            # default
  compression: gzip           # default
  max_message_bytes: 1000000  # default
  codec.format:
    string: '%{[message]}'
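
Before starting Filebeat, it can be useful to validate the configuration and the connection to the Kafka brokers with Filebeat's built-in test subcommands (a minimal check; xxx:9092 are the placeholder broker addresses from the config above):

./filebeat test config -c filebeat_nginx.yml   # validate the YAML configuration
./filebeat test output -c filebeat_nginx.yml   # check that the configured Kafka brokers are reachable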

2. Start Filebeat

./filebeat -e -c filebeat_nginx.yml
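
The -e flag keeps Filebeat in the foreground and writes its own logs to stderr, which is convenient while testing. For a longer-running setup, one option (an assumption, not part of the original steps) is to start it in the background:

nohup ./filebeat -c filebeat_nginx.yml > filebeat.log 2>&1 &   # run detached; Filebeat's own logs go to filebeat.log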

3. Access Nginx
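
To generate entries like the samples below, request the default page a few times; the host and path here are taken from the sample output (localhost and /index.html), so adjust them to your environment:

curl -s -o /dev/null http://localhost/index.html   # each request appends one line to the access log

Then watch the access log: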

tail -f /var/log/nginx/access.log

Log file output:

{"ts":"2019-10-14 10:53:22","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:23","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:23","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:30","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:31","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}

Kafka output (the messages in Kafka are the raw log lines rather than the full Filebeat JSON event, because codec.format emits only the message field):

{"ts":"2019-10-14 10:53:23","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:23","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:22","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:30","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
{"ts":"2019-10-14 10:53:31","host":"127.0.0.1","clientip":"127.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"localhost","url":"/index.html","domain":"localhost","xff":"-","referer":"-","status":"304"}
