06. ELK Stack + Kafka Cluster
Introduction
Filebeat collects log data from local files. Installed as an agent on a server, Filebeat monitors log directories or specific log files, tails them, and forwards the events to Elasticsearch or Logstash for indexing.
Logstash and Filebeat can both collect logs. Filebeat is the lighter of the two: it is written in Go, uses far fewer resources, and handles high concurrency well. Logstash, however, has filter plugins that can parse and enrich log events. A common architecture is therefore to let Filebeat collect the logs and ship them to a message queue such as Redis or Kafka, and to have Logstash consume from the queue, apply filters to parse and analyze the events, and write the results to Elasticsearch. Kafka is a distributed publish-subscribe messaging system that was open-sourced by LinkedIn and is now a top-level Apache project. Consumers pull messages from the brokers, and the design favors high throughput; Kafka was originally built for log collection and transport. Replication has been supported since version 0.8, but there is no transaction support and no strict guarantee against duplicated, lost, or corrupted messages, which makes Kafka a good fit for collecting the large data volumes produced by Internet-scale services.
Environment
IP | Role | Software | Specs | Network | Notes |
---|---|---|---|---|---|
192.168.43.176 | ES / data storage | elasticsearch-7.2 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.215 | Kibana / UI | kibana-7.2 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.164 | Filebeat / log collection | Filebeat-7.2 / nginx | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.30 | Logstash / data pipeline | logstash-7.2 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.86 | Kibana / UI | kibana-7.2 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.47 | Kafka / message queue | Kafka 2.12 / zk 3.4 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.151 | Kafka / message queue | Kafka 2.12 / zk 3.4 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.43 | Kafka / message queue | Kafka 2.12 / zk 3.4 | 2 GB RAM / 40 GB disk | NAT, internal | |
192.168.43.194 | Tomcat | tomcat 8.5 | 2 GB RAM / 40 GB disk | NAT, internal | |
For installing and configuring ZooKeeper and Kafka, see my other post:
https://www.cnblogs.com/you-men/p/12884779.html
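Before wiring anything to the brokers, it is worth a quick health check of the ZooKeeper/Kafka cluster. A minimal sketch, assuming a tarball install (commands run from the Kafka directory), that nc is available, and that ZooKeeper's stat four-letter-word command is enabled (it is by default on 3.4):
# ZooKeeper should answer with its Mode (leader or follower) and connection stats
echo stat | nc 192.168.43.47 2181
# Each broker should report the API versions it supports; errors here mean the broker is unreachable
./bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092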
Using Logstash to interact with Kafka
Edit the Logstash configuration file (kafka.conf):
input {
  stdin {}
}

output {
  kafka {
    topic_id => "kafkatest"
    bootstrap_servers => "192.168.43.47:9092"
    batch_size => 5
  }
  stdout {
    codec => "rubydebug"
  }
}
Start Logstash and enter some test data:
./bin/logstash -f kafka.conf
zhoujian
{
    "@timestamp" => 2020-07-24T07:11:26.235Z,
    "message" => "zhoujian",
    "host" => "logstash-30",
    "@version" => "1"
}
youmen
{
    "@timestamp" => 2020-07-24T07:11:29.441Z,
    "message" => "youmen",
    "host" => "logstash-30",
    "@version" => "1"
}
Check the data written to Kafka
# List the existing Kafka topics
./bin/kafka-topics.sh --list --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092
kafkatest
test-you-io
# Consume the messages in the kafkatest topic
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic kafkatest --from-beginning
2020-07-24T07:13:59.461Z logstash-30 zhoujian
2020-07-24T07:14:01.518Z logstash-30 youmen
The data was written successfully, so the Kafka setup is working.
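Optionally, describe the topic to see how the auto-created kafkatest topic was laid out across the brokers; the partition count and replication factor come from the broker defaults, so the output may vary:
# Show the partitions, leaders and replicas of the kafkatest topic
./bin/kafka-topics.sh --describe --bootstrap-server 192.168.43.47:9092 --topic kafkatest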
Configure Filebeat
Ship the logs to Kafka
Edit /etc/filebeat/filebeat.yml: leave the default Elasticsearch and Logstash outputs commented out and add a Kafka output:
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
For reference, the complete (trimmed) configuration on the Tomcat host:
[root@tomcat-194 logs]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/localhost_access_log.2020-07*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.kafka:
  enabled: true
  hosts: ["192.168.43.47:9092","192.168.43.151:9092","192.168.43.43:9092"]
  topic: "tomcat-filebeat"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
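Before (re)starting the service, Filebeat can validate both the configuration file and the Kafka output. A quick check, assuming Filebeat was installed from the RPM so the filebeat binary is on the PATH and a systemd unit exists:
# Validate the YAML and the output section, then restart the agent
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml
systemctl restart filebeat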
Check in Kafka that the Tomcat logs have arrived
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092 --topic tomcat-filebeat --from-beginning
{"@timestamp":"2020-07-24T06:35:24.294Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"message":"{\"client\":\"192.168.43.84\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:14:35:10 +0800]\", \"method\":\"GET /docs/config/ HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"6826\", \"Query?string\":\"\", \"partner\":\"http://192.168.43.194:8080/\", \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"},"host":{"name":"tomcat-194","id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","os":{"platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core"}},"agent":{"hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7"},"log":{"offset":19393,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}}}
{"@timestamp":"2020-07-24T06:38:29.339Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","name":"tomcat-194","architecture":"x86_64","os":{"family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)"}},"agent":{"ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0","type":"filebeat"},"ecs":{"version":"1.0.0"},"message":"{\"client\":\"192.168.43.84\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:14:38:18 +0800]\", \"method\":\"GET /manager/status HTTP/1.1\", \"status\":\"403\", \"send bytes\":\"3446\", \"Query?string\":\"\", \"partner\":\"http://192.168.43.194:8080/\", \"Agent version\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36\"}","log":{"offset":19797,"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"}},"input":{"type":"log"}}
^CProcessed a total of 66 messages
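To count how many events have reached the topic without consuming them, the GetOffsetShell tool prints the latest offset of every partition; the sum across partitions is the total number of messages (a sketch, run from the Kafka directory):
# --time -1 requests the latest offset of each partition of tomcat-filebeat
./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.43.47:9092 --topic tomcat-filebeat --time -1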
Reading logs from Kafka into Elasticsearch with Logstash
Configure Logstash to read the logs from Kafka
cat kafka-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.43.47:9092,192.168.43.151:9092,192.168.43.43:9092"
    topics => "tomcat-filebeat"
    consumer_threads => 1
    decorate_events => true
    codec => "json"
    auto_offset_reset => "latest"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.43.176:9200"]
    index => "tomcat-filebeat-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => "rubydebug"
  }
}
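Before starting the pipeline, the syntax can be verified with Logstash's built-in config test:
./bin/logstash -f kafka-es.conf --config.test_and_exit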
Run Logstash in the foreground to make sure events are output correctly:
./bin/logstash -f kafka-es.conf
constant ::Fixnum is deprecated
{
    "input" => {
        "type" => "log"
    },
    "@version" => "1",
    "message" => "{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:47:08 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}",
    "agent" => {
        "version" => "7.2.0",
        "hostname" => "tomcat-194",
        "ephemeral_id" => "894657d2-af1a-4660-a3eb-98602bc3d1d7",
        "id" => "cfe87df5-c912-49d0-8758-b73e917a6c9c",
        "type" => "filebeat"
    },
    "host" => {
        "name" => "tomcat-194",
        "os" => {
            "version" => "7 (Core)",
            "name" => "CentOS Linux",
            "codename" => "Core",
            "family" => "redhat",
            "platform" => "centos",
            "kernel" => "3.10.0-514.el7.x86_64"
        },
        "id" => "b029c3ce28374f7db698c050e342457f",
        "containerized" => false,
        "hostname" => "tomcat-194",
        "architecture" => "x86_64"
    },
    "@timestamp" => 2020-07-24T07:47:11.857Z,
    "log" => {
        "offset" => 20203,
        "file" => {
            "path" => "/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"
        }
    },
    "ecs" => {
        "version" => "1.0.0"
    }
}
# For comparison, the raw message as seen on a Kafka node
{"@timestamp":"2020-07-24T07:53:11.944Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.2.0","topic":"tomcat-filebeat"},"host":{"id":"b029c3ce28374f7db698c050e342457f","containerized":false,"hostname":"tomcat-194","architecture":"x86_64","name":"tomcat-194","os":{"codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-514.el7.x86_64"}},"agent":{"type":"filebeat","ephemeral_id":"894657d2-af1a-4660-a3eb-98602bc3d1d7","hostname":"tomcat-194","id":"cfe87df5-c912-49d0-8758-b73e917a6c9c","version":"7.2.0"},"log":{"file":{"path":"/usr/local/tomcat/logs/localhost_access_log.2020-07-24.log"},"offset":20462},"message":"{\"client\":\"192.168.43.227\", \"client user\":\"-\", \"authenticated\":\"-\", \"access time\":\"[24/Jul/2020:15:53:06 +0800]\", \"method\":\"GET / HTTP/1.1\", \"status\":\"200\", \"send bytes\":\"11215\", \"Query?string\":\"\", \"partner\":\"-\", \"Agent version\":\"curl/7.29.0\"}","input":{"type":"log"},"ecs":{"version":"1.0.0"}}
Check the index in Elasticsearch
curl -XGET "http://127.0.0.1:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-7-2020.07.24 z0Ff-j7WSlSm4ZBH6IhZaw 1 1 185 60 3.7mb 2mb
green open .monitoring-kibana-7-2020.07.24 PWqXvObhSRazQn4CY8Z2lg 1 1 3 0 216.3kb 73.4kb
green open .kibana_task_manager Ptj7ydZmQqGG7hWxK2NbSg 1 1 2 0 61.2kb 45.5kb
green open .kibana_2 fot9Sk6jRWa2vS5cQGvOeQ 1 1 5 0 68.6kb 34.3kb
green open .kibana_1 jYD4jXLVTeeAMImEz9NEVA 1 1 1 0 18.7kb 9.3kb
green open .tasks NIwDk-PYQT-d-njh3g0t0g 1 1 1 0 12.7kb 6.3kb
green open tomcat-filebeat-2020.07.24 s3aB-c6GSemUHvaurYQ8Zw 1 1 38 0 227.4kb 80.3kb
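As a final check, pull one document out of the new index and confirm that its fields match what the Logstash stdout showed. Run it on the Elasticsearch host, like the _cat/indices call above:
curl -XGET "http://127.0.0.1:9200/tomcat-filebeat-2020.07.24/_search?size=1&pretty"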