Building a Log Collection System with Kafka + Zookeeper + Filebeat + ELK
ELK
ELK is currently one of the mainstream log-system stacks, so I won't introduce it at length here.
Filebeat collects the logs and ships them to Kafka, which buffers against message loss during network problems.
Once Kafka has received the log messages, Logstash consumes them directly from the topic.
Logstash forwards the logs from Kafka to Elasticsearch.
Kibana visualizes the log data stored in Elasticsearch.
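Putting the pieces together, the data flows through the stack like this (the topic name is the one configured later in this post):
logs on disk -> Filebeat -> Kafka (topic: kafka_run_log) -> Logstash -> Elasticsearch -> Kibana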

Environment:
Software versions:
- CentOS 7.4
- Java 1.8.0_45
- Elasticsearch 6.4.0
- Logstash 6.4.0
- Filebeat 6.4.0
- Kibana 6.4.0
- Kafka 2.12
- Zookeeper 3.4.13
Servers:
- 10.241.0.1 squid (package distribution, central control)
- 10.241.0.10 node1
- 10.241.0.11 node2
- 10.241.0.12 node3
Deployment roles:
- elasticsearch: 10.241.0.10 (master), 10.241.0.11, 10.241.0.12
https://www.elastic.co/cn/products/elasticsearch
Elasticsearch lets you perform and combine many types of searches (structured, unstructured, geo, metric).
- logstash: 10.241.0.10, 10.241.0.11, 10.241.0.12
https://www.elastic.co/cn/products/logstash
Logstash supports a wide variety of inputs and can capture events from many common sources at the same time.
- filebeat: 10.241.0.10, 10.241.0.11, 10.241.0.12
https://www.elastic.co/cn/products/beats/filebeat
Filebeat's built-in modules (auditd, Apache, NGINX, System, MySQL) provide one-click collection, parsing, and visualization of common log formats.
- kibana: 10.241.0.10
https://www.elastic.co/cn/products/kibana
Kibana lets you visualize the data in Elasticsearch and navigate the Elastic Stack.
- kafka: 10.241.0.10, 10.241.0.11, 10.241.0.12
http://kafka.apache.org/
Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website.
For Kafka cluster deployment, see my earlier post: https://www.jianshu.com/p/a9ff97dcfe4e
Installing and deploying ELK
1. Download the packages and verify their integrity
[root@squid ~]# cat /etc/hosts
10.241.0.1 squid
10.241.0.10 node1
10.241.0.11 node2
10.241.0.12 node3
[root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz.sha512
[root@squid ~]# yum install perl-Digest-SHA
[root@squid ~]# shasum -a 512 -c elasticsearch-6.4.0.tar.gz.sha512
elasticsearch-6.4.0.tar.gz: OK
[root@squid ~]# shasum -a 512 -c filebeat-6.4.0-linux-x86_64.tar.gz.sha512
filebeat-6.4.0-linux-x86_64.tar.gz: OK
[root@squid ~]# shasum -a 512 -c kibana-6.4.0-linux-x86_64.tar.gz.sha512
kibana-6.4.0-linux-x86_64.tar.gz: OK
[root@squid ~]# shasum -a 512 -c logstash-6.4.0.tar.gz.sha512
logstash-6.4.0.tar.gz: OK
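To avoid running shasum once per package, a small loop does the same job (a minimal sketch; it assumes all four .sha512 files sit next to their tarballs in the current directory):
[root@squid ~]# for f in *.tar.gz.sha512; do shasum -a 512 -c "$f"; done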
2. Deploy Elasticsearch
1) Ansible inventory
[root@squid ~]# cat /etc/ansible/hosts
[client]
10.241.0.10 es_master=true
10.241.0.11 es_master=false
10.241.0.12 es_master=false
2) Create the elk group and the es user
[root@squid ~]# ansible client -m group -a 'name=elk'
[root@squid ~]# ansible client -m user -a 'name=es group=elk home=/home/es shell=/bin/bash'
3) Extract the Elasticsearch archive onto the target hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/elasticsearch-6.4.0.tar.gz dest=/usr/local owner=es group=elk'
4) Distribute the prepared ES configuration template to every node
[root@squid ~]# cat elasticsearch.yml.j2
# Cluster name and data/log locations
cluster.name: my_es_cluster
node.name: es-{{ansible_hostname}}
path.data: /data/elk/es/data
path.logs: /data/elk/es/logs
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# This node's role in the cluster
node.master: {{es_master}}
node.data: true
# Bind address and transport port
network.host: 0.0.0.0
transport.tcp.port: 9300
# Compress TCP transport traffic
transport.tcp.compress: true
http.port: 9200
# Use unicast discovery to reach the other nodes
discovery.zen.ping.unicast.hosts: ["node1","node2","node3"]
5) Run Ansible to distribute the config file
[root@squid ~]# ansible client -m template -a 'src=/root/elasticsearch.yml.j2 dest=/usr/local/elasticsearch-6.4.0/config/elasticsearch.yml owner=es group=elk'
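A quick way to confirm the per-host es_master variable rendered correctly is to grep the deployed files (a hedged spot-check, not part of the original run; expect true on node1 and false on the others):
[root@squid ~]# ansible client -m shell -a 'grep node.master /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml'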
6) Raise system limits such as the maximum number of open file handles
[root@squid ~]# cat change_system_args.sh
#!/bin/bash
if [ "`grep 65536 /etc/security/limits.conf`" = "" ]
then
cat >> /etc/security/limits.conf << EOF
# End of file
* - nofile 1800000
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
EOF
fi
if [ "`grep 655360 /etc/sysctl.conf`" = "" ]
then
echo "vm.max_map_count=655360" >> /etc/sysctl.conf
fi
7) Run the script via Ansible
[root@squid ~]# ansible client -m script -a '/root/change_system_args.sh'
8) Reboot the target hosts so the new parameters take effect (the hosts drop the SSH connection mid-command, so Ansible reports them UNREACHABLE)
[root@squid ~]# ansible client -m shell -a 'reboot'
10.241.0.11 | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"10.241.0.11\". Make sure this host can be reached over ssh",
"unreachable": true
}
10.241.0.12 | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"10.241.0.12\". Make sure this host can be reached over ssh",
"unreachable": true
}
10.241.0.10 | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"10.241.0.10\". Make sure this host can be reached over ssh",
"unreachable": true
}
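To block until the hosts are reachable again and confirm the new limits took effect, something like the following works (a sketch; the wait_for_connection module requires Ansible 2.3 or later):
[root@squid ~]# ansible client -m wait_for_connection -a 'timeout=300'
[root@squid ~]# ansible client -m shell -a 'ulimit -n; sysctl vm.max_map_count'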
9) Create the elk data directory
[root@squid ~]# ansible client -m file -a 'name=/data/elk/ state=directory owner=es group=elk'
10) Start Elasticsearch
[root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/elasticsearch-6.4.0/bin/elasticsearch -d"'
10.241.0.11 | SUCCESS | rc=0 >>
10.241.0.10 | SUCCESS | rc=0 >>
10.241.0.12 | SUCCESS | rc=0 >>
11) Check that the processes started
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep elasticsearch'
10.241.0.12 | SUCCESS | rc=0 >>
es 3553 1 19 20:35 ? 00:00:48 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.eFvx2dMC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es 3594 3553 0 20:35 ? 00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 3711 3710 0 20:39 ? 00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root 3713 3711 0 20:39 ? 00:00:00 grep elasticsearch
10.241.0.10 | SUCCESS | rc=0 >>
es 4899 1 22 20:35 ? 00:00:54 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.1uRdvBGd -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es 4940 4899 0 20:35 ? 00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 5070 5069 0 20:39 ? 00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root 5072 5070 0 20:39 ? 00:00:00 grep elasticsearch
10.241.0.11 | SUCCESS | rc=0 >>
es 3556 1 19 20:35 ? 00:00:47 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.fnAavDi0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es 3597 3556 0 20:35 ? 00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 3710 3709 0 20:39 ? 00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root 3712 3710 0 20:39 ? 00:00:00 grep elasticsearch
12) Check the cluster status
[root@squid ~]# curl -s http://node1:9200/_nodes/process?pretty |grep -C 5 _nodes
{
"_nodes" : {
"total" : 3,
"successful" : 3,
"failed" : 0
},
"cluster_name" : "my_es_cluster",
3. Deploy Filebeat
1) Distribute the package to the client hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/filebeat-6.4.0-linux-x86_64.tar.gz dest=/usr/local'
2) Rename the installation directory
[root@squid ~]# ansible client -m shell -a 'mv /usr/local/filebeat-6.4.0-linux-x86_64 /usr/local/filebeat-6.4.0'
10.241.0.12 | SUCCESS | rc=0 >>
10.241.0.11 | SUCCESS | rc=0 >>
10.241.0.10 | SUCCESS | rc=0 >>
3) Edit the configuration file
[root@squid ~]# cat filebeat.yml.j2
filebeat.prospectors:
- type: log
paths:
- /var/log/supervisor/kafka
output.kafka:
enabled: true
hosts: ["10.241.0.10:9092","10.241.0.11:9092","10.241.0.12:9092"]
topic: kafka_run_log
## Parameter notes
enabled: turns this output module on
hosts: the Kafka brokers Filebeat ships its data to
topic: the Kafka topic to publish to (important); if the topic does not exist, it is created automatically
4) Distribute it to the clients, backing up the original config file
[root@squid ~]# ansible client -m copy -a 'src=/root/filebeat.yml.j2 dest=/usr/local/filebeat-6.4.0/filebeat.yml backup=yes'
5) Start Filebeat
[root@squid ~]# ansible client -m shell -a '/usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml &'
10.241.0.11 | SUCCESS | rc=0 >>
10.241.0.10 | SUCCESS | rc=0 >>
10.241.0.12 | SUCCESS | rc=0 >>
6) Check the Filebeat processes
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep filebeat| grep -v grep'
10.241.0.12 | SUCCESS | rc=0 >>
root 4890 1 0 22:50 ? 00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
10.241.0.10 | SUCCESS | rc=0 >>
root 6881 1 0 22:50 ? 00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
10.241.0.11 | SUCCESS | rc=0 >>
root 4939 1 0 22:50 ? 00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
7) Check whether the topic was created
[root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 10.241.0.10:2181
ConsumerTest
__consumer_offsets
kafka_run_log # the topic created by Filebeat
topicTest
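To confirm that log events are actually flowing into the topic, and not just that the topic exists, you can tail it with Kafka's console consumer (a hedged sketch; press Ctrl-C to stop):
[root@node1 local]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.241.0.10:9092 --topic kafka_run_log --from-beginning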
4. Deploy Logstash
1) Extract the package onto the target hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/logstash-6.4.0.tar.gz dest=/usr/local owner=es group=elk'
2) Logstash configuration file (named to match the copy step below)
[root@squid ~]# cat logstash.conf.j2
input {
kafka {
type => "kafka-logs"
bootstrap_servers => "10.241.0.10:9092,10.241.0.11:9092,10.241.0.12:9092"
group_id => "logstash"
auto_offset_reset => "earliest"
topics => "kafka_run_log"
consumer_threads => 5
decorate_events => true
}
}
output {
elasticsearch {
index => 'kafka-run-log-%{+YYYY.MM.dd}'
hosts => ["10.241.0.10:9200","10.241.0.11:9200","10.241.0.12:9200"]
}
}
3) Push the Logstash config file to the target hosts with Ansible
[root@squid ~]# ansible client -m copy -a 'src=/root/logstash.conf.j2 dest=/usr/local/logstash-6.4.0/config/logstash.conf owner=es group=elk'
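Before starting Logstash everywhere, the pushed config can be syntax-checked in place (a hedged sketch; --config.test_and_exit parses the config and exits without starting the pipeline):
[root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/config/logstash.conf --config.test_and_exit"'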
4) Start Logstash
[root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/config/logstash.conf &"'
5) Check the Logstash processes
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep logstash|grep -v grep'
10.241.0.11 | SUCCESS | rc=0 >>
es 6040 1 99 23:39 ? 00:02:11 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
10.241.0.12 | SUCCESS | rc=0 >>
es 5970 1 99 23:39 ? 00:02:13 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
10.241.0.10 | SUCCESS | rc=0 >>
es 9095 1 98 23:39 ? 00:02:10 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
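Once Logstash has consumed a few events, the daily index should show up in Elasticsearch (a hedged check, not in the original run; the index name carries the current date):
[root@squid ~]# curl -s 'http://node1:9200/_cat/indices?v' | grep kafka-run-log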
5. Deploy Kibana
1) Copy the package to node1
[root@squid ~]# scp kibana-6.4.0-linux-x86_64.tar.gz root@10.241.0.10:/root
kibana-6.4.0-linux-x86_64.tar.gz 100% 179MB 59.7MB/s 00:03
2) Extract Kibana
[root@node1 ~]# tar -zxf kibana-6.4.0-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# mv /usr/local/kibana-6.4.0-linux-x86_64/ /usr/local/kibana-6.4.0
3) Edit the configuration file
[root@node1 ~]# cat /usr/local/kibana-6.4.0/config/kibana.yml
server.port: 5601
server.host: "10.241.0.10"
kibana.index: ".kibana
4) Start Kibana (in the foreground)
[root@node1 ~]# /usr/local/kibana-6.4.0/bin/kibana
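Running in the foreground is fine for a smoke test; for anything longer-lived, a simple background start looks like this (a minimal sketch; the log path is an arbitrary choice):
[root@node1 ~]# nohup /usr/local/kibana-6.4.0/bin/kibana > /var/log/kibana.log 2>&1 &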
5) Access Kibana
http://10.241.0.10:5601
6) Add an index pattern
Management -> Index Patterns (under the Kibana heading) -> create the index pattern (e.g. kafka-run-log-*)
7) Send a message to the kafka_run_log topic and check that it shows up in Kibana
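A quick way to generate such a test event is Kafka's console producer (a hedged sketch; each line you type becomes one message, which should then appear in Kibana's Discover view):
[root@node1 ~]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 10.241.0.10:9092 --topic kafka_run_log
>test message for kibana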





Author: baiyongjie
Link: https://www.jianshu.com/p/d072a55aa844
Source: 简书 (Jianshu)
Copyright belongs to the author; for reproduction in any form, please contact the author for authorization and cite the source.