Earlier posts cover the ELK installation and configuration steps in detail; this post builds on that setup with modifications.

Edit the Filebeat configuration file and start it

[root@topcheer filebeat-6.2.3-linux-x86_64]# vim filebeat.yml
[root@topcheer filebeat-6.2.3-linux-x86_64]# ll
total 50772
drwxr-x---. 2 root root 39 Dec 2 13:57 data
-rw-r--r--. 1 root root 44384 Mar 13 2018 fields.yml
-rwxr-xr-x. 1 root root 49058867 Mar 13 2018 filebeat
-rw-r--r--. 1 root root 1887159 Dec 3 17:47 filebeat-7-5-0
-rw-r-----. 1 root root 52193 Mar 13 2018 filebeat.reference.yml
-rw-------. 1 root root 7299 Dec 3 17:58 filebeat.yml
drwxrwxr-x. 4 wgr wgr 24 Mar 13 2018 kibana
-rw-r--r--. 1 root root 583 Mar 13 2018 LICENSE.txt
drwxr-xr-x. 14 wgr wgr 179 Mar 13 2018 module
drwxr-xr-x. 2 root root 4096 Mar 13 2018 modules.d
-rw-------. 1 root root 604101 Dec 3 17:58 nohup.out
-rw-r--r--. 1 root root 198236 Mar 13 2018 NOTICE.txt
-rw-r--r--. 1 root root 802 Mar 13 2018 README.md
[root@topcheer filebeat-6.2.3-linux-x86_64]# rm -rf nohup.out
[root@topcheer filebeat-6.2.3-linux-x86_64]# nohup ./filebeat -e -c filebeat.yml &
[1] 66345
[root@topcheer filebeat-6.2.3-linux-x86_64]# nohup: ignoring input and appending output to 'nohup.out'
[root@topcheer filebeat-6.2.3-linux-x86_64]# ll
total 50072
drwxr-x---. 2 root root 39 Dec 3 17:58 data
-rw-r--r--. 1 root root 44384 Mar 13 2018 fields.yml
-rwxr-xr-x. 1 root root 49058867 Mar 13 2018 filebeat
-rw-r--r--. 1 root root 1887159 Dec 3 17:47 filebeat-7-5-0
-rw-r-----. 1 root root 52193 Mar 13 2018 filebeat.reference.yml
-rw-------. 1 root root 7299 Dec 3 17:58 filebeat.yml
drwxrwxr-x. 4 wgr wgr 24 Mar 13 2018 kibana
-rw-r--r--. 1 root root 583 Mar 13 2018 LICENSE.txt
drwxr-xr-x. 14 wgr wgr 179 Mar 13 2018 module
drwxr-xr-x. 2 root root 4096 Mar 13 2018 modules.d
-rw-------. 1 root root 1708 Dec 3 17:58 nohup.out
-rw-r--r--. 1 root root 198236 Mar 13 2018 NOTICE.txt
-rw-r--r--. 1 root root 802 Mar 13 2018 README.md
[root@topcheer filebeat-6.2.3-linux-x86_64]# tail -200f nohup.out
2019-12-03T17:58:50.916+0800 INFO instance/beat.go:468 Home path: [/mnt/filebeat-6.2.3-linux-x86_64] Config path: [/mnt/filebeat-6.2.3-linux-x86_64] Data path: [/mnt/filebeat-6.2.3-linux-x86_64/data] Logs path: [/mnt/filebeat-6.2.3-linux-x86_64/logs]
2019-12-03T17:58:50.926+0800 INFO instance/beat.go:475 Beat UUID: 6e3ca243-535f-4f7b-946d-c1172536d8f5
2019-12-03T17:58:50.926+0800 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2019-12-03T17:58:50.928+0800 INFO pipeline/module.go:76 Beat name: topcheer
2019-12-03T17:58:50.980+0800 INFO instance/beat.go:301 filebeat start running.
2019-12-03T17:58:50.981+0800 INFO registrar/registrar.go:108 Loading registrar data from /mnt/filebeat-6.2.3-linux-x86_64/data/registry
2019-12-03T17:58:50.981+0800 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2019-12-03T17:58:50.993+0800 INFO registrar/registrar.go:119 States Loaded from registrar: 2
2019-12-03T17:58:50.993+0800 WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.

Filebeat configuration (filebeat.yml)

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/*.log
    #- c:\programdata\elasticsearch\logs\*

output.redis:
  # The Redis hosts
  hosts: ["192.168.180.113:6379"]
  key: "nginx-log"
  db: 0
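Filebeat does not push raw log lines to Redis; each harvested line is wrapped in a JSON event and appended to the list named by `key`. The sketch below is only illustrative (field names follow the 6.x event layout; the timestamp and nginx line are made up), not Filebeat's actual output:

```python
import json

# Illustrative sketch of the envelope Filebeat 6.x pushes onto the Redis list
# "nginx-log" (field names follow the 6.x event layout; values are made up).
event = {
    "@timestamp": "2019-12-03T10:00:00.000Z",
    "beat": {"hostname": "topcheer", "version": "6.2.3"},
    "source": "/var/log/nginx/access.log",
    "message": '192.168.180.1 - - [03/Dec/2019:18:00:00 +0800] "GET / HTTP/1.1" 200 612',
}
payload = json.dumps(event)  # the string that lands in the Redis list

# Logstash's redis input (data_type => "list") pops these strings back off,
# and its codec/filter chain turns them back into events.
decoded = json.loads(payload)
print(decoded["source"], decoded["beat"]["version"])  # -> /var/log/nginx/access.log 6.2.3
```

This is why Logstash, not Filebeat, needs the parsing logic: the envelope arrives as a plain JSON string on the list.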

Start Logstash

[root@topcheer logstash-6.2.3]# vim redis.conf
[root@topcheer logstash-6.2.3]# rm -rf nohup.out
[root@topcheer logstash-6.2.3]# nohup bin/logstash -f redis.conf &
[14] 37766
[root@topcheer logstash-6.2.3]# nohup: ignoring input and appending output to 'nohup.out'
[root@topcheer logstash-6.2.3]# tail -200f nohup.out
Sending Logstash's logs to /mnt/logstash-6.2.3/logs which is now configured via log4j2.properties
[2019-12-03T18:03:42,080][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/mnt/logstash-6.2.3/modules/fb_apache/configuration"}
[2019-12-03T18:03:42,268][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/mnt/logstash-6.2.3/modules/netflow/configuration"}
[2019-12-03T18:03:45,727][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-12-03T18:03:52,276][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2019-12-03T18:03:54,771][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-12-03T18:03:59,664][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-12-03T18:04:00,579][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2019-12-03T18:04:00,596][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2019-12-03T18:04:01,025][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-12-03T18:04:01,219][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-12-03T18:04:01,224][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}

Logstash configuration file

[root@topcheer logstash-6.2.3]# cat redis.conf
input {
  redis {
    host => "192.168.180.113"
    port => 6379  # the original file left this empty; 6379 is the Redis default and matches the Filebeat output above
    data_type => "list"
    key => "nginx-log"
    type => "redis-input"
    codec => plain {
      charset => "UTF-8"
    }
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    codec => "json"
  }
}
[root@topcheer logstash-6.2.3]#
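The `json` filter above parses the event's `message` field as JSON and merges the resulting keys into the event; events whose message is not valid JSON (e.g. nginx's default combined format, if nginx is not configured with a JSON `log_format`) get tagged `_jsonparsefailure` and pass through otherwise unchanged. A rough Python model of that behavior (not Logstash code):

```python
import json

# Rough model of `filter { json { source => "message" } }`: parse the message
# field as JSON and merge its keys into the event; on failure, tag the event.
def apply_json_filter(event):
    try:
        event.update(json.loads(event["message"]))
    except (ValueError, KeyError, TypeError):
        event.setdefault("tags", []).append("_jsonparsefailure")
    return event

ok = apply_json_filter({"message": '{"remote_addr": "192.168.180.1", "status": 200}'})
bad = apply_json_filter({"message": "plain nginx combined-format line"})
print(ok["status"], bad["tags"])  # -> 200 ['_jsonparsefailure']
```

So for this pipeline to populate structured fields in Elasticsearch, nginx should log JSON; otherwise the raw line still arrives, just unparsed.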

Start Kibana

[root@topcheer kibana-6.2.3-linux-x86_64]# rm -rf nohup.out
[root@topcheer kibana-6.2.3-linux-x86_64]# nohup bin/kibana &
[16] 37870
[root@topcheer kibana-6.2.3-linux-x86_64]# nohup: ignoring input and appending output to 'nohup.out'
[root@topcheer kibana-6.2.3-linux-x86_64]# ll
total 1164
drwxr-xr-x 2 wgr wgr 64 Mar 13 2018 bin
drwxrwxr-x 2 wgr wgr 24 Dec 2 11:01 config
drwxrwxr-x 2 wgr wgr 18 Sep 21 23:35 data
-rw-rw-r-- 1 wgr wgr 562 Mar 13 2018 LICENSE.txt
drwxrwxr-x 6 wgr wgr 108 Mar 13 2018 node
drwxrwxr-x 906 wgr wgr 28672 Mar 13 2018 node_modules
-rw------- 1 root root 0 Dec 3 18:05 nohup.out
-rw-rw-r-- 1 wgr wgr 1129761 Mar 13 2018 NOTICE.txt
drwxrwxr-x 3 wgr wgr 45 Mar 13 2018 optimize
-rw-rw-r-- 1 wgr wgr 721 Mar 13 2018 package.json
drwxrwxr-x 2 wgr wgr 6 Mar 13 2018 plugins
-rw-rw-r-- 1 wgr wgr 4772 Mar 13 2018 README.txt
drwxr-xr-x 15 wgr wgr 225 Mar 13 2018 src
drwxrwxr-x 5 wgr wgr 47 Mar 13 2018 ui_framework
drwxr-xr-x 2 wgr wgr 290 Mar 13 2018 webpackShims
[root@topcheer kibana-6.2.3-linux-x86_64]# tail -200f nohup.out
{"type":"log","@timestamp":"2019-12-03T10:06:46Z","tags":["status","plugin:kibana@6.2.3","info"],"pid":37870,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-12-03T10:06:46Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":37870,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-12-03T10:06:47Z","tags":["status","plugin:console@6.2.3","info"],"pid":37870,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-12-03T10:06:47Z","tags":["status","plugin:timelion@6.2.3","info"],"pid":37870,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-12-03T10:06:48Z","tags":["status","plugin:metrics@6.2.3","info"],"pid":37870,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-12-03T10:06:48Z","tags":["listening","info"],"pid":37870,"message":"Server running at http://192.168.180.113:5601"}
{"type":"log","@timestamp":"2019-12-03T10:06:50Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":37870,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"response","@timestamp":"2019-12-03T10:07:46Z","tags":[],"pid":37870,"method":"get","statusCode":200,"req":{"url":"/","method":"get","headers":{"host":"192.168.180.113:5601","connection":"keep-alive","upgrade-insecure-requests":"","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3","accept-encoding":"gzip, deflate","accept-language":"zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7"},"remoteAddress":"192.168.180.1","userAgent":"192.168.180.1"},"res":{"statusCode":200,"responseTime":178,"contentLength":9},"message":"GET / 200 178ms - 9.0B"}
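Kibana writes one JSON object per log line, so its health can be checked straight from nohup.out by parsing the status entries. A small sketch (the sample line is abridged from the output above):

```python
import json

# Each Kibana log line is a JSON object; status entries carry the plugin name
# in tags[1] and the health colour in "state". Sample abridged from nohup.out.
line = ('{"type":"log","@timestamp":"2019-12-03T10:06:50Z",'
        '"tags":["status","plugin:elasticsearch@6.2.3","info"],'
        '"pid":37870,"state":"green",'
        '"message":"Status changed from yellow to green - Ready"}')
entry = json.loads(line)
if entry["type"] == "log" and "status" in entry["tags"]:
    print(entry["tags"][1], "->", entry["state"])  # -> plugin:elasticsearch@6.2.3 -> green
```

Once `plugin:elasticsearch` reports green and the "Server running" line appears, the UI at port 5601 is ready.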

To test, issue several requests and confirm the log entries flow through Redis and Logstash into Kibana.
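If driving real browser traffic is inconvenient, appending synthetic nginx-style lines to a watched log file exercises the same pipeline, since Filebeat ships whatever lands in the file. The path and line format below are illustrative (writing to /tmp rather than /var/log/nginx keeps the sketch harmless):

```python
import datetime
import random

# Append synthetic nginx-style access lines; Filebeat tails the file and ships
# them like real traffic. Path and line format are illustrative, not from the post.
LOG_PATH = "/tmp/access.log"

def fake_access_line():
    ts = datetime.datetime.now().strftime("%d/%b/%Y:%H:%M:%S +0800")
    status = random.choice([200, 200, 200, 404, 500])
    return f'192.168.180.1 - - [{ts}] "GET / HTTP/1.1" {status} 612'

with open(LOG_PATH, "a") as f:
    for _ in range(10):
        f.write(fake_access_line() + "\n")
print("appended 10 lines to", LOG_PATH)
```

Point a `paths` entry in filebeat.yml at the file you write to, then watch the `nginx-log` key in Redis drain as Logstash consumes it.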
