How to collect data with FILEBEAT, LOGSTASH, ES, and KIBANA
General reference: https://www.olinux.org.cn/elk/1157.html
Official documentation: https://www.elastic.co/guide/en/beats/filebeat/6.2/filebeat-getting-started.html

Step 1: Start ES. Installing Elasticsearch is not covered here; please look it up yourself.
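Before moving on, it is worth confirming that Elasticsearch is actually reachable. A quick check, assuming ES listens on 192.168.2.105:9200 as in the Logstash config below:

curl http://192.168.2.105:9200

A JSON reply containing the cluster name and version number means ES is up.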
Step 2: Start LOGSTASH. Installing Logstash is not covered here; please look it up yourself.
Start command: ../bin/logstash -f logstash.slowquery.conf
Configure the logstash.slowquery.conf file as follows (a sample log entry and a syntax-check command follow the config):
input {
  beats {
    host => "192.168.2.105"
    port => 5044
  }
}
filter {

  # Parse one multi-line slow-query entry into structured fields.
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\]\s*Id: %{NUMBER:id:int}\s+# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)" ]
  }
  # Tag events that carry the "# Time: " header so they can be dropped below.
  grok {
    match => { "message" => "# Time: " }
    add_tag => [ "drop" ]
    tag_on_failure => []
  }
  if "drop" in [tags] {
    drop {}
  }
  # Use the SET timestamp value (a UNIX timestamp) as the event time.
  date {
    match => [ "timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss" ]
    remove_field => [ "timestamp" ]
  }

}
output {
  elasticsearch {
    hosts => "192.168.2.105:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
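For reference, the grok pattern above is written for MySQL slow-query entries that look roughly like the following (the values are purely illustrative):

# Time: 2018-04-15T10:23:45.123456Z
# User@Host: root[root] @ localhost [] Id: 12
# Query_time: 3.141592  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 500000
SET timestamp=1523787825;
SELECT count(*) FROM orders WHERE status = 'NEW';

The first grok extracts query_user, query_time, lock_time, rows_sent, rows_examined and the statement itself; the second grok only tags the events that contain the "# Time: " header so the drop filter can discard them. Before restarting Logstash, the file can also be syntax-checked without starting a pipeline, e.g.:

../bin/logstash -f logstash.slowquery.conf --config.test_and_exit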

Step 3: Start filebeat. Start command: ./filebeat -e -c filebeat.yml -d "publish"
Edit filebeat.yml. The important points are: the type must be log, enabled must be true, and document_type: mysqlslowquerylog.
-- Configure the filebeat.prospectors section as shown below (a short illustration of the multiline behaviour follows it).
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/mysql/*.log
    #- c:\programdata\elasticsearch\logs\*

  document_type: mysqlslowquerylog

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Every line that does not start with "# User@Host: " is treated as a continuation,
  # so each slow-query entry becomes a single multi-line event.
  multiline.pattern: "^# User@Host: "

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  multiline.match: after
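With this combination (negate: true, match: after), every line that does not start with "# User@Host: " is appended to the most recent line that did, so each slow-query entry (the header lines, SET timestamp and the SQL statement) reaches Logstash as a single event, which is what the (?m) multiline grok pattern above expects. Schematically:

# User@Host: ...      <- starts a new event
# Query_time: ...     <- appended to it
SET timestamp=...;    <- appended to it
SELECT ...;           <- appended to it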

-- Configure the output section as shown below
#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.2.105:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
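Once filebeat.yml is saved, both the configuration and the connection to Logstash can be sanity-checked before starting collection. A minimal check, assuming a 6.x filebeat binary in the current directory:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml

The first command validates the YAML; the second attempts to connect to 192.168.2.105:5044 and reports whether the Logstash endpoint is reachable.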

Step 4: Start Kibana with the command ./kibana
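If Kibana does not run on the same host as Elasticsearch, point it at the ES instance in kibana.yml first. A minimal sketch, assuming the 6.x setting names and the addresses used above:

server.host: "192.168.2.105"
elasticsearch.url: "http://192.168.2.105:9200"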

Step 5: Open 192.168.2.105:5601 (Kibana's default port) in a browser and finish the setup in the web UI, e.g. create an index pattern for the new filebeat-* indices.
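Before creating an index pattern it can help to confirm that the slow-query index exists at all, e.g. via the _cat API (assuming the same ES address as above):

curl 'http://192.168.2.105:9200/_cat/indices?v'

An index whose name starts with filebeat- (per the index setting in the Logstash output) should appear once at least one slow query has been written to the log.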
