1. Setting Up an ElasticSearch + FileBeat + Kibana Platform

A C# program runs and writes its logs (xxx.log text files) to the path that FileBeat is configured to watch.

For setting up the platform itself, see my earlier posts.

The FileBeat configuration is as follows:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
  #- E:\filebeat-6.6.2-windows-x86_64\data\logstash-tutorial.log\*.log
  #- c:\programdata\elasticsearch\logs\*
  paths:
    - E:\ELKLog\log\*.log

  #- type: redis
  #hosts: ["localhost:6379"]
  #password: "hy900511@"

  # Change to true to enable this input configuration.
  enabled: true

  #scan_frequency: 5s

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

At this point, all of the above services (Elasticsearch, Kibana, and Filebeat) are up and running.
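For reference, Filebeat on Windows can be started in the foreground from its install directory (-e logs to stderr, -c names the configuration file shown above):

.\filebeat.exe -e -c filebeat.yml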

Then the C# program is run, writing its logs to E:\ELKLog\log\.

[Note: Filebeat detects log changes line by line, so write each complete log message as a single line.]
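To make that concrete, here is a minimal C# sketch of the writing side (the file name app.log, the Time=/Level=/Message= format, and the LogWriter class are illustrative choices of mine, not anything Filebeat prescribes):

using System;
using System.IO;

class LogWriter
{
    // Directory watched by the Filebeat input above (paths: E:\ELKLog\log\*.log).
    private const string LogDir = @"E:\ELKLog\log";

    static void Main()
    {
        Directory.CreateDirectory(LogDir);
        string file = Path.Combine(LogDir, "app.log");

        // Keep the whole entry on ONE line: without multiline options,
        // Filebeat turns every line of the file into a separate event.
        string entry = $"Time={DateTime.Now:yyyy-MM-dd HH:mm:ss} Level=INFO Message=service started";
        File.AppendAllText(file, entry + Environment.NewLine);
    }
}

Because each entry is appended as a single line, Filebeat ships it as exactly one event.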

In Kibana you can then see the collected entries.

Splitting the _source field into separate fields requires Logstash.
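As a rough preview of that step (not part of this post's setup; the grok patterns and field names below are illustrative and assume entries of the form Time=... Level=... Message=...), a Logstash filter could look like:

filter {
  grok {
    match => { "message" => "Time=%{TIMESTAMP_ISO8601:time} Level=%{LOGLEVEL:level} Message=%{GREEDYDATA:msg}" }
  }
}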

2. Configuring Filebeat Multiline Merging

Some error logs (stack traces, for example) wrap across multiple lines. Without further configuration, what is really a single log entry shows up in ES as several separate entries.

So I configured the multiline options:

The file is identical to the configuration above except for the multiline options inside the log input, which are now enabled:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  paths:
    - E:\ELKLog\log\*.log

  enabled: true

  ### Multiline options

  # The regexp pattern that has to be matched. Every line that starts a new
  # log entry in my logs begins with "Time".
  multiline.pattern: '^Time'

  # Negate the pattern: lines that do NOT start with "Time" are treated as
  # continuation lines.
  multiline.negate: true

  # Append continuation lines to the preceding line that matched the pattern.
  # Note: "after" is the equivalent of "previous" in Logstash, "before" of "next".
  multiline.match: after

  # Optional limits on how many lines are merged and how long Filebeat waits
  # before flushing a merged event.
  #multiline.max_lines: 500
  #multiline.timeout: 5s

(The rest of the file, covering modules, template settings, Kibana, outputs, processors, logging, and X-Pack monitoring, is unchanged from the first configuration.)

This configuration says: a line that starts with Time begins a complete log entry, and it is merged with the following lines that do not start with Time into a single entry. (I chose this pattern because every message in my logs begins with Time=.)
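For example, a hypothetical exception entry such as:

Time=2019-04-01 12:00:00 Level=ERROR Message=unhandled exception
System.NullReferenceException: Object reference not set to an instance of an object.
   at MyApp.Program.Main()

is shipped as one event: the first line matches ^Time, and the two continuation lines, which do not match, are appended to it (multiline.negate: true combined with multiline.match: after).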

Reference: "How to configure the filebeat file correctly (including multiline, input_type, etc.)".

The result: in Kibana, each multi-line error message now appears as a single document.
