1. Building the platform with ElasticSearch + FileBeat + Kibana

A C# program runs and writes its logs (xxx.log text files) to the path configured in FileBeat.

For setting up the platform itself, see the earlier posts.

The FileBeat configuration is as follows:

    ###################### Filebeat Configuration Example #########################

    # This file is an example configuration file highlighting only the most common
    # options. The filebeat.reference.yml file from the same directory contains all the
    # supported options with more comments. You can use it as a reference.
    #
    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html

    # For more available modules and options, please see the filebeat.reference.yml sample
    # configuration file.

    #=========================== Filebeat inputs =============================

    filebeat.inputs:

    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.

    - type: log

      # Paths that should be crawled and fetched. Glob based paths.
      #paths:
      #- E:\filebeat-6.6.2-windows-x86_64\data\logstash-tutorial.log\*.log
      #- c:\programdata\elasticsearch\logs\*

      paths:
        - E:\ELKLog\log\*.log
      #- type: redis
      #hosts: ["localhost:6379"]
      #password: "hy900511@"

      # Change to true to enable this input configuration.
      enabled: true

      #scan_frequency: 5s

      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list.
      #exclude_lines: ['^DBG']

      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list.
      #include_lines: ['^ERR', '^WARN']

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      #exclude_files: ['.gz$']

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1

      ### Multiline options

      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation

      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      #multiline.pattern: ^\[

      # Defines if the pattern set under pattern should be negated or not. Default is false.
      #multiline.negate: false

      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
      #multiline.match: after

    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: false

      # Period on which files under path should be checked for changes
      #reload.period: 10s

    #==================== Elasticsearch template setting ==========================

    setup.template.settings:
      index.number_of_shards: 3
      #index.codec: best_compression
      #_source.enabled: false

    #================================ General =====================================

    # The name of the shipper that publishes the network data. It can be used to group
    # all the transactions sent by a single shipper in the web interface.
    #name:

    # The tags of the shipper are included in their own field with each
    # transaction published.
    #tags: ["service-X", "web-tier"]

    # Optional fields that you can specify to add additional information to the
    # output.
    #fields:
    #  env: staging

    #============================== Dashboards =====================================
    # These settings control loading the sample dashboards to the Kibana index. Loading
    # the dashboards is disabled by default and can be enabled either by setting the
    # options here, or by using the `-setup` CLI flag or the `setup` command.
    #setup.dashboards.enabled: false

    # The URL from where to download the dashboards archive. By default this URL
    # has a value which is computed based on the Beat name and version. For released
    # versions, this URL points to the dashboard archive on the artifacts.elastic.co
    # website.
    #setup.dashboards.url:

    #============================== Kibana =====================================

    # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
    # This requires a Kibana endpoint configuration.
    setup.kibana:

      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"

      # Kibana Space ID
      # ID of the Kibana Space into which the dashboards should be loaded. By default,
      # the Default Space will be used.
      #space.id:

    #============================= Elastic Cloud ==================================

    # These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

    # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
    # `setup.kibana.host` options.
    # You can find the `cloud.id` in the Elastic Cloud web UI.
    #cloud.id:

    # The cloud.auth setting overwrites the `output.elasticsearch.username` and
    # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
    #cloud.auth:

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.

    #-------------------------- Elasticsearch output ------------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["localhost:9200"]

      # Enabled ilm (beta) to use index lifecycle management instead daily indices.
      #ilm.enabled: false

      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"

    #----------------------------- Logstash output --------------------------------
    #output.logstash:
      # The Logstash hosts
      #hosts: ["localhost:5044"]

      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"

    #================================ Processors =====================================

    # Configure processors to enhance or manipulate events generated by the beat.

    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~

    #================================ Logging =====================================

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    #logging.level: debug

    # At debug level, you can selectively enable logging only for some components.
    # To enable all selectors use ["*"]. Examples of other selectors are "beat",
    # "publish", "service".
    #logging.selectors: ["*"]

    #============================== Xpack Monitoring ===============================
    # filebeat can export internal metrics to a central Elasticsearch monitoring
    # cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    #xpack.monitoring.enabled: false

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well. Any setting that is not set is
    # automatically inherited from the Elasticsearch output configuration, so if you
    # have the Elasticsearch output configured, you can simply uncomment the
    # following line.
    #xpack.monitoring.elasticsearch:

All of the above services are up and running.

Then the C# program is executed and writes its logs to E:\ELKLog\log\.

[Note: FileBeat detects log changes line by line, so write each complete log message as a single line.]
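For illustration, here is a minimal C# sketch of a logger that writes entries this way; the class name, file-naming scheme, and the Level=/Msg= fields are assumptions rather than the actual program used in this post (only the Time= prefix matches the log format described later).

    using System;
    using System.IO;

    // Minimal sketch (assumed, not the original program): append one log entry per line
    // so that FileBeat treats each line as one event.
    public static class SimpleFileLogger
    {
        private const string LogDir = @"E:\ELKLog\log";

        public static void Write(string level, string message)
        {
            Directory.CreateDirectory(LogDir);

            // One file per day; the file-naming scheme is an assumption.
            string path = Path.Combine(LogDir, $"app-{DateTime.Now:yyyyMMdd}.log");

            // Keep the whole entry on a single line: strip embedded line breaks.
            string line = $"Time={DateTime.Now:yyyy-MM-dd HH:mm:ss} Level={level} " +
                          $"Msg={message.Replace(Environment.NewLine, " ")}";

            File.AppendAllText(path, line + Environment.NewLine);
        }
    }

A call such as SimpleFileLogger.Write("Info", "order 1001 created") appends a single line, which FileBeat ships to Elasticsearch as one event.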

The collected entries then show up in Kibana; the whole log line lands in a single message field inside _source.

Splitting the _source field into separate fields requires Logstash.

2. Configuring FileBeat multi-line merging

Some error logs wrap onto multiple lines automatically. Without this configuration, what is really one log entry shows up in ES as several documents.

So the multiline options are enabled on the log input. Only the changed input section is shown below; the rest of filebeat.yml is identical to the configuration in section 1:

    filebeat.inputs:

    - type: log

      paths:
        - E:\ELKLog\log\*.log

      # Change to true to enable this input configuration.
      enabled: true

      ### Multiline options

      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation

      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      multiline.pattern: '^Time'

      # Defines if the pattern set under pattern should be negated or not. Default is false.
      multiline.negate: true

      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
      multiline.match: after
      # max_lines: 500
      # timeout: 5s

This configuration means:

A line starting with Time marks the beginning of a complete log entry; that line plus the following lines that do not start with Time are merged into a single entry. [Every message in my logs begins with Time=, which is why the pattern is configured this way.]
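As a hedged example (not the original code), an error entry written like the sketch below spans several lines because the exception stack trace contains line breaks; with multiline.pattern: '^Time', multiline.negate: true, and multiline.match: after, the Time= line and the stack-trace lines that follow it are folded back into one event:

    using System;
    using System.IO;

    // Sketch only: an error entry whose stack trace spans multiple lines.
    // The first line starts with "Time=", so the multiline settings above
    // merge the following non-matching lines into the same event.
    public static class ErrorLogger
    {
        public static void WriteError(Exception ex)
        {
            string entry = $"Time={DateTime.Now:yyyy-MM-dd HH:mm:ss} Level=Error Msg={ex.Message}"
                           + Environment.NewLine
                           + ex.ToString();   // multi-line continuation of the same entry

            File.AppendAllText(@"E:\ELKLog\log\app-error.log", entry + Environment.NewLine);
        }
    }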

Reference: how to correctly configure the filebeat file (including multiline, input_type, and so on).

The result: multi-line error messages now appear in Kibana as single entries.
