Logstash is an open-source data collection engine with real-time pipelining capabilities. In short, Logstash acts as a bridge between data sources and data storage/analysis tools; combined with Elasticsearch and Kibana, it greatly simplifies data processing and analysis. With more than 200 plugins, Logstash can ingest data from almost any source, including logs, network requests, relational databases, sensors, and IoT devices.

On Linux, Logstash can be installed through a package manager, for example apt on Ubuntu or yum on CentOS. Alternatively, you can download a binary distribution for your platform.

Trying Out a Pipeline

The most basic Logstash pipeline is shown in the figure below. It requires two mandatory stages, input and output, plus an optional filter stage.
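In configuration-file form, the three stages look like this. This is a minimal sketch, not from the original article; the mutate filter shown is only an illustration of where the optional stage goes:

```conf
# Minimal Logstash pipeline: read stdin, optionally transform, write stdout.
input {
  stdin { }
}

# The filter stage is optional; this one merely tags every event.
filter {
  mutate {
    add_field => { "pipeline" => "demo" }
  }
}

output {
  stdout { codec => rubydebug }
}
```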

Installing Logstash

For the Elastic Stack, it is best not to install or run components as the root user.

1. Create a regular user (dyh)

  1. useradd dyh -d /home/dyh -m

-d: specify the user's home directory;

-m: create the home directory if it does not exist.

Set the user's password:

  1. passwd dyh
  2. # Changing password for user dyh.
  3. # New password:
  4. # BAD PASSWORD: the password is a palindrome
  5. # Retype new password:
  6. # passwd: all authentication tokens updated successfully.

2. Download the Logstash package (logstash-6.2.3.tar.gz) and extract it. Download link: https://pan.baidu.com/s/10MUE4tDqsKbHWNzL1Pbl7w password: hwyh

  1. tar zxvf logstash-6.2.3.tar.gz

3. Quick test: change into the bin directory of the extracted package and run the following command:

  1. ./logstash -e 'input { stdin{} } output { stdout{} }'
  2. Exception in thread "main" java.lang.UnsupportedClassVersionError: org/logstash/Logstash : Unsupported major.minor version 52.0
  3. at java.lang.ClassLoader.defineClass1(Native Method)
  4. at java.lang.ClassLoader.defineClass(ClassLoader.java:)
  5. at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:)
  6. at java.net.URLClassLoader.defineClass(URLClassLoader.java:)
  7. at java.net.URLClassLoader.access$(URLClassLoader.java:)
  8. at java.net.URLClassLoader$.run(URLClassLoader.java:)
  9. at java.net.URLClassLoader$.run(URLClassLoader.java:)
  10. at java.security.AccessController.doPrivileged(Native Method)
  11. at java.net.URLClassLoader.findClass(URLClassLoader.java:)
  12. at java.lang.ClassLoader.loadClass(ClassLoader.java:)
  13. at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:)
  14. at java.lang.ClassLoader.loadClass(ClassLoader.java:)
  15. at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:)
  16. [dyh@centos74 bin]$ java -version
  17. java version "1.7.0_79"
  18. Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
  19. Java HotSpot(TM) -Bit Server VM (build 24.79-b02, mixed mode)

If you see the error "Unsupported major.minor version 52.0", the JDK is too old: class file version 52 corresponds to Java 8, so the JDK needs to be upgraded.
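A provisioning script can guard against this by parsing the Java version before launching Logstash. The helper below is a sketch (the function name `java_major` is made up for illustration); it maps both the legacy `1.x` scheme and the modern scheme to a single major number:

```shell
#!/bin/sh
# Map a Java version string to its major release number:
# "1.7.0_79" -> 7, "1.8.0_131" -> 8, "9.0.4" -> 9.
java_major() {
  case "$1" in
    1.*) echo "$1" | cut -d. -f2 ;;   # legacy scheme: 1.<major>.x
    *)   echo "$1" | cut -d. -f1 ;;   # modern scheme: <major>.x.y
  esac
}

ver=$(java_major "1.8.0_131")
if [ "$ver" -ge 8 ]; then
  echo "Java $ver is new enough for Logstash 6.x"
fi
```

The `-ge 8` check reflects the class-file version in the error above (52 = Java 8).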

After upgrading the JDK, run the following commands (startup may take a moment):

  1. java -version
  2. java version "1.8.0_131"
  3. Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
  4. Java HotSpot(TM) -Bit Server VM (build 25.131-b11, mixed mode)
  5. [dyh@centos74 bin]$ ./logstash -e 'input { stdin{} } output { stdout{} }'

The -e flag lets us supply the configuration directly on the command line instead of pointing to a configuration file with -f. Once you see that the main pipeline has started, Logstash is running; type some text at the prompt and Logstash will echo it back with a timestamp and the host name (IP). This is Logstash's most basic mode of operation: accept input, transform it, and send it to an output.

  1. [dyh@centos74 bin]$ ./logstash -e 'input { stdin{} } output { stdout{} }'
  2. Sending Logstash's logs to /home/dyh/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
  3. [--13T09::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
  4. [--13T09::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/netflow/configuration"}
  5. [--13T09::,][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/home/dyh/ELK/logstash-6.2.3/data/queue"}
  6. [--13T09::,][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/home/dyh/ELK/logstash-6.2.3/data/dead_letter_queue"}
  7. [--13T09::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  8. [--13T09::,][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"da1c26ee-45d0-419e-b305-3e0f3c0d852a", :path=>"/home/dyh/ELK/logstash-6.2.3/data/uuid"}
  9. [--13T09::,][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
  10. [--13T09::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
  11. [--13T09::,][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
  12. [--13T09::,][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x236d5f92 run>"}
  13. The stdin plugin is now waiting for input:

Output like the above indicates a successful start; Logstash is now waiting for terminal input.

Type hello world and press Enter.

  1. hello World
  2. --13T01::.703Z localhost hello World

Logstash adds the timestamp and host name (IP) to the message and prints it to the terminal.

Processing Logs: Receiving Filebeat Output

The Logstash ecosystem includes Filebeat for log collection. Filebeat is a lightweight log shipper that can also sense how busy the downstream Logstash instance is and throttle its sending rate accordingly.

First download Filebeat; after extracting it, it is ready to use. Open filebeat.yml and modify it as follows:

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

...

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.106.20:5044"]

After configuring Filebeat, wait for Logstash to start before starting Filebeat. (On Filebeat 6.x, ./filebeat test config -c filebeat.yml can be used to check the file beforehand.)

Configure Logstash to receive Filebeat input and print it to the terminal (using a configuration file).

In the config directory under the Logstash installation path, create a file named filebeat.conf with the following content:

input {
  # receive events from Beats
  beats {
    # port to listen on (5044, matching the filebeat.yml output above)
    port => 5044
  }
}

# print events to the terminal
output {
  stdout {
    codec => rubydebug
  }
}
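The optional filter stage mentioned earlier would slot between these two blocks. As a sketch only (the generic %{SYSLOGLINE} pattern is shown for illustration, not tailored to any particular log format):

```conf
filter {
  grok {
    # parse a syslog-style line into structured fields
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
```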

Check that the configuration file is valid with the command ../bin/logstash -f filebeat.conf --config.test_and_exit.

The command

  1. bin/logstash -f first-pipeline.conf --config.reload.automatic

uses the --config.reload.automatic option to enable automatic config reloading, so that you don't have to stop and restart Logstash every time you modify the configuration file.

  1. [dyh@centos74 bin]$ cd ../config/
  2. [dyh@centos74 config]$ ls
  3. jvm.options log4j2.properties logstash.yml pipelines.yml startup.options
  4. [dyh@centos74 config]$ vim filebeat.conf
  5. [dyh@centos74 config]$ ../bin/logstash -f filebeat.conf --config.test_and_exit
  6. Sending Logstash's logs to /home/dyh/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
  7. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
  8. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/netflow/configuration"}
  9. [--13T10::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  10. Configuration OK  # the configuration file is valid
  11. [--13T10::,][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Start Logstash with the command ../bin/logstash -f filebeat.conf. After a short wait, output like the following indicates that Logstash started successfully:

  1. [dyh@centos74 config]$ ../bin/logstash -f filebeat.conf --config.test_and_exit
  2. Sending Logstash's logs to /home/dyh/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
  3. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
  4. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/netflow/configuration"}
  5. [--13T10::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  6. Configuration OK
  7. [--13T10::,][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
  8. [dyh@centos74 config]$ ../bin/logstash -f filebeat.conf
  9. Sending Logstash's logs to /home/dyh/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
  10. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
  11. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/netflow/configuration"}
  12. [--13T10::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  13. [--13T10::,][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
  14. [--13T10::,][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
  15. [--13T10::,][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
  16. [--13T10::,][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
  17. [--13T10::,][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x592e1bdc run>"}
  18. [--13T10::,][INFO ][org.logstash.beats.Server] Starting server on port:
  19. [--13T10::,][INFO ][logstash.agent ] Pipelines running {:count=>, :pipelines=>["main"]}

Then start Filebeat with the command ./filebeat -e -c filebeat.yml -d "public":

  1. [dyh@ump-pc1 filebeat-6.2.-linux-x86_64]$ ./filebeat -e -c filebeat.yml -d "public"
  2. --13T10::30.793+ INFO instance/beat.go: Home path: [/home/dyh/ELK/filebeat-6.2.-linux-x86_64] Config path: [/home/dyh/ELK/filebeat-6.2.-linux-x86_64] Data path: [/home/dyh/ELK/filebeat-6.2.-linux-x86_64/data] Logs path: [/home/dyh/ELK/filebeat-6.2.-linux-x86_64/logs]
  3. --13T10::30.810+ INFO instance/beat.go: Beat UUID: 641ac54b-9a52--99fb-8235e186a816
  4. --13T10::30.811+ INFO instance/beat.go: Setup Beat: filebeat; Version: 6.2.
  5. --13T10::30.825+ INFO pipeline/module.go: Beat name: ump-pc1
  6. --13T10::30.826+ INFO instance/beat.go: filebeat start running.
  7. --13T10::30.826+ INFO registrar/registrar.go: No registry file found under: /home/dyh/ELK/filebeat-6.2.-linux-x86_64/data/registry. Creating a new registry file.
  8. --13T10::30.843+ INFO registrar/registrar.go: Loading registrar data from /home/dyh/ELK/filebeat-6.2.-linux-x86_64/data/registry
  9. --13T10::30.843+ INFO registrar/registrar.go: States Loaded from registrar:
  10. --13T10::30.843+ WARN beater/filebeat.go: Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
  11. --13T10::30.843+ INFO crawler/crawler.go: Loading Prospectors:
  12. --13T10::30.843+ INFO log/prospector.go: Configured paths: [/var/log/*.log]
  13. 2018-09-13T10:13:30.844+0800 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
  14. 2018-09-13T10:13:30.846+0800 ERROR log/prospector.go:437 Harvester could not be started on new file: /var/log/hawkey.log, Err: Error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/hawkey.log: open /var/log/hawkey.log: permission denied
  15. 2018-09-13T10:13:30.847+0800 ERROR log/prospector.go:437 Harvester could not be started on new file: /var/log/yum.log, Err: Error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/yum.log: open /var/log/yum.log: permission denied
  16. 2018-09-13T10:13:30.847+0800 ERROR log/prospector.go:437 Harvester could not be started on new file: /var/log/dnf.librepo.log, Err: Error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/dnf.librepo.log: open /var/log/dnf.librepo.log: permission denied
  17. 2018-09-13T10:13:30.847+0800 ERROR log/prospector.go:437 Harvester could not be started on new file: /var/log/dnf.log, Err: Error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/dnf.log: open /var/log/dnf.log: permission denied
  18. 2018-09-13T10:13:30.847+0800 ERROR log/prospector.go:437 Harvester could not be started on new file: /var/log/dnf.rpm.log, Err: Error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/dnf.rpm.log: open /var/log/dnf.rpm.log: permission denied
  19. 2018-09-13T10:13:30.869+0800 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 1
  20. 2018-09-13T10:13:30.869+0800 INFO log/harvester.go:216 Harvester started for file: /var/log/boot.log
  21. 2018-09-13T10:13:30.870+0800 INFO cfgfile/reload.go:127 Config reloader started
  22. 2018-09-13T10:13:30.870+0800 INFO log/harvester.go:216 Harvester started for file: /var/log/vmware-vmsvc.log
  23. 2018-09-13T10:13:30.871+0800 INFO cfgfile/reload.go:219 Loading of config files completed.
  24. ^C2018-09-13T10:13:40.693+0800 INFO beater/filebeat.go:323 Stopping filebeat
  25. 2018-09-13T10:13:40.694+0800 INFO crawler/crawler.go:109 Stopping Crawler
  26. 2018-09-13T10:13:40.694+0800 INFO crawler/crawler.go:119 Stopping 1 prospectors
  27. 2018-09-13T10:13:40.694+0800 INFO prospector/prospector.go:121 Prospector ticker stopped
  28. 2018-09-13T10:13:40.695+0800 INFO prospector/prospector.go:138 Stopping Prospector: 11204088409762598069
  29. 2018-09-13T10:13:40.695+0800 INFO cfgfile/reload.go:222 Dynamic config reloader stopped
  30. 2018-09-13T10:13:40.695+0800 INFO log/harvester.go:237 Reader was closed: /var/log/boot.log. Closing.
  31. 2018-09-13T10:13:40.695+0800 INFO log/harvester.go:237 Reader was closed: /var/log/vmware-vmsvc.log. Closing.
  32. 2018-09-13T10:13:40.695+0800 INFO crawler/crawler.go:135 Crawler stopped
  33. 2018-09-13T10:13:40.695+0800 INFO registrar/registrar.go:210 Stopping Registrar
  34. 2018-09-13T10:13:40.695+0800 INFO registrar/registrar.go:165 Ending Registrar
  35. 2018-09-13T10:13:40.698+0800 INFO instance/beat.go:308 filebeat stopped.
  36. 2018-09-13T10:13:40.706+0800 INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":90,"time":97},"total":{"ticks":120,"time":133,"value":0},"user":{"ticks":30,"time":36}},"info":{"ephemeral_id":"56baf0c6-9785-42cc-8cf1-f8cd9a5abdb4","uptime":{"ms":10054}},"memstats":{"gc_next":7432992,"memory_alloc":4374544,"memory_total":8705840,"rss":23642112}},"filebeat":{"events":{"added":1325,"done":1325},"harvester":{"closed":2,"open_files":0,"running":0,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":1323,"batches":1,"total":1323},"read":{"bytes":12},"type":"logstash","write":{"bytes":38033}},"pipeline":{"clients":0,"events":{"active":0,"filtered":2,"published":1323,"retry":1323,"total":1325},"queue":{"acked":1323}}},"registrar":{"states":{"current":2,"update":1325},"writes":5},"system":{"cpu":{"cores":2},"load":{"1":0.29,"15":0.02,"5":0.06,"norm":{"1":0.145,"15":0.01,"5":0.03}}}}}}
  37. 2018-09-13T10:13:40.706+0800 INFO [monitoring] log/log.go:133 Uptime: 10.058951193s
  38. 2018-09-13T10:13:40.706+0800 INFO [monitoring] log/log.go:110 Stopping metrics logging.

Then check the Logstash terminal output; a partial excerpt follows:

  1. {
  2. "tags" => [
  3. [] "beats_input_codec_plain_applied"
  4. ],
  5. "prospector" => {
  6. "type" => "log"
  7. },
  8. "@version" => "",
  9. "host" => "ump-pc1",
  10. "source" => "/var/log/vmware-vmsvc.log",
  11. "@timestamp" => --13T02::.957Z,
  12. "offset" => ,
  13. "message" => "[9月 13 09:32:32.917] [ message] [vmsvc] Cannot load message catalog for domain 'timeSync', language 'zh', catalog dir '/usr/share/open-vm-tools'.",
  14. "beat" => {
  15. "hostname" => "ump-pc1",
  16. "name" => "ump-pc1",
  17. "version" => "6.2.3"
  18. }
  19. }
  20. {
  21. "tags" => [
  22. [] "beats_input_codec_plain_applied"
  23. ],
  24. "prospector" => {
  25. "type" => "log"
  26. },
  27. "@version" => "",
  28. "host" => "ump-pc1",
  29. "source" => "/var/log/vmware-vmsvc.log",
  30. "@timestamp" => --13T02::.957Z,
  31. "offset" => ,
  32. "message" => "[9月 13 09:32:32.917] [ message] [vmtoolsd] Plugin 'timeSync' initialized.",
  33. "beat" => {
  34. "hostname" => "ump-pc1",
  35. "name" => "ump-pc1",
  36. "version" => "6.2.3"
  37. }
  38. }
  39. {
  40. "tags" => [
  41. [] "beats_input_codec_plain_applied"
  42. ],
  43. "prospector" => {
  44. "type" => "log"
  45. },
  46. "@version" => "",
  47. "host" => "ump-pc1",
  48. "source" => "/var/log/vmware-vmsvc.log",
  49. "@timestamp" => --13T02::.957Z,
  50. "offset" => ,
  51. "message" => "[9月 13 09:32:32.917] [ message] [vmsvc] Cannot load message catalog for domain 'vmbackup', language 'zh', catalog dir '/usr/share/open-vm-tools'.",
  52. "beat" => {
  53. "hostname" => "ump-pc1",
  54. "name" => "ump-pc1",
  55. "version" => "6.2.3"
  56. }
  57. }
  58. {
  59. "tags" => [
  60. [] "beats_input_codec_plain_applied"
  61. ],
  62. "prospector" => {
  63. "type" => "log"
  64. },
  65. "@version" => "",
  66. "host" => "ump-pc1",
  67. "source" => "/var/log/vmware-vmsvc.log",
  68. "@timestamp" => --13T02::.957Z,
  69. "offset" => ,
  70. "message" => "[9月 13 09:32:32.917] [ message] [vmtoolsd] Plugin 'vmbackup' initialized.",
  71. "beat" => {
  72. "hostname" => "ump-pc1",
  73. "name" => "ump-pc1",
  74. "version" => "6.2.3"
  75. }
  76. }
  77. {
  78. "tags" => [
  79. [] "beats_input_codec_plain_applied"
  80. ],
  81. "prospector" => {
  82. "type" => "log"
  83. },
  84. "@version" => "",
  85. "host" => "ump-pc1",
  86. "source" => "/var/log/vmware-vmsvc.log",
  87. "@timestamp" => --13T02::.957Z,
  88. "offset" => ,
  89. "message" => "[9月 13 09:32:32.920] [ message] [vix] VixTools_ProcessVixCommand: command 62",
  90. "beat" => {
  91. "hostname" => "ump-pc1",
  92. "name" => "ump-pc1",
  93. "version" => "6.2.3"
  94. }
  95. }
  96. {
  97. "tags" => [
  98. [] "beats_input_codec_plain_applied"
  99. ],
  100. "prospector" => {
  101. "type" => "log"
  102. },
  103. "@version" => "",
  104. "host" => "ump-pc1",
  105. "source" => "/var/log/vmware-vmsvc.log",
  106. "@timestamp" => --13T02::.957Z,
  107. "offset" => ,
  108. "message" => "[9月 13 09:32:33.154] [ message] [vix] VixTools_ProcessVixCommand: command 62",
  109. "beat" => {
  110. "hostname" => "ump-pc1",
  111. "name" => "ump-pc1",
  112. "version" => "6.2.3"
  113. }
  114. }
  115. {
  116. "tags" => [
  117. [] "beats_input_codec_plain_applied"
  118. ],
  119. "prospector" => {
  120. "type" => "log"
  121. },
  122. "@version" => "",
  123. "host" => "ump-pc1",
  124. "source" => "/var/log/vmware-vmsvc.log",
  125. "@timestamp" => --13T02::.957Z,
  126. "offset" => ,
  127. "message" => "[9月 13 09:32:33.154] [ message] [vix] ToolsDaemonTcloReceiveVixCommand: command 62, additionalError = 17",
  128. "beat" => {
  129. "hostname" => "ump-pc1",
  130. "name" => "ump-pc1",
  131. "version" => "6.2.3"
  132. }
  133. }
  134. {
  135. "tags" => [
  136. [] "beats_input_codec_plain_applied"
  137. ],
  138. "prospector" => {
  139. "type" => "log"
  140. },
  141. "@version" => "",
  142. "host" => "ump-pc1",
  143. "source" => "/var/log/vmware-vmsvc.log",
  144. "@timestamp" => --13T02::.957Z,
  145. "offset" => ,
  146. "message" => "[9月 13 09:32:33.155] [ message] [powerops] Executing script: '/etc/vmware-tools/poweron-vm-default'",
  147. "beat" => {
  148. "hostname" => "ump-pc1",
  149. "name" => "ump-pc1",
  150. "version" => "6.2.3"
  151. }
  152. }
  153. {
  154. "tags" => [
  155. [] "beats_input_codec_plain_applied"
  156. ],
  157. "prospector" => {
  158. "type" => "log"
  159. },
  160. "@version" => "",
  161. "host" => "ump-pc1",
  162. "source" => "/var/log/vmware-vmsvc.log",
  163. "@timestamp" => --13T02::.957Z,
  164. "offset" => ,
  165. "message" => "[9月 13 09:32:35.164] [ message] [powerops] Script exit code: 0, success = 1",
  166. "beat" => {
  167. "hostname" => "ump-pc1",
  168. "name" => "ump-pc1",
  169. "version" => "6.2.3"
  170. }
  171. }

This shows that the log entries collected by Filebeat were successfully delivered to Logstash.

Processing Logs: Collecting Log Files with Logstash Itself

Create a file named logfile.conf in the config directory under the Logstash installation, with the following content:

input {
  file {
    path => "/home/dyh/logs/log20180823/*.log"
    type => "ws-log"
    start_position => "beginning"
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
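One caveat when testing the file input repeatedly: Logstash records how far it has read each file in a "sincedb" file, so start_position => "beginning" only applies to files it has never seen. A sketch of forcing a full re-read during testing (the /dev/null trick is a common workaround, not part of the original setup):

```conf
input {
  file {
    path => "/home/dyh/logs/log20180823/*.log"
    type => "ws-log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # do not remember read positions (testing only)
  }
}
```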

Check that the configuration file is valid:

  1. [dyh@centos74 config]$ ../bin/logstash -f logfile.conf --config.test_and_exit
  2. Sending Logstash's logs to /home/dyh/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
  3. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
  4. [--13T10::,][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/home/dyh/ELK/logstash-6.2.3/modules/netflow/configuration"}
  5. [--13T10::,][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
  6. Configuration OK
  7. [--13T10::,][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Start Logstash with the command ../bin/logstash -f logfile.conf.

An excerpt of the log output:

  1. {
  2. "@version" => "",
  3. "type" => "ws-log",
  4. "@timestamp" => --13T02::.368Z,
  5. "host" => "localhost",
  6. "message" => "08-23 07:55:53.207313 [-322978960] [\\xD0½\\xA8] [SUCESS] [/data/appdata/cbankapp4/iboc_corporbank.ear/corporbank.war/WEB-INF/conf/interface/ebus_SelectFund_ebus_WithdrawalFundTrading.xml] [0]",
  7. "path" => "/home/dyh/logs/log20180823/sa-evtlist_1.log"
  8. }
  9. {
  10. "@version" => "",
  11. "type" => "ws-log",
  12. "@timestamp" => --13T02::.368Z,
  13. "host" => "localhost",
  14. "message" => "08-23 07:55:53.212594 [-322978960] [ɾ\\xB3\\xFD] [SUCESS] [/data/appdata/cbankapp4/iboc_corporbank.ear/corporbank.war/WEB-INF/conf/interface/ebus_PaymentManage_ebus_authOrderRefundDetailQuery_batch.xml] [0]",
  15. "path" => "/home/dyh/logs/log20180823/sa-evtlist_1.log"
  16. }
  17. {
  18. "@version" => "",
  19. "type" => "ws-log",
  20. "@timestamp" => --13T02::.369Z,
  21. "host" => "localhost",
  22. "message" => "08-23 07:55:53.212758 [-322978960] [\\xD0½\\xA8] [SUCESS] [/data/appdata/cbankapp4/iboc_corporbank.ear/corporbank.war/WEB-INF/conf/interface/ebus_PaymentManage_ebus_authOrderRefundDetailQuery_batch.xml] [0]",
  23. "path" => "/home/dyh/logs/log20180823/sa-evtlist_1.log"
  24. }
  25. {
  26. "@version" => "",
  27. "type" => "ws-log",
  28. "@timestamp" => --13T02::.369Z,
  29. "host" => "localhost",
  30. "message" => "08-23 07:55:53.220285 [-322978960] [ɾ\\xB3\\xFD] [SUCESS] [/data/appdata/cbankapp4/iboc_corporbank.ear/corporbank.war/WEB-INF/conf/interface/ebus_QLFeePayment_ebus_QLFeePaymentSignQueryOrderList.xml] [0]",
  31. "path" => "/home/dyh/logs/log20180823/sa-evtlist_1.log"
  32. }
  33. {
  34. "@version" => "",
  35. "type" => "ws-log",
  36. "@timestamp" => --13T02::.369Z,
  37. "host" => "localhost",
  38. "message" => "08-23 07:55:53.220445 [-322978960] [\\xD0½\\xA8] [SUCESS] [/data/appdata/cbankapp4/iboc_corporbank.ear/corporbank.war/WEB-INF/conf/interface/ebus_QLFeePayment_ebus_QLFeePaymentSignQueryOrderList.xml] [0]",
  39. "path" => "/home/dyh/logs/log20180823/sa-evtlist_1.log"
  40. }

Configuring Logstash Output to Elasticsearch

Modify logfile.conf as follows:

input {
  file {
    path => "/home/dyh/logs/log20180823/*.log"
    type => "ws-log"
    start_position => "beginning"
  }
}

#output {
#  stdout {
#    codec => rubydebug
#  }
#}

output {
  elasticsearch {
    hosts => ["192.168.51.18:9200"]
    index => "system-ws-%{+YYYY.MM.dd}"
  }
}
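To confirm that events are arriving, you can ask Elasticsearch for today's index. The snippet below builds the index name that the %{+YYYY.MM.dd} pattern would produce (modulo timezone: Logstash formats @timestamp in UTC, while date uses local time); the curl line is left commented out and assumes the example host 192.168.51.18 from the config above:

```shell
#!/bin/sh
# Build the index name matching the "system-ws-%{+YYYY.MM.dd}" pattern.
index="system-ws-$(date +%Y.%m.%d)"
echo "$index"

# Then list it via the _cat API (uncomment and point at your cluster):
# curl -s "http://192.168.51.18:9200/_cat/indices/${index}?v"
```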

This configures Logstash to scan every file ending in .log under /home/dyh/logs/log20180823/ and send the results to Elasticsearch (192.168.51.18:9200), stored under the index pattern system-ws-%{+YYYY.MM.dd}. Start Elasticsearch first, then start Logstash:

  1. ../bin/logstash -f logfile.conf

Once Logstash is up, output like the following in the Elasticsearch terminal shows that Logstash is shipping its logs to Elasticsearch:

  1. [--13T10::,][INFO ][o.e.c.m.MetaDataCreateIndexService] [lBKeFc1] [system-ws-2018.09.] creating index, cause [auto(bulk api)], templates [], shards []/[], mappings []
  2. [--13T10::,][INFO ][o.e.c.m.MetaDataMappingService] [lBKeFc1] [system-ws-2018.09./g1fY5lrxQPOsb8UCvEPnLg] create_mapping [doc]

The logs shipped to Elasticsearch can then be viewed in Kibana.

Reference: http://www.cnblogs.com/cocowool/p/7326527.html
