Preface

This article walks through setting up the ELK stack (Elasticsearch, Logstash, Kibana) with Docker across two machines.

Main Content

Environment

  • CentOS 7.7

  • Docker version 19.03.8

  • docker-compose version 1.23.2

System Settings

Edit /etc/security/limits.conf with vim and append the following at the end:

  * soft nofile 65536
  * hard nofile 65536
  * soft nproc 4096
  * hard nproc 4096

Edit /etc/sysctl.conf with vim and append the following at the end:

  vm.max_map_count = 655360

Run the sysctl -p command to make the settings take effect.
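As a quick sanity check, you can read the live kernel value back (a sketch; the /proc path is standard on Linux). After sysctl -p with the line above it should print 655360; Elasticsearch refuses to start in production mode if the value is below 262144:

```shell
# Read the running kernel's max_map_count; after `sysctl -p` with the
# setting above, this should print 655360 (Elasticsearch needs >= 262144).
cat /proc/sys/vm/max_map_count
```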

Setting Up Elasticsearch

Note: if you install Elasticsearch without Docker, it cannot be started as the root user.

Since I'm working with virtual machines and can only run two, there is just one master node and one data node; in production you should have at least 3 master-eligible nodes to avoid split-brain.

Note: if the firewall is enabled, run the following commands to open ports 9200 and 9300.

  firewall-cmd --zone=public --add-port=9200/tcp --permanent
  firewall-cmd --zone=public --add-port=9300/tcp --permanent
  firewall-cmd --reload

Master Node

First, create the master node's configuration file elasticsearch.yml:

  # ======================== Elasticsearch Configuration =========================
  #
  # NOTE: Elasticsearch comes with reasonable defaults for most settings.
  # Before you set out to tweak and tune the configuration, make sure you
  # understand what are you trying to accomplish and the consequences.
  #
  # The primary way of configuring a node is via this file. This template lists
  # the most important settings you may want to configure for a production cluster.
  #
  # Please consult the documentation for further information on configuration options:
  # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
  #
  # ---------------------------------- Cluster -----------------------------------
  #
  # Use a descriptive name for your cluster:
  cluster.name: es-cluster
  #
  # ------------------------------------ Node ------------------------------------
  #
  # Use a descriptive name for the node:
  node.name: es-master
  node.master: true
  node.data: false
  #node.ingest: false
  #node.ml: false
  #xpack.ml.enabled: true
  #cluster.remote.connect: false
  #
  # Add custom attributes to the node:
  #
  #node.attr.rack: r1
  #
  # ----------------------------------- Paths ------------------------------------
  #
  # Path to directory where to store the data (separate multiple locations by comma):
  #
  #path.data: /path/to/data
  #
  # Path to log files:
  #
  #path.logs: /path/to/logs
  #
  # ----------------------------------- Memory -----------------------------------
  #
  # Lock the memory on startup:
  #
  #bootstrap.memory_lock: true
  #
  # Make sure that the heap size is set to about half the memory available
  # on the system and that the owner of the process is allowed to use this
  # limit.
  #
  # Elasticsearch performs poorly when the system is swapping the memory.
  #
  # ---------------------------------- Network -----------------------------------
  #
  # Set the bind address to a specific IP (IPv4 or IPv6):
  network.host: 0.0.0.0
  network.publish_host: 192.168.239.133
  #
  # Set a custom port for HTTP:
  http.port: 9200
  transport.tcp.port: 9300
  #
  # For more information, consult the network module documentation.
  #
  # --------------------------------- Discovery ----------------------------------
  #
  # Pass an initial list of hosts to perform discovery when this node is started:
  # The default list of hosts is ["127.0.0.1", "[::1]"]
  #
  discovery.seed_hosts:
    - 192.168.239.133
    - 192.168.239.131
  #
  # Bootstrap the cluster using an initial set of master-eligible nodes:
  cluster.initial_master_nodes:
    - es-master
  #  - es-node2
  #  - es-node3
  #
  # For more information, consult the discovery and cluster formation module documentation.
  #
  # ---------------------------------- Gateway -----------------------------------
  #
  # Block initial recovery after a full cluster restart until N nodes are started:
  #
  #gateway.recover_after_nodes: 2
  #
  # For more information, consult the gateway module documentation.
  #
  # ---------------------------------- Various -----------------------------------
  #
  # Require explicit names when deleting indices:
  #
  #action.destructive_requires_name: true
  http.cors.enabled: true
  http.cors.allow-origin: "*"

Then write the master node's docker-compose.yml:

  version: "3"
  services:
    es-master:
      container_name: es-master
      hostname: es-master
      image: leisurexi/elasticsearch:7.1.0
      privileged: true
      ports:
        - 9200:9200
        - 9300:9300
      volumes:
        - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
        - ./data:/usr/share/elasticsearch/data
        - ./logs:/usr/share/elasticsearch/logs
      environment:
        - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      ulimits:
        memlock:
          soft: -1
          hard: -1

Note: this image is from my own Docker Hub account; you can use the official one instead. (My image is identical to the official one; I just re-tagged the official image and pushed it to my own Docker Hub because downloading it each time was painful.)

Next, start the container with the following command:

  docker-compose up -d

If the container fails to start with a permission error on the mounted directories, run chmod 777 logs and chmod 777 data to fix the directory permissions; it should then start normally.
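You can also avoid the permission problem up front. A minimal sketch, assuming you are in the directory containing docker-compose.yml (the official image runs Elasticsearch as uid 1000, so the bind-mounted directories must be writable by that user):

```shell
# Pre-create the bind-mounted directories so the container user can write
# to them, then bring the stack up. chmod 777 is the blunt fix used in this
# article; chown -R 1000:1000 would be the narrower alternative.
mkdir -p data logs
chmod 777 data logs
ls -ld data logs
# docker-compose up -d   # then start (or restart) the container
```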

Data Node

First, create the data node's configuration file elasticsearch.yml:

  # ======================== Elasticsearch Configuration =========================
  #
  # NOTE: Elasticsearch comes with reasonable defaults for most settings.
  # Before you set out to tweak and tune the configuration, make sure you
  # understand what are you trying to accomplish and the consequences.
  #
  # The primary way of configuring a node is via this file. This template lists
  # the most important settings you may want to configure for a production cluster.
  #
  # Please consult the documentation for further information on configuration options:
  # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
  #
  # ---------------------------------- Cluster -----------------------------------
  #
  # Use a descriptive name for your cluster:
  cluster.name: es-cluster
  #
  # ------------------------------------ Node ------------------------------------
  #
  # Use a descriptive name for the node:
  node.name: es-data
  node.master: true
  node.data: true
  #node.ingest: false
  #node.ml: false
  #xpack.ml.enabled: true
  #cluster.remote.connect: false
  #
  # Add custom attributes to the node:
  #
  #node.attr.rack: r1
  #
  # ----------------------------------- Paths ------------------------------------
  #
  # Path to directory where to store the data (separate multiple locations by comma):
  #
  #path.data: /path/to/data
  #
  # Path to log files:
  #
  #path.logs: /path/to/logs
  #
  # ----------------------------------- Memory -----------------------------------
  #
  # Lock the memory on startup:
  #
  #bootstrap.memory_lock: true
  #
  # Make sure that the heap size is set to about half the memory available
  # on the system and that the owner of the process is allowed to use this
  # limit.
  #
  # Elasticsearch performs poorly when the system is swapping the memory.
  #
  # ---------------------------------- Network -----------------------------------
  #
  # Set the bind address to a specific IP (IPv4 or IPv6):
  network.host: 0.0.0.0
  network.publish_host: 192.168.239.131
  #
  # Set a custom port for HTTP:
  http.port: 9200
  transport.tcp.port: 9300
  #
  # For more information, consult the network module documentation.
  #
  # --------------------------------- Discovery ----------------------------------
  #
  # Pass an initial list of hosts to perform discovery when this node is started:
  # The default list of hosts is ["127.0.0.1", "[::1]"]
  #
  discovery.seed_hosts:
    - 192.168.239.133
    - 192.168.239.131
  #
  # Bootstrap the cluster using an initial set of master-eligible nodes:
  cluster.initial_master_nodes:
    - es-master
  #  - es-node2
  #  - es-node3
  #
  # For more information, consult the discovery and cluster formation module documentation.
  #
  # ---------------------------------- Gateway -----------------------------------
  #
  # Block initial recovery after a full cluster restart until N nodes are started:
  #
  #gateway.recover_after_nodes: 2
  #
  # For more information, consult the gateway module documentation.
  #
  # ---------------------------------- Various -----------------------------------
  #
  # Require explicit names when deleting indices:
  #
  #action.destructive_requires_name: true
  http.cors.enabled: true
  http.cors.allow-origin: "*"

Then write the data node's docker-compose.yml:

  version: "3"
  services:
    es-data:
      container_name: es-data
      hostname: es-data
      image: leisurexi/elasticsearch:7.1.0
      privileged: true
      ports:
        - 9200:9200
        - 9300:9300
      volumes:
        - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
        - ./data:/usr/share/elasticsearch/data
        - ./logs:/usr/share/elasticsearch/logs
      environment:
        - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      ulimits:
        memlock:
          soft: -1
          hard: -1

Then start it the same way as the master node above, and visit the master node's http://192.168.239.133:9200/_cat/nodes API; if both nodes appear in the response, the Elasticsearch cluster has been set up successfully.

Setting Up Kibana

Because the master node is responsible for lightweight cluster-wide operations, such as creating or deleting indices, tracking which nodes are part of the cluster, and deciding which shards are allocated to which nodes, Kibana is placed on the same machine as the master node.

Note: if the firewall is enabled, run the following command to open port 5601.

  firewall-cmd --zone=public --add-port=5601/tcp --permanent
  firewall-cmd --reload

First, the Kibana configuration file kibana.yml:

  # Kibana is served by a back end server. This setting specifies the port to use.
  server.port: 5601
  # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
  # The default is 'localhost', which usually means remote machines will not be able to connect.
  # To allow connections from remote users, set this parameter to a non-loopback address.
  server.host: "0.0.0.0"
  # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
  # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
  # from requests it receives, and to prevent a deprecation warning at startup.
  # This setting cannot end in a slash.
  #server.basePath: ""
  # Specifies whether Kibana should rewrite requests that are prefixed with
  # `server.basePath` or require that they are rewritten by your reverse proxy.
  # This setting was effectively always `false` before Kibana 6.3 and will
  # default to `true` starting in Kibana 7.0.
  #server.rewriteBasePath: false
  # The maximum payload size in bytes for incoming server requests.
  #server.maxPayloadBytes: 1048576
  # The Kibana server's name. This is used for display purposes.
  #server.name: "your-hostname"
  # The URLs of the Elasticsearch instances to use for all your queries.
  elasticsearch.hosts: ["http://192.168.239.133:9200", "http://192.168.239.131:9200"]
  # When this setting's value is true Kibana uses the hostname specified in the server.host
  # setting. When the value of this setting is false, Kibana uses the hostname of the host
  # that connects to this Kibana instance.
  #elasticsearch.preserveHost: true
  # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
  # dashboards. Kibana creates a new index if the index doesn't already exist.
  #kibana.index: ".kibana"
  # The default application to load.
  #kibana.defaultAppId: "home"
  # If your Elasticsearch is protected with basic authentication, these settings provide
  # the username and password that the Kibana server uses to perform maintenance on the Kibana
  # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
  # is proxied through the Kibana server.
  #elasticsearch.username: "user"
  #elasticsearch.password: "pass"
  # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
  # These settings enable SSL for outgoing requests from the Kibana server to the browser.
  #server.ssl.enabled: false
  #server.ssl.certificate: /path/to/your/server.crt
  #server.ssl.key: /path/to/your/server.key
  # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
  # These files validate that your Elasticsearch backend uses the same key files.
  #elasticsearch.ssl.certificate: /path/to/your/client.crt
  #elasticsearch.ssl.key: /path/to/your/client.key
  # Optional setting that enables you to specify a path to the PEM file for the certificate
  # authority for your Elasticsearch instance.
  #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
  # To disregard the validity of SSL certificates, change this setting's value to 'none'.
  #elasticsearch.ssl.verificationMode: full
  # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
  # the elasticsearch.requestTimeout setting.
  #elasticsearch.pingTimeout: 1500
  # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
  # must be a positive integer.
  #elasticsearch.requestTimeout: 30000
  # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
  # headers, set this value to [] (an empty list).
  #elasticsearch.requestHeadersWhitelist: [ authorization ]
  # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
  # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
  #elasticsearch.customHeaders: {}
  # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
  #elasticsearch.shardTimeout: 30000
  # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
  #elasticsearch.startupTimeout: 5000
  # Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
  #elasticsearch.logQueries: false
  # Specifies the path where Kibana creates the process ID file.
  #pid.file: /var/run/kibana.pid
  # Enables you specify a file where Kibana stores log output.
  #logging.dest: stdout
  # Set the value of this setting to true to suppress all logging output.
  #logging.silent: false
  # Set the value of this setting to true to suppress all logging output other than error messages.
  #logging.quiet: false
  # Set the value of this setting to true to log all events, including system usage information
  # and all requests.
  #logging.verbose: false
  # Set the interval in milliseconds to sample system and process performance
  # metrics. Minimum is 100ms. Defaults to 5000.
  #ops.interval: 5000
  # Specifies locale to be used for all localizable strings, dates and number formats.
  i18n.locale: "zh-CN"

Then the docker-compose.yml file:

  version: "3"
  services:
    kibana:
      container_name: kibana
      hostname: kibana
      image: leisurexi/kibana:7.1.0
      ports:
        - 5601:5601
      volumes:
        - ./kibana.yml:/usr/share/kibana/config/kibana.yml

Note: this image is also from my own Docker Hub account; you can switch to the official one.

Then start it the same way as the Elasticsearch nodes.

Visit port 5601 on the Kibana node to open the UI, then run GET _cluster/health in the Dev Tools console to check the ES cluster's health and verify that Kibana is working properly.

If the status comes back healthy, Kibana has been set up successfully.
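For reference, GET _cluster/health returns a small JSON document, and the field to check is status. The values below are illustrative, not captured from a real cluster:

```shell
# Illustrative shape of a _cluster/health response. "status" is what matters:
# green  = all primary and replica shards are allocated
# yellow = all primaries allocated, some replicas are not
# red    = at least one primary shard is unassigned
cat <<'EOF'
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1
}
EOF
```

The same document can be fetched without Kibana via curl against either node's port 9200.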

Setting Up Logstash

Logstash is set up on the machine that hosts the ES data node.

Note: if the firewall is enabled, run the following commands to open ports 4560 and 5044.

  firewall-cmd --zone=public --add-port=4560/tcp --permanent
  firewall-cmd --zone=public --add-port=5044/tcp --permanent
  firewall-cmd --reload

First, the global Logstash configuration file logstash.yml:

  # Settings file in YAML
  #
  # Settings can be specified either in hierarchical form, e.g.:
  #
  # pipeline:
  #   batch:
  #     size: 125
  #     delay: 5
  #
  # Or as flat keys:
  #
  #   pipeline.batch.size: 125
  #   pipeline.batch.delay: 5
  #
  # ------------ Node identity ------------
  #
  # Use a descriptive name for the node:
  #
  # node.name: test
  #
  # If omitted the node name will default to the machine's host name
  #
  # ------------ Data path ------------------
  #
  # Which directory should be used by logstash and its plugins
  # for any persistent needs. Defaults to LOGSTASH_HOME/data
  #
  # path.data:
  #
  # ------------ Pipeline Settings --------------
  #
  # The ID of the pipeline.
  #
  # pipeline.id: main
  #
  # Set the number of workers that will, in parallel, execute the filters+outputs
  # stage of the pipeline.
  #
  # This defaults to the number of the host's CPU cores.
  #
  # pipeline.workers: 2
  #
  # How many events to retrieve from inputs before sending to filters+workers
  #
  # pipeline.batch.size: 125
  #
  # How long to wait in milliseconds while polling for the next event
  # before dispatching an undersized batch to filters+outputs
  #
  # pipeline.batch.delay: 50
  #
  # Force Logstash to exit during shutdown even if there are still inflight
  # events in memory. By default, logstash will refuse to quit until all
  # received events have been pushed to the outputs.
  #
  # WARNING: enabling this can lead to data loss during shutdown
  #
  # pipeline.unsafe_shutdown: false
  #
  # ------------ Pipeline Configuration Settings --------------
  #
  # Where to fetch the pipeline configuration for the main pipeline
  #
  # path.config:
  #
  # Pipeline configuration string for the main pipeline
  #
  # config.string:
  #
  # At startup, test if the configuration is valid and exit (dry run)
  #
  # config.test_and_exit: false
  #
  # Periodically check if the configuration has changed and reload the pipeline
  # This can also be triggered manually through the SIGHUP signal
  #
  # config.reload.automatic: false
  #
  # How often to check if the pipeline configuration has changed (in seconds)
  #
  # config.reload.interval: 3s
  #
  # Show fully compiled configuration as debug log message
  # NOTE: --log.level must be 'debug'
  #
  # config.debug: false
  #
  # When enabled, process escaped characters such as \n and \" in strings in the
  # pipeline configuration files.
  #
  # config.support_escapes: false
  #
  # ------------ Module Settings ---------------
  # Define modules here. Modules definitions must be defined as an array.
  # The simple way to see this is to prepend each `name` with a `-`, and keep
  # all associated variables under the `name` they are associated with, and
  # above the next, like this:
  #
  # modules:
  #   - name: MODULE_NAME
  #     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
  #     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
  #     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
  #     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
  #
  # Module variable names must be in the format of
  #
  # var.PLUGIN_TYPE.PLUGIN_NAME.KEY
  #
  # modules:
  #
  # ------------ Cloud Settings ---------------
  # Define Elastic Cloud settings here.
  # Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
  # and it may have an label prefix e.g. staging:dXMtZ...
  # This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
  # cloud.id: <identifier>
  #
  # Format of cloud.auth is: <user>:<pass>
  # This is optional
  # If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
  # If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
  # cloud.auth: elastic:<password>
  #
  # ------------ Queuing Settings --------------
  #
  # Internal queuing model, "memory" for legacy in-memory based queuing and
  # "persisted" for disk-based acked queueing. Defaults is memory
  #
  # queue.type: memory
  #
  # If using queue.type: persisted, the directory path where the data files will be stored.
  # Default is path.data/queue
  #
  # path.queue:
  #
  # If using queue.type: persisted, the page data files size. The queue data consists of
  # append-only data files separated into pages. Default is 64mb
  #
  # queue.page_capacity: 64mb
  #
  # If using queue.type: persisted, the maximum number of unread events in the queue.
  # Default is 0 (unlimited)
  #
  # queue.max_events: 0
  #
  # If using queue.type: persisted, the total capacity of the queue in number of bytes.
  # If you would like more unacked events to be buffered in Logstash, you can increase the
  # capacity using this setting. Please make sure your disk drive has capacity greater than
  # the size specified here. If both max_bytes and max_events are specified, Logstash will pick
  # whichever criteria is reached first
  # Default is 1024mb or 1gb
  #
  # queue.max_bytes: 1024mb
  #
  # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
  # Default is 1024, 0 for unlimited
  #
  # queue.checkpoint.acks: 1024
  #
  # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
  # Default is 1024, 0 for unlimited
  #
  # queue.checkpoint.writes: 1024
  #
  # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
  # Default is 1000, 0 for no periodic checkpoint.
  #
  # queue.checkpoint.interval: 1000
  #
  # ------------ Dead-Letter Queue Settings --------------
  # Flag to turn on dead-letter queue.
  #
  # dead_letter_queue.enable: false
  # If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
  # will be dropped if they would increase the size of the dead letter queue beyond this setting.
  # Default is 1024mb
  # dead_letter_queue.max_bytes: 1024mb
  # If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
  # Default is path.data/dead_letter_queue
  #
  # path.dead_letter_queue:
  #
  # ------------ Metrics Settings --------------
  #
  # Bind address for the metrics REST endpoint
  #
  # http.host: "127.0.0.1"
  #
  # Bind port for the metrics REST endpoint, this option also accept a range
  # (9600-9700) and logstash will pick up the first available ports.
  #
  # http.port: 9600-9700
  #
  # ------------ Debugging Settings --------------
  #
  # Options for log.level:
  #   * fatal
  #   * error
  #   * warn
  #   * info (default)
  #   * debug
  #   * trace
  #
  # log.level: info
  # path.logs:
  #
  # ------------ Other Settings --------------
  #
  # Where to find custom plugins
  # path.plugins: []
  #
  # ------------ X-Pack Settings (not applicable for OSS build)--------------
  #
  # X-Pack Monitoring
  # https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
  xpack.monitoring.enabled: true
  #xpack.monitoring.elasticsearch.username: logstash_system
  #xpack.monitoring.elasticsearch.password: password
  xpack.monitoring.elasticsearch.hosts: ["http://192.168.239.133:9200", "http://192.168.239.131:9200"]
  #xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
  #xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
  #xpack.monitoring.elasticsearch.ssl.truststore.password: password
  #xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
  #xpack.monitoring.elasticsearch.ssl.keystore.password: password
  #xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
  #xpack.monitoring.elasticsearch.sniffing: false
  #xpack.monitoring.collection.interval: 10s
  #xpack.monitoring.collection.pipeline.details.enabled: true
  #
  # X-Pack Management
  # https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
  xpack.management.enabled: false
  #xpack.management.pipeline.id: ["main", "apache_logs"]
  #xpack.management.elasticsearch.username: logstash_admin_user
  #xpack.management.elasticsearch.password: password
  #xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
  #xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
  #xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
  #xpack.management.elasticsearch.ssl.truststore.password: password
  #xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
  #xpack.management.elasticsearch.ssl.keystore.password: password
  #xpack.management.elasticsearch.ssl.verification_mode: certificate
  #xpack.management.elasticsearch.sniffing: false
  #xpack.management.logstash.poll_interval: 5s

Then the custom pipeline configuration file logstash.conf:

  input {
    tcp {
      mode => "server"
      host => "0.0.0.0"
      port => 4560
      codec => json_lines
    }
  }
  output {
    elasticsearch {
      hosts => "http://192.168.239.133:9200"
      index => "log-%{+YYYY.MM.dd}"
    }
  }

This pipeline listens on port 4560 and writes every event it receives to ES; the index name is the log- prefix plus the current date, so a new index is created each day.
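To see what the tcp input with codec json_lines actually consumes, you can hand-craft an event: one JSON object per line, newline-terminated. A sketch (the field names here are hypothetical; the LogstashEncoder used later in this article produces a richer document):

```shell
# One newline-delimited JSON object = one Logstash event for the json_lines codec.
EVENT='{"message":"hello from the app","level":"INFO"}'
printf '%s\n' "$EVENT"
# With Logstash up, you could pipe it straight into the tcp input:
# printf '%s\n' "$EVENT" | nc 192.168.239.131 4560
```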

Then the docker-compose.yml:

  version: "3"
  services:
    logstash:
      container_name: logstash
      hostname: logstash
      image: leisurexi/logstash:7.1.0
      command: logstash -f ./config/logstash.conf
      volumes:
        - ./logstash.conf:/usr/share/logstash/config/logstash.conf
        - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      environment:
        - elasticsearch.hosts=http://192.168.239.133:9200
      ports:
        - 4560:4560
        - 5044:5044

Finally, start Logstash the same way as the ES nodes above.

Deleting Old Indices Periodically

If the stack runs for a long time, the disk can fill up and ES will no longer be able to accept writes, so the less important index data has to be deleted on a schedule; as shown below, this can be done with a cron script.

First, write a script named es-index-clear.sh that deletes indices older than 15 days:

  #!/bin/bash
  # es-index-clear
  # keep only the last 15 days of log indices
  LAST_DATA=$(date -d "-15 days" "+%Y.%m.%d")
  # delete the indices matching that date suffix
  curl -XDELETE 'http://192.168.239.133:9200/*-'${LAST_DATA}'*'
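Before wiring the script into cron, it is worth dry-running the date arithmetic; this sketch prints the pattern that would be deleted (the curl itself is commented out here):

```shell
# Compute the cutoff exactly like the script does and show the target pattern.
LAST_DATA=$(date -d "-15 days" "+%Y.%m.%d")
echo "would delete: http://192.168.239.133:9200/*-${LAST_DATA}*"
# curl -XDELETE "http://192.168.239.133:9200/*-${LAST_DATA}*"
```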

Then register a scheduled job with crontab: run crontab -e and add the following line:

  0 1 * * * /opt/elk/es-index-clear.sh

This job runs at 1 a.m. every day; just replace the path with the absolute path to your own script.

You can run tail -f /var/log/cron to inspect the cron job's log.

Testing

Create a new Spring Boot application and add the logstash dependency:

  <dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>5.3</version>
  </dependency>

Then create a logback.xml under the resources directory with the following content:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE configuration>
  <configuration>
      <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
      <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
      <!-- application name -->
      <property name="APP_NAME" value="log"/>
      <!-- appender that ships logs to logstash -->
      <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
          <!-- reachable logstash log-collection host and port -->
          <destination>192.168.239.131:4560</destination>
          <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
      </appender>
      <root level="INFO">
          <appender-ref ref="CONSOLE"/>
          <appender-ref ref="LOGSTASH"/>
      </root>
  </configuration>

Next, write a scheduled task; the Java code is as follows:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  import org.springframework.context.annotation.Configuration;
  import org.springframework.scheduling.annotation.EnableScheduling;
  import org.springframework.scheduling.annotation.Scheduled;

  @EnableScheduling
  @Configuration
  public class LogScheduler {

      private static final Logger log = LoggerFactory.getLogger(LogScheduler.class);

      // fires every 30 seconds
      @Scheduled(cron = "0/30 * * * * ?")
      public void doTiming() {
          log.info("ELK test log");
      }
  }

This scheduled task logs one line every 30 seconds.

Finally, open the Kibana UI and you will see the logs arriving!

Summary

This was only a basic ELK setup; to use it in production, many changes are still needed, for example enabling security authentication on ES, not exposing the ports directly to the public internet, and creating indices from templates.

The code and ELK configuration files from this article are available at https://github.com/leisurexi/elk. For a better reading experience, visit my new blog: https://leisurexi.github.io/

Note: in the docker-compose.yml on GitHub I combined everything into one file; in this article it is split up to make things clearer.
