logstash.yml

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
#   batch:
#     size: 125
#     delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted, the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
# Data location
path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
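#
# For illustration only (the values below are hypothetical, not a
# recommendation): on a dedicated 8-core host these three settings are
# usually tuned together, e.g.:
#
# pipeline.workers: 8
# pipeline.batch.size: 250
# pipeline.batch.delay: 50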
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
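#
# A minimal sketch of a pipeline file that path.config could point to. The
# file path /etc/logstash/conf.d/beats.conf and the Beats port 5044 are
# assumptions; the Elasticsearch address reuses the one shown in the
# monitoring settings further down:
#
#   input  { beats { port => 5044 } }
#   filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }
#   output { elasticsearch { hosts => ["http://192.168.10.196:9200"] } }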
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
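#
# The same dry run can be done from the command line (paths assume a package
# install with pipeline files under /etc/logstash/conf.d):
#
#   bin/logstash -f /etc/logstash/conf.d --config.test_and_exit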
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
#config.reload.automatic: true
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
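#
# A manual reload can be triggered by sending SIGHUP to the Logstash process,
# for example (assuming a single Logstash process on the host; pgrep is just
# one way to find its PID):
#
#   kill -SIGHUP "$(pgrep -f logstash)"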
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here. Module definitions must be specified as an array.
# Prepend each module `name` with a `-`, and keep all associated variables
# grouped under the `name` they belong to, above the next module, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
#modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have a label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the size of the page data files. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criterion is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds at which a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
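#
# Illustrative only: a sketch of switching to the persisted queue with an
# explicit size cap (the 4gb figure is an assumption; size it to the disk
# actually available under path.queue):
#
# queue.type: persisted
# queue.max_bytes: 4gb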
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
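#
# Events written to the dead letter queue can be reprocessed with the
# dead_letter_queue input plugin; a minimal sketch, assuming the default DLQ
# path under path.data and the main pipeline:
#
#   input {
#     dead_letter_queue {
#       path => "/var/lib/logstash/dead_letter_queue"
#       pipeline_id => "main"
#       commit_offsets => true
#     }
#   }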
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint; this option also accepts a range
# (9600-9700), and Logstash will pick up the first available port.
#
# http.port: 9600-9700
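#
# Once Logstash is running, the endpoint can be queried locally, e.g.:
#
#   curl -s 'http://127.0.0.1:9600/_node/stats/pipelines?pretty'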
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# Logs
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#xpack.monitoring.elasticsearch.url: http://192.168.10.196:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme

jvm.options

## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms512m
-Xmx512m
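
# Keep Xms and Xmx equal so the heap is not resized at runtime; for example,
# on a host with 8 GB of RAM dedicated to Logstash one might use (illustrative
# values, not a recommendation):
#
# -Xms4g
# -Xmx4g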

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# disable calls to System#gc
-XX:+DisableExplicitGC

## locale
# Set the locale language
#-Duser.language=en

# Set the locale country
#-Duser.country=US

# Set the locale variant, if any
#-Duser.variant=

## basic

# set the I/O temp directory
#-Djava.io.tmpdir=$HOME

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
#-Djna.nosys=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof

## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
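
# After a restart, the effective JVM settings can be checked through the
# Logstash node info API (port 9600 assumes the default metrics settings in
# logstash.yml above):
#
#   curl -s 'http://127.0.0.1:9600/_node/jvm?pretty'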
