Reposted from: https://dongbo0737.github.io/2017/06/13/logstash-config/

Logstash configuration files explained

In an ELK stack, Logstash is the component dedicated to receiving and processing data, and it scales out well across nodes.

Official docs: https://www.elastic.co/guide/en/logstash/5.2/index.html

Installing Logstash is straightforward: download the tar archive and extract it.

logstash.yml

# Annotated settings file; the original English comments are authoritative
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
#   batch:
#     size: 125
#     delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# Node name
node.name: dev211133
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# Data storage path
path.data: /data/logstash/data
#
# ------------ Pipeline Settings --------------
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# Number of pipeline workers (raise this to improve filter/output throughput)
pipeline.workers: 8
#
# How many workers should be used per output plugin instance
#
# Number of workers per output plugin instance
# pipeline.output.workers: 1
#
# How many events to retrieve from inputs before sending to filters+workers
#
# Number of events fetched from inputs per batch
pipeline.batch.size: 4000
#
# How long to wait before dispatching an undersized batch to filters+workers
# Value is in milliseconds.
#
# pipeline.batch.delay: 5
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# Directory of the pipeline (filter) configuration files
path.config: /opt/logstash/config/conf.d
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# Automatically reload the configuration when it is modified
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# How often to check the configuration files for changes
# config.reload.interval: 3
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# Enable debug logging of the compiled configuration
# config.debug: false
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# Bind address for metrics collection
http.host: "192.168.211.133"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# Bind port
http.port: 5000-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# Log level and log path; if config.debug is enabled, this must be set to debug
log.level: info
path.logs: /data/logstash/logs
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# Custom plugin path
# path.plugins: []

startup.options

################################################################################
# Startup configuration; the original English comments are authoritative.
#
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash. It should automagically use the init system
# (systemd, upstart, sysv, etc.) that your Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

# Override Java location (local JDK)
JAVACMD=/usr/bin/java

# Set a home directory (Logstash installation directory)
LS_HOME=/opt/logstash

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR="${LS_HOME}/config"

# Arguments to pass to logstash: point it at the settings directory
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java (JVM options)
#LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
# Location of logstash.pid
LS_PIDFILE=/var/run/logstash.pid

# User and group to be invoked as
LS_USER=logstash
LS_GROUP=logstash

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options. Path of the Logstash JVM GC log:
LS_GC_LOG_FILE=/var/log/logstash/gc.log

# Open file limit (maximum number of files Logstash may hold open)
LS_OPEN_FILES=65534

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM

Built-in plugin configuration for data processing

Logstash loads the configuration files sequentially, in filename order, which is why they carry numeric prefixes.
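Because the files in path.config are concatenated in lexical (string) order, the numeric prefixes below fully determine which block runs first. A minimal sketch of that ordering, using the filenames from the sections that follow:

```ruby
# Logstash concatenates every file in path.config in lexical order,
# so numeric prefixes control the load order of inputs, filters, and outputs.
files = [
  "300-elasticsearch-output.conf",
  "210-webaccess-filter.conf",
  "100-input.conf",
  "299-common-filter.conf",
  "200-initialize-filter.conf",
  "211-audit-filter.conf",
]
puts files.sort
# The input file sorts first and the output file sorts last.
```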

100-input.conf

The input plugin

input {
  kafka {
    bootstrap_servers => "172.25.156.113:9092,172.25.156.114:9092,172.25.156.115:9092"
    group_id => "clio-consr-weba-go1"  # consumer group
    topics => ["test-webaccess"]       # topic list
    session_timeout_ms => "60000"      # session timeout
    request_timeout_ms => "180000"     # request timeout
    max_poll_records => "500"
    check_crcs => "true"
    codec => "json"
    decorate_events => true            # add Kafka metadata to each event
    consumer_threads => 3              # consumer threads; sized so that threads * number of Logstash servers equals the topic's partition count
    add_field => {
      "processor_host" => "172.25.156.74"
    }
  }
}
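The consumer_threads comment above encodes a sizing rule: every Kafka partition should be consumed by exactly one thread, so the threads across all Logstash instances should add up to the partition count. A sketch of that arithmetic (the instance count of 2 is a hypothetical value, not from the original post):

```ruby
# Sizing rule: consumer_threads * number_of_logstash_instances should
# equal the Kafka topic's partition count, so that each partition is
# consumed by exactly one thread and no thread sits idle.
consumer_threads = 3   # per Logstash instance, as configured above
instances        = 2   # hypothetical number of Logstash servers
partitions = consumer_threads * instances
puts partitions        # the topic should be created with this many partitions
```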

200-initialize-filter.conf

Initialization filters

filter {
  mutate {
    add_tag => [ "invalid" ]                        # tag every event "invalid" up front
    add_field => [ "receive" , "%{[@timestamp]}" ]  # copy the @timestamp carried over from Filebeat into "receive"
  }
  ruby {
    init => "require 'time'"
    # Compute the lag: the time the event enters Logstash minus the time Filebeat read the log line
    code => "event.set('processor_timestamp' , Time.now());event.set('lag' , Time.now().to_i-event.get('@timestamp').to_i)"
  }
  # Inspect kafka.topic: if the app is "audit", rewrite the topic to "test-audit";
  # the Elasticsearch output later derives the index name from the topic.
  if [kafka][topic] == "test-business" {
    if [app] == "audit" {
      mutate {
        update => { "[kafka][topic]" => "test-audit" }
      }
    } else {
      mutate {
        remove_tag => [ "invalid" ]  # drop the "invalid" tag
      }
    }
  }
  if [kafka][topic] == "system-logstash" {
    mutate {
      remove_tag => [ "invalid" ]
    }
  }
}
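The ruby filter's lag computation, extracted into a standalone sketch with fixed timestamps (both values are hypothetical) so the arithmetic is visible:

```ruby
require 'time'

# Same arithmetic as the ruby filter above: lag is the wall-clock time at
# which Logstash processes the event minus the event's @timestamp (the
# moment Filebeat read the log line), in whole seconds.
event_timestamp = Time.parse("2017-06-13T10:00:00Z")  # @timestamp from Filebeat
processor_time  = Time.parse("2017-06-13T10:00:07Z")  # stand-in for Time.now
lag = processor_time.to_i - event_timestamp.to_i
puts lag  # => 7
```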

210-webaccess-filter.conf

Apache access log processing

# Apache log filtering
filter {
  if [kafka][topic] == "test-webaccess" {
    grok {
      # Parse the message according to the Apache log format
      match => { "message" => "\"%{DATA:xforward}\" %{COMBINEDAPACHELOG} (?:%{NUMBER:duration:float}) (?:%{DATA:domain}) \"(?:%{DATA:protocol}|)\" \"(?:%{DATA:rawurlpath})\" \"(?:%{DATA:rawurlquery}|)\" (?:%{DATA:method}) (?:%{NUMBER:ibytes:int}) (?:%{NUMBER:obytes:int}) \"(?:%{DATA:uleck}|)\"" }
    }
    date {
      # Parse the timestamp
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    mutate {
      # Remove the timestamp field and the "invalid" tag
      remove_field => [ "timestamp" ]
      remove_tag => [ "invalid" ]
    }
  }
}
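The date filter's pattern dd/MMM/yyyy:HH:mm:ss Z is a Joda-style format for the standard Apache timestamp; its strptime equivalent is %d/%b/%Y:%H:%M:%S %z. A sketch with a hypothetical log timestamp:

```ruby
require 'time'

# Joda pattern "dd/MMM/yyyy:HH:mm:ss Z" maps to strptime
# "%d/%b/%Y:%H:%M:%S %z". A hypothetical Apache access-log timestamp:
raw = "13/Jun/2017:10:00:00 +0800"
parsed = Time.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
puts parsed.utc  # 02:00 UTC, since the original offset is +08:00
```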

211-audit-filter.conf

Audit log processing

filter {
  if [kafka][topic] == "test-audit" {
    grok {
      # Extract the timestamp from the log message
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}" }
    }
    date {
      # Parse the timestamp
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" , "ISO8601" ]
    }
    mutate {
      # Drop the "invalid" tag
      remove_tag => [ "invalid" ]
    }
  }
}

299-common-filter.conf

Common filtering applied after the per-topic processing

filter {
  # If a browser agent field is present, parse it
  if [agent] and [agent] != "-" and [agent] != "" {
    useragent {
      source => "agent"
      prefix => "UA-"
    }
  }
  # Remove the timestamp field if it is still present
  if [timestamp] {
    mutate {
      remove_field => ["timestamp"]
    }
  }
}

300-elasticsearch-output.conf

The output plugin. Output usually goes to Elasticsearch, but Redis or Kafka are also possible; see the official docs for details.

output {
  # Do not ship events tagged "invalid" to Elasticsearch
  if "invalid" not in [tags] {
    # For the system-logstash topic, roll the index monthly
    if [kafka][topic] == "system-logstash" {
      elasticsearch {
        hosts => ["172.25.156.62:9202","172.25.156.66:9202"]  # Elasticsearch servers
        index => "%{[kafka][topic]}-%{+YYYY.MM}"              # index name pattern
        document_type => "%{[type]}"                          # document type
        flush_size => 5000                                    # buffer size
      }
    } else {
      # Roll the index daily
      elasticsearch {
        hosts => ["172.25.156.62:9202","172.25.156.66:9202"]
        index => "%{[kafka][topic]}-%{+YYYY.MM.dd}"
        document_type => "%{[type]}"
        flush_size => 5000
      }
    }
  }
}
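The %{+YYYY.MM} and %{+YYYY.MM.dd} sprintf references expand against the event's @timestamp using Joda date patterns, which is how the topic name becomes a time-rolled index. A sketch of the resulting index names (the event date is hypothetical):

```ruby
# Logstash expands %{+YYYY.MM} / %{+YYYY.MM.dd} against the event's
# @timestamp; the strftime equivalents are "%Y.%m" and "%Y.%m.%d".
event_time = Time.utc(2017, 6, 13)  # hypothetical event @timestamp
monthly_index = "system-logstash-#{event_time.strftime('%Y.%m')}"
daily_index   = "test-webaccess-#{event_time.strftime('%Y.%m.%d')}"
puts monthly_index  # => system-logstash-2017.06
puts daily_index    # => test-webaccess-2017.06.13
```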
