Logstash json and rubydebug codecs: every restart of Logstash re-reads the entire log instead of only newly appended content
Take a look at the shipper configuration on the agent side:
# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        #codec => rubydebug
        codec => json
    }
}
# This test is mainly about checking what the json-formatted output looks like.
First run a quick syntax check on the freshly configured shipper:
# ./../bin/logstash -f logstash_test2.shipper.conf -t
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
Configuration OK
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
No errors were reported, so next start Logstash with the config file we just prepared:
[root@Appsrv130 conf]# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.102Z","@version":"1","host":"ofs1","message":"haha------>","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.113Z","@version":"1","host":"ofs1","message":"haha------>2","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.118Z","@version":"1","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.121Z","@version":"1","host":"ofs1","message":"haha------>3","tags":[]}
Now look at the contents of the monitored log file:
# cat test/test2_log.txt
haha------>
haha------>2
haha------>3
haha------>3
Notice that every time this shipper starts, it reads the monitored log file again from beginning to end (this behavior is driven by the config file).
Look at the config file once more:
# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        #codec => rubydebug
        codec => json
    }
}
# The key is the line sincedb_path => "/dev/null"! This parameter names the sincedb file, in which the file input records how far into each file it has read. If we point it at /dev/null, the special null device on Linux, then every time the Logstash process restarts and tries to read the sincedb contents, it gets nothing back. Logstash treats that as having no record of a previous run, so it naturally starts reading from the initial position again. With start_position => "beginning", that initial position is the start of the file.
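If the goal is the opposite, reading the whole file once on the first run but only new lines after every restart, point sincedb_path at a real writable file instead. A minimal sketch; the path /apps/logstash/data/sincedb_test2 is an assumed example, any location the Logstash user can write to works:
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"                         # only honored when the sincedb has no entry for this file yet
        sincedb_path => "/apps/logstash/data/sincedb_test2"   # persistent read-position record (assumed path)
    }
}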
Now write a line into the monitored file and see what changes:
# echo "查看json格式是什么输出-------》">>test/test2_log.txt
Restart Logstash and look at the output again:
[root@Appsrv130 conf]# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-08T18:xx:xx,xxx][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.102Z","@version":"1","host":"ofs1","message":"haha------>","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.113Z","@version":"1","host":"ofs1","message":"haha------>2","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.118Z","@version":"1","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T10:19:13.121Z","@version":"1","host":"ofs1","message":"haha------>3","tags":[]}{"path":"/apps/logstash/conf/test/test2_log.txt","@timestamp":"2016-12-08T11:17:45.060Z","@version":"1","host":"ofs1","message":"查看json格式是什么输出-------》","tags":[]}
Note that all four old lines were read again in full, followed by the newly appended one: the restart really did re-read everything. Next, modify the config file to switch the codec to rubydebug:
# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        codec => rubydebug #view the log output in this format
        #codec => json
    }
}
Append another test line, then restart Logstash and check the output:
# echo "查看rubydebug格式是什么输出-------》">>test/test2_log.txt
# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-08T19:xx:xx,xxx][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[2016-12-08T19:xx:xx,xxx][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-08T19:xx:xx,xxx][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.290Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "haha------>",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.299Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "haha------>2",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.301Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "haha------>3",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.302Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "haha------>3",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.303Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "查看json格式是什么输出-------》",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-08T11:xx:xx.415Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "查看rubydebug格式是什么输出-------》",
          "tags" => []
}
Now comment out those two parameters and see the effect. Without them the file input falls back to its defaults: start_position defaults to "end", and the read position is recorded in a real sincedb file (in this Logstash version, under the home directory of the user running Logstash), so the position survives restarts:
# cat logstash_test2.shipper.conf
input {
    file {
        path => ["/apps/logstash/conf/test/test2_log.txt"]
        #start_position => "beginning"
        #sincedb_path => "/dev/null"
    }
}
output {
    stdout {
        codec => rubydebug
        #codec => json
    }
}
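With the defaults in play, the bookkeeping lives in a real sincedb file. A hedged way to peek at it (the exact file name is hash-suffixed and install-specific; newer Logstash versions keep it under path.data instead of the home directory):
# ls -la ~/.sincedb_*
# cat ~/.sincedb_*
Each line of the sincedb records the watched file's inode, major and minor device numbers, and the byte offset read so far.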
Start Logstash; this time none of the old lines are re-read at startup. The effect can then be driven from another shell:
# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
First, from the other shell, append some data:
echo '去掉参数start_position => "beginning" sincedb_path => "/dev/null"' >>test/test2_log.txt
Now look at the effect; only the newly appended line is emitted:
# ./../bin/logstash -f logstash_test2.shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>, "pipeline.max_inflight"=>}
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-09T13:xx:xx,xxx][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>}
{
          "path" => "/apps/logstash/conf/test/test2_log.txt",
    "@timestamp" => 2016-12-09T05:xx:xx.155Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "去掉参数start_position => \"beginning\" sincedb_path => \"/dev/null\"",
          "tags" => []
}
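This is the behavior one usually wants: on each restart the file input consults the persisted sincedb, finds the recorded byte offset, and only emits lines appended after it. A quick way to convince yourself, reusing the same config (the comments describe expected behavior, not captured output):
# ./../bin/logstash -f logstash_test2.shipper.conf
# expected: the old haha------> lines are NOT printed again;
# only lines appended to test2_log.txt after the recorded offset show up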