ELK Quick Start (2): Collecting Logs with Logstash
Note
This continues from the environment set up in the earlier article, "ELK Quick Start (1): Basic Deployment".
Collecting multiple log files
1) Write the logstash configuration file
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/system-log.conf
input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"    # stat interval in seconds (value assumed)
  }
  file {
    path => "/var/log/secure"
    type => "securelog"
    start_position => "beginning"
    stat_interval => "3"    # stat interval in seconds (value assumed)
  }
}
output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.1.31:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "securelog" {
    elasticsearch {
      hosts => ["192.168.1.31:9200"]
      index => "secure-log-%{+YYYY.MM.dd}"
    }
  }
}
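Optionally, the file can be syntax-checked first with logstash's -t (config test) flag, the same check used in the later sections of this article:
[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system-log.conf -t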
2) Grant read permission on the log files (mode 644, world-readable, is used here) and restart logstash
[root@linux-elk1 ~]# chmod 644 /var/log/secure
[root@linux-elk1 ~]# chmod 644 /var/log/messages
[root@linux-elk1 ~]# systemctl restart logstash
3) Write some data into the collected files so that it shows up immediately in the elasticsearch web UI and the kibana web UI.
[root@linux-elk1 ~]# echo "test" >> /var/log/secure
[root@linux-elk1 ~]# echo "test" >> /var/log/messages
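To confirm from the shell that both indices were created, elasticsearch's _cat API can be queried directly (a quick check, assuming curl is available on the node):
[root@linux-elk1 ~]# curl -s 'http://192.168.1.31:9200/_cat/indices?v' | grep -E 'system-log|secure-log'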
4) Add the system-log index pattern in kibana
5) Add the secure-log index pattern in kibana
6) View the logs in kibana
Collecting tomcat and java logs
Collect the Tomcat server's access log and error log for real-time statistics, and search and display them on the kibana page. Every Tomcat server needs logstash installed to collect its logs and ship them to elasticsearch for analysis; kibana then displays them in the front end.
Deploying the tomcat service
Note: tomcat is installed on the linux-elk2 node here.
1) Download and install tomcat
[root@linux-elk2 ~]# cd /usr/local/
[root@linux-elk2 local]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.21/bin/apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# tar xvzf apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# ln -s /usr/local/apache-tomcat-9.0.21 /usr/local/tomcat
2) Prepare a test page
[root@linux-elk2 local]# cd /usr/local/tomcat/webapps/
[root@linux-elk2 webapps]# mkdir webdir
[root@linux-elk2 webapps]# echo "<h1>Welcome to Tomcat</h1>" > /usr/local/tomcat/webapps/webdir/index.html
3) Convert the tomcat access log to JSON
[root@linux-elk2 tomcat]# vim /usr/local/tomcat/conf/server.xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="{"clientip":"%h","ClientUser":"%l","authenticated":"%u","AccessTime":"%t","method":"%r","status":"%s","SendBytes":"%b","Query?string":"%q","partner":"%{Referer}i","AgentVersion":"%{User-Agent}i"}"/>
4) Start tomcat and run an access test to generate logs
[root@linux-elk2 tomcat]# /usr/local/tomcat/bin/startup.sh
[root@linux-elk2 tomcat]# ss -nlt | grep 8080
LISTEN 0 100 :::8080 :::*
[root@linux-elk2 tomcat]# ab -n100 -c100 http://192.168.1.32:8080/webdir/
[root@linux-elk2 ~]# tailf /usr/local/tomcat/logs/localhost_access_log.2019-07-05.log
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
5) Verify that the log is valid JSON, for example with http://www.kjson.com/
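The check can also be done locally instead of through the website; this minimal sketch assumes python is available on the node:
[root@linux-elk2 ~]# tail -1 /usr/local/tomcat/logs/localhost_access_log.*.log | python -m json.tool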
Configuring logstash to collect tomcat logs
Note: to collect tomcat logs from other servers, logstash must be installed on every server being collected. Here tomcat is deployed on the linux-elk2 node, where logstash was already installed earlier.
1) Configure logstash
[root@linux-elk2 ~]# vim /etc/logstash/conf.d/tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/localhost_access_log.*.log"
    type => "tomcat-access-log"
    start_position => "beginning"
    stat_interval => "3"    # stat interval in seconds (value assumed)
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.31:9200"]
    index => "logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
  }
  file {
    path => "/tmp/logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
  }
}
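The extra file output above writes a duplicate copy of every event under /tmp, which is handy for debugging; once logstash is running, it can be watched directly (the file name carries a %{+YYYY.MM.dd} date suffix):
[root@linux-elk2 ~]# tail -f /tmp/logstash-tomcat-132-accesslog-*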
2) Test the configuration file syntax, then restart logstash
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::34.583 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[root@linux-elk2 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
[root@linux-elk2 ~]# systemctl start logstash
3) Fix the permissions; otherwise the data cannot be viewed in the elasticsearch and kibana UIs
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/ -d
drwxr-xr-x root root Jul /usr/local/tomcat/logs/
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total
-rw-r----- root root Jul catalina.--.log
-rw-r----- root root Jul catalina.out
-rw-r----- root root Jul host-manager.--.log
-rw-r----- root root Jul localhost.--.log
-rw-r----- root root Jul localhost_access_log.--.log
-rw-r----- root root Jul manager.--.log
[root@linux-elk2 ~]# chown logstash.logstash /usr/local/tomcat/logs/ -R
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total
-rw-r----- logstash logstash Jul catalina.--.log
-rw-r----- logstash logstash Jul catalina.out
-rw-r----- logstash logstash Jul host-manager.--.log
-rw-r----- logstash logstash Jul localhost.--.log
-rw-r----- logstash logstash Jul localhost_access_log.--.log
-rw-r----- logstash logstash Jul manager.--.log
4) Open the elasticsearch UI to verify the data (Data Browse view)
5) Add the index pattern in kibana
6) Verify the data in kibana
Configuring logstash to collect java logs
Multi-line matching is done with the multiline codec plugin, which merges multiple lines into a single event; its what option specifies whether a matched line is merged with the preceding lines or the following ones: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
Syntax:
input {
  stdin {
    codec => multiline {
      pattern => "^\["     # merge lines whenever a new line starting with "[" is seen
      negate => true       # true acts on lines that match the pattern; false on lines that do not
      what => "previous"   # merge with the preceding lines; "next" would merge with the following ones
    }
  }
}
Test input and output on the command line:
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout { codec => rubydebug }}'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::04.938 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] -- ::04.968 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] -- ::19.167 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
[INFO ] -- ::19.918 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xc8dd9a1 run>"}
The stdin plugin is now waiting for input:
[12
111111
222222
aaaaaa
[44444
{
    "@timestamp" => --08T07::.063Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "1",
       "message" => "[12\n111111\n222222\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}
444444
aaaaaa
[
{
    "@timestamp" => --08T07::.522Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "1",
       "message" => "[44444\n444444\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}
Example: collecting the ELK cluster log
1) Examine the log file: every entry in the ELK cluster log begins with "[", without exception.
[root@linux-elk2 ~]# tailf /elk/logs/ELK-Cluster.log
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][young][][] duration [.3s], collections []/[.7s], total [.3s]/[4s], memory [176mb]->[.6mb]/[.9gb], all_pools {[young] [.8mb]->[.4kb]/[.5mb]}{[survivor] [.3mb]->[3mb]/[.3mb]}{[old] [.8mb]->[.8mb]/[.9gb]}
[--08T11::,][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][] overhead, spent [.3s] collecting in the last [.7s]
[--08T11::,][INFO ][o.e.x.m.p.NativeController] [elk-node2] Native controller process has stopped - no new native processes can be started
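A quick hedged sanity check that the multiline pattern fits this file: count the lines beginning with "[" against the total line count; the two numbers should be close, and the difference corresponds to multi-line events such as Java stack traces:
[root@linux-elk2 ~]# grep -c '^\[' /elk/logs/ELK-Cluster.log
[root@linux-elk2 ~]# wc -l < /elk/logs/ELK-Cluster.log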
2) Configure logstash
[root@linux-elk2 ~]# vim /etc/logstash/conf.d/java.conf
input {
  file {
    path => "/elk/logs/ELK-Cluster.log"
    type => "java-elk-cluster-log"
    start_position => "beginning"
    stat_interval => "3"    # stat interval in seconds (value assumed)
    codec => multiline {
      pattern => "^\["      # match lines that start with "["
      negate => "true"      # act on lines that match the pattern
      what => "previous"    # merge with the preceding lines; "next" would merge with the following ones
    }
  }
}
output {
  if [type] == "java-elk-cluster-log" {
    elasticsearch {
      hosts => ["192.168.1.31:9200"]
      index => "java-elk-cluster-log-%{+YYYY.MM.dd}"
    }
  }
}
3) Check the configuration file syntax and restart logstash
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::51.996 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] -- ::04.438 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@linux-elk2 ~]# systemctl restart logstash
4) Verify the data in the elasticsearch UI
5) Add the index pattern in kibana
6) Verify the data in kibana
Collecting the Nginx access log
Collect nginx's JSON access log. For this test, nginx and logstash were installed on a new server.
1) Install nginx and prepare a test page
[root@node01 ~]# yum -y install nginx
[root@node01 ~]# echo "<h1>whelcom to nginx server</h1>" > /usr/share/nginx/html/index.html
[root@node01 ~]# systemctl start nginx
[root@node01 ~]# curl localhost
<h1>welcome to nginx server</h1>
2) Convert the nginx log to JSON format
[root@node01 ~]# vim /etc/nginx/nginx.conf
log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"url":"$uri",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"status":"$status"}'; access_log /var/log/nginx/access.log access_json; [root@node01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node01 ~]# systemctl restart nginx
3) Make one request and confirm the log is in JSON format
[root@node01 ~]# tail /var/log/nginx/access.log
{"@timestamp":"2019-07-09T11:21:28+08:00","host":"192.168.1.30","clientip":"192.168.1.144","size":,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.30","url":"/index.html","domain":"192.168.1.30","xff":"-","referer":"-","status":""}
4) Install logstash and configure it to collect the nginx log
# Copy the logstash package to the nginx server
[root@linux-elk1 ~]# scp logstash-6.8.1.rpm 192.168.1.30:/root/
# Install logstash
[root@node01 ~]# yum -y localinstall logstash-6.8.1.rpm
# Generate the logstash.service unit file
[root@node01 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
# Change the logstash run user to root; otherwise logs may fail to be collected
[root@node01 ~]# vim /etc/systemd/system/logstash.service
User=root
Group=root
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-accesslog"
    start_position => "beginning"
    stat_interval => "3"    # stat interval in seconds (value assumed)
    codec => json
  }
}
output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.1.31:9200"]
      index => "logstash-nginx-accesslog-30-%{+YYYY.MM.dd}"
    }
  }
}
5) Check the configuration file syntax and restart logstash
[root@node01 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::04.277 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] -- ::09.055 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@node01 ~]# systemctl restart logstash
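After the restart it is worth confirming that logstash really runs as root now (the reason for the unit-file change above), and generating some traffic so the new index receives data; a hedged sketch:
[root@node01 ~]# ps -eo user,args | grep '[l]ogstash'
[root@node01 ~]# for i in $(seq 1 10); do curl -s http://192.168.1.30/ > /dev/null; done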
6) Add the index pattern in kibana
7) Verify the data in kibana; adding filters makes the logs much easier to read at a glance
Collecting TCP/UDP logs
Collect logs through logstash's tcp/udp input plugins. This is usually used to backfill logs that are missing from elasticsearch: the lost entries can be written directly to the elasticsearch server through a TCP port.
Collection test
1) Configure logstash
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
  tcp {
    port => 9889    # port taken from the listener log below
    type => "tcplog"
    mode => "server"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
2) Verify that the port is listening
[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::07.538 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] -- ::07.551 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] -- ::14.416 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
[INFO ] -- ::14.885 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x240c27a6 sleep>"}
[INFO ] -- ::14.911 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:9889", :ssl_enable=>"false"}
[INFO ] -- ::14.953 [Ruby--Thread-: /usr/share/logstash/lib/bootstrap/environment.rb:] agent - Pipelines running {:count=>, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] -- ::15.223 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>}
# Open a new terminal to verify the port
[root@linux-elk1 ~]# netstat -nlutp | grep 9889
tcp6 0 0 :::9889 :::* LISTEN /java
3) Test from another server with the nc command and check whether logstash receives the data
# echo "nc test" | nc 192.168.1.31 #在另外一台服务器上执行 # 在上面启动logstash的那个终端查看
{
"message" => "nc test",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.139Z,
"port" =>
}
4) Send a file with nc and check the data logstash receives
# nc 192.168.1.31 9889 < /etc/passwd    # run on the same server that ran nc above
# Then check the terminal where logstash was started:
{
"message" => "mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.186Z,
"port" =>
}
{
"message" => "logstash:x:989:984:logstash:/usr/share/logstash:/sbin/nologin",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.187Z,
"port" =>
}
5) Send messages through a pseudo-device:
In Unix-like operating systems, a device node does not have to correspond to a physical device; a device without such a correspondence is a pseudo-device. The operating system uses them to provide a variety of functions, and tcp is just one of the many pseudo-devices under /dev.
# echo "伪设备" >/dev/tcp/192.168.1.31/ #同样在上面执行nc那台服务器上执行 # 同样还是在上面启动logstash的那个终端查看
{
"message" => "伪设备",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.487Z,
"port" =>
}
6) Change the output to elasticsearch
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
  tcp {
    port => 9889
    type => "tcplog"
    mode => "server"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.31:9200"]
    index => "logstash-tcp-log-%{+YYYY.MM.dd}"
  }
}
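As with the earlier configurations, test the modified file and restart logstash before sending data (a hedged repeat of the same workflow):
[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf -t
[root@linux-elk1 ~]# systemctl restart logstash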
7) Feed logs in with nc or the pseudo-device
# echo "伪设备 1" >/dev/tcp/192.168.1.31/
# echo "伪设备 2" >/dev/tcp/192.168.1.31/
8) Create the index pattern in kibana
9) Verify the data
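The new index can also be checked from the shell before switching to kibana (a hedged check, assuming curl is available):
[root@linux-elk1 ~]# curl -s 'http://192.168.1.31:9200/_cat/indices?v' | grep tcp-log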