ELK Quick Start (2): Collecting Logs with Logstash

Note

This environment continues from the previous article, ELK Quick Start (1): Basic Deployment.

Collecting Multiple Log Files

1) Write the logstash configuration file

[root@linux-elk1 ~]# vim /etc/logstash/conf.d/system-log.conf
input {
    file {
        path => "/var/log/messages"
        type => "systemlog"
        start_position => "beginning"
        stat_interval => ""
    }
    file {
        path => "/var/log/secure"
        type => "securelog"
        start_position => "beginning"
        stat_interval => ""
    }
}
output {
    if [type] == "systemlog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "system-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "securelog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "secure-log-%{+YYYY.MM.dd}"
        }
    }
}
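
The `if [type]` conditionals route each event to its own index based on the `type` set by the matching file input. Once logstash is running, a quick sanity check (a sketch; assumes elasticsearch listens on 192.168.1.31:9200 as configured above) is to list the indices over the REST API:

[root@linux-elk1 ~]# curl -s 'http://192.168.1.31:9200/_cat/indices?v' | grep -E 'system-log|secure-log'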

2) Grant read permission on the log files and restart logstash

[root@linux-elk1 ~]# chmod 644 /var/log/secure
[root@linux-elk1 ~]# chmod 644 /var/log/messages
[root@linux-elk1 ~]# systemctl restart logstash
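
The `-t` flag validates a configuration without starting the pipeline; it is the same check this article uses for the later configs, and it is worth running before any restart to catch syntax errors early:

[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system-log.conf -t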

3) Write some data into the collected files, so the data can be seen immediately in the elasticsearch and kibana web interfaces.

[root@linux-elk1 ~]# echo "test" >> /var/log/secure
[root@linux-elk1 ~]# echo "test" >> /var/log/messages

4) Add the system-log index pattern in kibana

5) Add the secure-log index pattern in kibana

6) View the logs in kibana

Collecting Tomcat and Java Logs

Collect the Tomcat server's access log and error log for real-time statistics, searchable and displayed on the kibana page. Each Tomcat server needs logstash installed to collect its logs and ship them to elasticsearch for analysis; kibana then displays them on the front end.

Deploying the Tomcat Service

Note: here Tomcat is installed on the linux-elk2 node.

1) Download and install Tomcat

[root@linux-elk2 ~]# cd /usr/local/
[root@linux-elk2 local]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.21/bin/apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# tar xvzf apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# ln -s /usr/local/apache-tomcat-9.0.21 /usr/local/tomcat

2) Prepare a test page

[root@linux-elk2 local]# cd /usr/local/tomcat/webapps/
[root@linux-elk2 webapps]# mkdir webdir
[root@linux-elk2 webapps]# echo "<h1>Welcome to Tomcat</h1>" > /usr/local/tomcat/webapps/webdir/index.html

3) Convert the Tomcat access log to JSON

[root@linux-elk2 tomcat]# vim /usr/local/tomcat/conf/server.xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>

4) Start Tomcat and make test requests to generate logs

[root@linux-elk2 tomcat]# /usr/local/tomcat/bin/startup.sh
[root@linux-elk2 tomcat]# ss -nlt | grep 8080
LISTEN     0      100       :::8080                    :::*
[root@linux-elk2 tomcat]# ab -n100 -c100 http://192.168.1.32:8080/webdir/
[root@linux-elk2 ~]# tailf /usr/local/tomcat/logs/localhost_access_log.2019-07-05.log
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"","SendBytes":"","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}

5) Verify that the log is valid JSON, e.g. at http://www.kjson.com/
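
A local check works as well (a sketch; assumes python is present, as on most CentOS 7 hosts), since json.tool exits with an error on invalid JSON:

[root@linux-elk2 tomcat]# tail -n 1 /usr/local/tomcat/logs/localhost_access_log.2019-07-05.log | python -m json.tool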

Configuring logstash to Collect Tomcat Logs

Note: to collect Tomcat logs from other servers, logstash must be installed on every server you want to collect from. Here Tomcat is deployed on the linux-elk2 node, which already has logstash installed.

1) Configure logstash

[root@linux-elk2 ~]# vim /etc/logstash/conf.d/tomcat.conf
input {
    file {
        path => "/usr/local/tomcat/logs/localhost_access_log.*.log"
        type => "tomcat-access-log"
        start_position => "beginning"
        stat_interval => ""
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.31:9200"]
        index => "logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
    }
    file {
        path => "/tmp/logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
    }
}

2) Check the configuration file syntax and restart logstash

[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::34.583 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[root@linux-elk2 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
[root@linux-elk2 ~]# systemctl start logstash

3) Fix permissions; otherwise the data cannot be seen in the elasticsearch and kibana interfaces

[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/ -d
drwxr-xr-x root root Jul : /usr/local/tomcat/logs/
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total
-rw-r----- root root Jul : catalina.--.log
-rw-r----- root root Jul : catalina.out
-rw-r----- root root Jul : host-manager.--.log
-rw-r----- root root Jul : localhost.--.log
-rw-r----- root root Jul : localhost_access_log.--.log
-rw-r----- root root Jul : manager.--.log
[root@linux-elk2 ~]# chown logstash.logstash /usr/local/tomcat/logs/ -R
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total
-rw-r----- logstash logstash Jul : catalina.--.log
-rw-r----- logstash logstash Jul : catalina.out
-rw-r----- logstash logstash Jul : host-manager.--.log
-rw-r----- logstash logstash Jul : localhost.--.log
-rw-r----- logstash logstash Jul : localhost_access_log.--.log
-rw-r----- logstash logstash Jul : manager.--.log
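
To confirm the fix took effect, read a log file as the logstash user and look for the file output defined in tomcat.conf (a sketch; the date suffixes follow the current day):

[root@linux-elk2 ~]# sudo -u logstash head -n 1 /usr/local/tomcat/logs/localhost_access_log.2019-07-05.log
[root@linux-elk2 ~]# ls -l /tmp/logstash-tomcat-132-accesslog-*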

4) Verify the data in the elasticsearch head-plugin interface

Data browsing

5) Add the index pattern in kibana

6) Verify the data in kibana

Configuring logstash to Collect Java Logs

Use the codec multiline plugin to implement multi-line matching. It merges multiple lines into a single event, and the what option specifies whether a matched line is merged with the lines before it or after it: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

Syntax:

input {
    stdin {
        codec => multiline {
            pattern => "^\["     # merge lines into one event until a line starting with "[" is seen
            negate => true       # true: act on lines that do NOT match the pattern; false: act on lines that do
            what => "previous"   # merge with the previous line; use "next" to merge with the following line
        }
    }
}
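
To make the semantics concrete, consider a hypothetical Java-style log fed through this codec. The stack-trace lines do not start with "[", so they fail the pattern; because negate is true they are the lines acted on, and what => "previous" appends each of them to the entry above:

[2019-07-08 11:00:00] ERROR request failed
java.lang.NullPointerException
        at com.example.Demo.run(Demo.java:42)
[2019-07-08 11:00:05] INFO next entry

The first three lines become a single event; the fourth line starts a new one.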

Command-line test of input and output:

[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout { codec => rubydebug }}'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::04.938 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] -- ::04.968 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] -- ::19.167 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
[INFO ] -- ::19.918 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xc8dd9a1 run>"}
The stdin plugin is now waiting for input:
[12
111111
222222
aaaaaa
[44444
{
    "@timestamp" => --08T07::.063Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "",
       "message" => "[12\n111111\n222222\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}
444444
aaaaaa
[
{
    "@timestamp" => --08T07::.522Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "",
       "message" => "[44444\n444444\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}

Example: Collecting ELK Cluster Logs

1) Inspect the log file: every message in the ELK cluster log starts with "[", without exception.

[root@linux-elk2 ~]# tailf /elk/logs/ELK-Cluster.log
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[--08T11::,][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[--08T11::,][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][young][][] duration [.3s], collections []/[.7s], total [.3s]/[4s], memory [176mb]->[.6mb]/[.9gb], all_pools {[young] [.8mb]->[.4kb]/[.5mb]}{[survivor] [.3mb]->[3mb]/[.3mb]}{[old] [.8mb]->[.8mb]/[.9gb]}
[--08T11::,][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][] overhead, spent [.3s] collecting in the last [.7s]
[--08T11::,][INFO ][o.e.x.m.p.NativeController] [elk-node2] Native controller process has stopped - no new native processes can be started

2) Configure logstash

[root@linux-elk2 ~]# vim /etc/logstash/conf.d/java.conf
input {
    file {
        path => "/elk/logs/ELK-Cluster.log"
        type => "java-elk-cluster-log"
        start_position => "beginning"
        stat_interval => ""
        codec => multiline {
            pattern => "^\["      # match lines that start with "["
            negate => "true"      # act on lines that do NOT match the pattern
            what => "previous"    # merge them into the previous line; use "next" to merge with the following line
        }
    }
}
output {
    if [type] == "java-elk-cluster-log" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "java-elk-cluster-log-%{+YYYY.MM.dd}"
        }
    }
}
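
After restarting logstash, a count query shows whether merged events are arriving (a sketch; the index name follows the pattern above with today's date):

[root@linux-elk2 ~]# curl -s 'http://192.168.1.31:9200/java-elk-cluster-log-*/_count?pretty'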

3) Check the configuration file syntax and restart logstash

[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::51.996 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] -- ::04.438 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@linux-elk2 ~]# systemctl restart logstash

4) Verify the data in the elasticsearch interface

5) Add the index pattern in kibana

6) Verify the data in kibana

Collecting Nginx Access Logs

Collect nginx access logs in JSON format. For testing, nginx and logstash are installed here on a new server.

1) Install nginx and prepare a test page

[root@node01 ~]# yum -y install nginx
[root@node01 ~]# echo "<h1>welcome to nginx server</h1>" > /usr/share/nginx/html/index.html
[root@node01 ~]# systemctl start nginx
[root@node01 ~]# curl localhost
<h1>welcome to nginx server</h1>

2) Convert the nginx log to JSON format

[root@node01 ~]# vim /etc/nginx/nginx.conf
log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"url":"$uri",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"status":"$status"}'; access_log /var/log/nginx/access.log access_json; [root@node01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node01 ~]# systemctl restart nginx

3) Make one request and confirm the log is in JSON format

[root@node01 ~]# tail /var/log/nginx/access.log
{"@timestamp":"2019-07-09T11:21:28+08:00","host":"192.168.1.30","clientip":"192.168.1.144","size":,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.30","url":"/index.html","domain":"192.168.1.30","xff":"-","referer":"-","status":""}

4) Install logstash and configure it to collect the nginx log

# Copy the logstash package to the nginx server
[root@linux-elk1 ~]# scp logstash-6.8.1.rpm 192.168.1.30:/root/
# Install logstash
[root@node01 ~]# yum -y localinstall logstash-6.8.1.rpm
# Generate the logstash.service unit file
[root@node01 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
# Change the user logstash runs as to root, otherwise some logs may not be collected
[root@node01 ~]# vim /etc/systemd/system/logstash.service
User=root
Group=root
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
    file {
        path => "/var/log/nginx/access.log"
        type => "nginx-accesslog"
        start_position => "beginning"
        stat_interval => ""
        codec => json
    }
}
output {
    if [type] == "nginx-accesslog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "logstash-nginx-accesslog-30-%{+YYYY.MM.dd}"
        }
    }
}
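
Because of codec => json, logstash parses each line into top-level fields (clientip, status, url, and so on) instead of a single opaque message string, which is what makes per-field filtering in kibana possible. One way to confirm (a sketch; assumes the index already exists) is to fetch a single document:

[root@node01 ~]# curl -s 'http://192.168.1.31:9200/logstash-nginx-accesslog-30-*/_search?size=1&pretty'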

5) Check the configuration file syntax and restart logstash

[root@node01 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::04.277 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] -- ::09.055 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@node01 ~]# systemctl restart logstash

6) Add the index pattern in kibana

7) Verify the data in kibana; adding filters makes the logs much easier to scan.

Collecting TCP/UDP Logs

Collect logs through the logstash tcp/udp input plugins. This is typically used to backfill logs that are missing from elasticsearch: the lost logs can be written straight to the elasticsearch server through a TCP port.
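
For example, once the TCP input below is running, a saved log file could be replayed through it like this (a sketch; the file name is hypothetical):

# nc 192.168.1.31 9889 < /var/log/messages-backup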

Collection test

1) logstash configuration

[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
    tcp {
        port => 9889
        type => "tcplog"
        mode => "server"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

2) Verify that the port started successfully

[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] -- ::07.538 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] -- ::07.551 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] -- ::14.416 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>, "pipeline.batch.size"=>, "pipeline.batch.delay"=>}
[INFO ] -- ::14.885 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x240c27a6 sleep>"}
[INFO ] -- ::14.911 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:9889", :ssl_enable=>"false"}
[INFO ] -- ::14.953 [Ruby--Thread-: /usr/share/logstash/lib/bootstrap/environment.rb:] agent - Pipelines running {:count=>, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] -- ::15.223 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>}

# Verify the port in a new terminal
[root@linux-elk1 ~]# netstat -nlutp | grep 9889
tcp6       0      0 :::9889                 :::*                    LISTEN      /java

3) Test with the nc command from another server and check whether logstash receives the data

# echo "nc test" | nc 192.168.1.31     #在另外一台服务器上执行

# View in the terminal where logstash was started above
{
"message" => "nc test",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.139Z,
"port" =>
}

4) Send a file with nc and check the data logstash receives

# nc 192.168.1.31 9889 < /etc/passwd    # run on the same server that ran nc above

# Again, view in the terminal where logstash was started above
{
"message" => "mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.186Z,
"port" =>
}
{
"message" => "logstash:x:989:984:logstash:/usr/share/logstash:/sbin/nologin",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.187Z,
"port" =>
}

5) Send messages via a pseudo-device:

On Unix-like operating systems, a device node does not necessarily correspond to a physical device; devices without such a correspondence are pseudo-devices. The operating system uses them to provide a variety of functions, and tcp is just one of many pseudo-devices under /dev.
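
One caveat before the command below: on Linux, /dev/tcp is implemented by bash itself rather than by a real device node, so the trick requires bash. The same mechanism doubles as a quick port probe (a sketch; it also sends one empty line into logstash):

# (echo > /dev/tcp/192.168.1.31/9889) && echo "port is open"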

# echo "伪设备" >/dev/tcp/192.168.1.31/    #同样在上面执行nc那台服务器上执行

# Again, view in the terminal where logstash was started above
{
"message" => "伪设备",
"host" => "192.168.1.30",
"type" => "tcplog",
"@version" => "",
"@timestamp" => --09T10::.487Z,
"port" =>
}

6) Change the output to elasticsearch

[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
    tcp {
        port => 9889
        type => "tcplog"
        mode => "server"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.31:9200"]
        index => "logstash-tcp-log-%{+YYYY.MM.dd}"
    }
}

7) Write logs via nc or the pseudo-device

# echo "伪设备 1" >/dev/tcp/192.168.1.31/
# echo "伪设备 2" >/dev/tcp/192.168.1.31/

8) Create the index pattern in kibana

9) Verify the data
