Official documentation

What is ELK?

  Simply put, ELK is a combination of three open-source tools: Elasticsearch, Logstash, and Kibana, each responsible for a different part of the job. The combination is also known as the ELK stack, and the official domain is elastic.co. The main advantages of the ELK stack are:
Flexible processing: elasticsearch indexes in real time and provides powerful full-text search.
Relatively simple configuration: elasticsearch is driven entirely through JSON interfaces, logstash is configured from templates, and kibana's configuration file is simpler still.
Efficient retrieval: thanks to a well-thought-out design, queries run in real time yet can answer against tens of billions of documents within seconds.
Linear cluster scaling: both elasticsearch and logstash scale out flexibly and linearly.
Polished front end: kibana's UI is attractive and easy to operate.

What is Elasticsearch:

  A highly scalable open-source full-text search and analytics engine. It offers real-time full-text search over data, supports distribution for high availability, provides an API, and can process large volumes of log data, for example from Nginx, Tomcat, or the system logs.

What is Logstash:

  Logstash collects and forwards logs through plugins. It supports log filtering and can parse both plain logs and custom JSON-formatted logs.

What is Kibana:

  Kibana mainly calls elasticsearch through its API and renders the data as front-end visualizations.

Beats is more lightweight than logstash and does not require a Java environment.

1. elasticsearch deployment

Environment initialization

Install a minimal CentOS 7.2 x86_64 VM with 2 vCPUs, 4 GB of RAM or more, and a 50 GB OS disk. Hostnames follow the pattern
linux-hostX.example.com, where host1 and host2 are the elasticsearch servers. To keep the demonstration clean, each gets an extra
dedicated 50 GB data disk, formatted and mounted at /data.

1.1 Hostname and disk mount

    # Set the hostnames
    hostnamectl set-hostname linux-host1.example.com && reboot
    hostnamectl set-hostname linux-host2.example.com && reboot

    # Mount the data disk
    mkdir /data
    mkfs.xfs /dev/sdb
    blkid /dev/sdb
    /dev/sdb: UUID="bb780805-efed-43ff-84cb-a0c59c6f4ef9" TYPE="xfs"

    vim /etc/fstab
    UUID="bb780805-efed-43ff-84cb-a0c59c6f4ef9" /data xfs defaults 0 0
    mount -a

    # Configure local name resolution on every server
    vim /etc/hosts
    192.168.182.137 linux-host1.example.com
    192.168.182.138 linux-host2.example.com

1.2 Disable the firewall and SELinux, and adjust the file descriptor limit
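
This step lists no commands in the original; a minimal sketch of what it usually involves on CentOS 7 follows (the 65536 descriptor limit is an assumed value, tune it to your workload):

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
    # raise the open-file limit for all users (assumed value)
    cat >> /etc/security/limits.conf <<'EOF'
    *    soft    nofile    65536
    *    hard    nofile    65536
    EOF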

1.3 Configure the EPEL repo, install basic tools, and sync time

    yum install -y net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    # add a cron job
    echo "*/5 * * * * ntpdate time1.aliyun.com &>/dev/null && hwclock -w" >> /var/spool/cron/root
    systemctl restart crond

1.4 Install elasticsearch

Install elasticsearch on host1 and host2.
Prepare the Java environment on both servers:
Option 1: install OpenJDK directly with yum
yum install java-1.8.0*
Option 2: download the RPM from the Oracle site and install it locally
yum localinstall jdk-8u92-linux-x64.rpm
Option 3: download the binary tarball and set the environment variables in a profile

    tar xvf jdk-8u121-linux-x64.tar.gz -C /usr/local/
    ln -sv /usr/local/jdk1.8.0_121 /usr/local/jdk
    vim /etc/profile

    java -version
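
The profile entries themselves are left to the reader; they typically look like the following sketch (paths assume the symlink created above):

    # additions to /etc/profile (sketch)
    export JAVA_HOME=/usr/local/jdk
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    # reload and verify
    source /etc/profile
    java -version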

Install elasticsearch:

    yum install jdk-8u121-linux-x64.rpm elasticsearch-5.4.0.rpm

1.5 Configure elasticsearch

    grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
    cluster.name: elk-cluster
    node.name: elk-node1
    path.data: /data/elkdata
    path.logs: /data/logs
    bootstrap.memory_lock: true
    network.host: 192.168.152.138
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.152.138", "192.168.152.139"]

On the other node, only the node name and listen address need to change:

    grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
    cluster.name: elk-cluster
    node.name: elk-node2
    path.data: /data/elkdata
    path.logs: /data/logs
    bootstrap.memory_lock: true
    network.host: 192.168.152.139
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["192.168.152.138", "192.168.152.139"]

Create the data and log directories and set ownership:

    mkdir /data/elkdata
    mkdir /data/logs
    chown -R elasticsearch.elasticsearch /data/

Enable the memory lock limit in the service unit:

    vim /usr/lib/systemd/system/elasticsearch.service
    LimitMEMLOCK=infinity

Note: without this limit the service will not start, because bootstrap.memory_lock: true is set.
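
After editing a unit file, systemd has to re-read it before the change takes effect (a standard step the original omits):

    systemctl daemon-reload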

Adjust the heap size; the default is 2 GB:

    vim /etc/elasticsearch/jvm.options
    -Xms2g
    -Xmx2g

Note:

Set Xmx to no more than 50% of physical RAM, so enough memory remains for the kernel filesystem cache.
Never give elasticsearch more than 32 GB of heap.
https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

Start the service:

    systemctl restart elasticsearch.service
    systemctl enable elasticsearch.service

# Check the cluster status

    curl -sXGET http://192.168.152.139:9200/_cluster/health?pretty=true
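
A healthy two-node cluster answers with JSON along these lines (the values here are illustrative):

    {
      "cluster_name" : "elk-cluster",
      "status" : "green",
      "number_of_nodes" : 2,
      "number_of_data_nodes" : 2,
      "active_primary_shards" : 5,
      "unassigned_shards" : 0
    }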

2. Deploying the elasticsearch head plugin

Plugins extend functionality. The official plugins are mostly paid, but community developers provide others, including tools for monitoring and managing the state and configuration of an elasticsearch cluster.

Since elasticsearch 5.x, head can no longer be installed as a plugin; it has to run as a standalone service. Git repository: https://github.com/mobz/elasticsearch-head

# NPM (Node Package Manager) ships with NodeJS as its package management and distribution tool; it lets JavaScript developers conveniently download, install, upload, and manage packages.

Install and deploy:

    cd /usr/local/src
    git clone git://github.com/mobz/elasticsearch-head.git
    cd elasticsearch-head/
    yum install npm -y
    npm install grunt -save
    ll node_modules/grunt    # confirm the files were generated
    npm install              # run the installation
    npm run start &          # start the service in the background

Modify the elasticsearch configuration file:

Enable cross-origin access support, then restart the elasticsearch service.

    vim /etc/elasticsearch/elasticsearch.yml
    http.cors.enabled: true         # append at the bottom of the file
    http.cors.allow-origin: "*"

Restart the service:

    systemctl restart elasticsearch
    systemctl enable elasticsearch

In the head UI, the boxes drawn with bold borders are primary shards; the thin-bordered ones are replica shards, used for redundancy.

2.1 Running the head plugin with Docker

Install docker:

    yum install docker -y
    systemctl start docker && systemctl enable docker

Pull and run the image:

    docker run -p 9100:9100 mobz/elasticsearch-head:5

If you already have the image as an archive, load it instead:

    # docker load < elasticsearch-head-docker.tar.gz

List the images:

    docker images

Start the container:

    docker run -d -p 9100:9100 docker.io/mobz/elasticsearch-head:5

Monitoring script:

    vim els-cluster-monitor.py
    #!/usr/bin/env python
    #coding:utf-8
    # Print 50 when the cluster status is green, 100 otherwise
    # (values consumed by an external monitoring system).

    import subprocess

    false = "false"   # lets eval() digest JSON booleans such as "timed_out" : false
    obj = subprocess.Popen(("curl -sXGET http://192.168.152.139:9200/_cluster/health?pretty=true"),
                           shell=True, stdout=subprocess.PIPE)
    data = obj.stdout.read()
    data1 = eval(data)
    status = data1.get("status")
    if status == "green":
        print("50")
    else:
        print("100")
    Note:
    If data browsing through head fails, edit vendor.js inside the container
    (the overlay2 layer paths below are from the author's host and will differ on yours):
    /var/lib/docker/overlay2/840b5e6d4ef64ecfdccfad5aa6d061a43f0efb10dfdff245033e90ce9b524f06/diff/usr/src/app/_site/vendor.js
    /var/lib/docker/overlay2/048d9106359b9e263e74246c56193a5852db6a5b99e4a0f9dd438e657ced78d3/diff/usr/src/app/_site/vendor.js

    Change application/x-www-form-urlencoded to application/json.
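
One way to apply that change (a sketch; substitute your own layer path for the placeholder, and restart the container afterwards):

    sed -i 's#application/x-www-form-urlencoded#application/json#g' \
        /var/lib/docker/overlay2/<layer-id>/diff/usr/src/app/_site/vendor.js
    docker restart <head-container-id>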

3. Deploying logstash

Logstash environment preparation and installation:

Logstash is an open-source data collection engine that can scale horizontally. It has the most plugins of any component in the ELK stack; it accepts data from many different sources and ships a unified output to one or more destinations.

Environment preparation:

Disable the firewall and SELinux, and install the Java environment:

    sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
    yum install jdk-8u121-linux-x64.rpm

Install logstash:

    yum install logstash-5.3.0.rpm -y

# Change ownership to the logstash user and group, otherwise startup logs errors

    chown logstash.logstash /usr/share/logstash/data/queue -R

Test logstash:

Test standard input and output:

    /usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{stdout{ codec=>rubydebug}}'   # stdin to stdout
    hello
    {
        "@timestamp" => 2017-11-18T13:49:41.425Z,   # when the event occurred
          "@version" => "1",                        # event version; each event is a ruby object
              "host" => "linux-host2.example.com",  # where the event occurred
           "message" => "hello"                     # the message body
    }

    # The timestamp can be left alone; the browser converts it for display.

# Write the output to a compressed file

    /usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{file{path=>"/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip=>true}}'

# Test output to elasticsearch

    /usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{ elasticsearch {hosts => ["192.168.152.138:9200"] index => "logstash-test-%{+YYYY.MM.dd}"}}'

# Where indices are stored on disk

    ll /data/elkdata/nodes/0/indices/
    total 0
    drwxr-xr-x. 8 elasticsearch elasticsearch 65 Nov 18 20:17 W8VO0wNfTDy9h37CYpu17g

# There are two ways to delete an index:

    one through the elasticsearch head front end, the other through the API
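
For the API route, a single curl call is enough (a sketch; the index name follows the logstash-test pattern created above, and the date is illustrative):

    curl -XDELETE http://192.168.152.138:9200/logstash-test-2017.11.18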

Logstash configuration file for collecting system logs:

    # Config files end in .conf, are named freely, and one file can collect several logs
    vim /etc/logstash/conf.d/system.conf
    input {
      file {
        path => "/var/log/messages"
        type => "systemlog"            # log type
        start_position => "beginning"  # read from the start on the first run, afterwards only new lines
        stat_interval => "2"           # how often to poll the file, in seconds
      }
    }

    output {
      elasticsearch {                  # plugin name
        hosts => ["192.168.152.138:9200"]
        index => "logstash-systemlog-%{+YYYY.MM.dd}"  # the logstash- prefix matters: the template used later to map client cities expects indices starting with logstash
      }
    }

Adjust permissions on /var/log/messages:

    # logstash cannot read messages by default
    chmod 644 /var/log/messages

Check the configuration file for errors:

    # /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
    Configuration OK
    15:43:39.440 [LogStash::Runner] INFO logstash.runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    # Start the service:
    systemctl restart logstash

# After a successful start, the logstash-systemlog index appears:

# Add it to kibana:

4. Deploying kibana

Kibana can be installed on a separate server.

    yum install kibana-5.4.0-x86_64.rpm -y

    # Edit the configuration file:
    grep "^[a-zA-Z]" /etc/kibana/kibana.yml
    server.port: 5601                                 # port
    server.host: "192.168.152.138"                    # listen address
    elasticsearch.url: "http://192.168.152.139:9200"  # elasticsearch URL

    # Check kibana status
    http://192.168.152.138:5601/status

    # Start kibana
    systemctl restart kibana
    systemctl enable kibana

# Index pattern to match in kibana:
[logstash-test]-YYYY.MM.DD

# The data is displayed

5. Branching on multiple types with if

    cat /etc/logstash/conf.d/system.conf
    input {
      file {
        path => "/var/log/messages"
        type => "systemlog"
        start_position => "beginning"
        stat_interval => "2"
      }

      file {
        path => "/var/log/lastlog"
        type => "system-last"
        start_position => "beginning"
        stat_interval => "2"
      }
    }

    output {
      if [type] == "systemlog" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "logstash-systemlog-%{+YYYY.MM.dd}"
        }
        file {
          path => "/tmp/last.log"
        }
      }
      if [type] == "system-last" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "logstash-lastmlog-%{+YYYY.MM.dd}"
        }
      }
    }

    # Check the configuration
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
    # Restart the service
    systemctl restart logstash

View the indices in elasticsearch-head:

Add to kibana:

6. Collecting nginx access logs

Deploy the nginx service:

Edit the configuration file and prepare a web page:

    # Add to nginx.conf
    vim conf/nginx.conf

    # JSON-formatted access log
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
    access_log /var/log/nginx/access.log access_json;

    # Add a site
    location /web {
        root html;
        index index.html index.htm;
    }

    # Create the directory
    mkdir /usr/local/nginx/html/web

    # Index page
    echo 'Nginx webPage!' > /usr/local/nginx/html/web/index.html

    # Stop nginx fully; a plain reload can leave the log format mixed
    /usr/local/nginx/sbin/nginx -s stop

    # Set ownership
    chown nginx.nginx /var/log/nginx

    # Start
    /usr/local/nginx/sbin/nginx

    # Watch the access log (a jq sanity check follows this block)
    [root@linux-host2 conf]# tail -f /var/log/nginx/access.log
    {"@timestamp":"2017-11-20T23:51:00+08:00","host":"192.168.152.139","clientip":"192.168.152.1","size":,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.152.139","url":"/web/index.html","domain":"192.168.152.139","xff":"-","referer":"-","status":""}
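
Before pointing logstash at the file it is worth confirming each line parses as JSON; jq (used again for the Tomcat logs below) makes this a one-liner:

    tail -1 /var/log/nginx/access.log | jq .
    # jq pretty-prints the object on success and reports an error on malformed JSON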

Simulate traffic:

    # Generate a batch of requests
    yum install httpd-tools -y
    # 1000 requests, 100 concurrently, completed in 10 rounds
    ab -n1000 -c100 http://192.168.152.139/web/index.html

Add the logstash configuration:

    vim /etc/logstash/conf.d/nginx-accesslog.conf
    input {
      file {
        path => "/var/log/nginx/access.log"
        type => "nginx-access-log"
        start_position => "beginning"
        stat_interval => "2"
      }
    }

    output {
      elasticsearch {
        hosts => ["192.168.152.139:9200"]
        index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
      }
    }

    # Check the configuration file:
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-accesslog.conf -t

    # Restart logstash
    systemctl restart logstash

Verify the index has been added to es:

Add to kibana:

7. Converting Tomcat logs to JSON and collecting them

Deploy the tomcat service on a server:
install the Java environment, then create a custom web page for testing.
Configure the Java environment and deploy Tomcat:

    yum install jdk-8u121-linux-x64.rpm
    cd /usr/local/src
    [root@linux-host1 src]# tar -xf apache-tomcat-8.0.27.tar.gz
    [root@linux-host1 src]# cp -rf apache-tomcat-8.0.27 /usr/local/
    [root@linux-host1 src]# ln -sv /usr/local/apache-tomcat-8.0.27/ /usr/local/tomcat
    "/usr/local/tomcat" -> "/usr/local/apache-tomcat-8.0.27/"
    [root@linux-host1 webapps]# mkdir /usr/local/tomcat/webapps/webdir
    [root@linux-host1 webapps]# echo "Tomcat Page" > /usr/local/tomcat/webapps/webdir/index.html
    [root@linux-host1 webapps]# ../bin/catalina.sh start
    Using CATALINA_BASE: /usr/local/tomcat
    Using CATALINA_HOME: /usr/local/tomcat
    Using CATALINA_TMPDIR: /usr/local/tomcat/temp
    Using JRE_HOME: /usr
    Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
    Tomcat started.
    [root@linux-host1 webapps]# ss -lnt | grep 8080
    LISTEN 0 100 :::8080 :::*

Configure tomcat's server.xml:

    [root@linux-host1 conf]# diff server.xml server.xml.bak
    136,137c136,137
    < prefix="tomcat_access_log" suffix=".log"
    < pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Refere}i&quot;,&quot;Agentversion&quot;:&quot;%{User-Agent}i&quot;}"/>
    ---
    > prefix="localhost_access_log" suffix=".txt"
    > pattern="%h %l %u %t &quot;%r&quot; %s %b" />

    # Restart tomcat so the new log pattern takes effect
    [root@linux-host1 conf]# ../bin/shutdown.sh
    Using CATALINA_BASE: /usr/local/tomcat
    Using CATALINA_HOME: /usr/local/tomcat
    Using CATALINA_TMPDIR: /usr/local/tomcat/temp
    Using JRE_HOME: /usr
    Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
    [root@linux-host1 conf]# ../bin/startup.sh
    Using CATALINA_BASE: /usr/local/tomcat
    Using CATALINA_HOME: /usr/local/tomcat
    Using CATALINA_TMPDIR: /usr/local/tomcat/temp
    Using JRE_HOME: /usr
    Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
    Tomcat started.

Check the log:

    [root@linux-host1 tomcat]# tail -f logs/tomcat_access_log.--.log | jq
    {
      "clientip": "192.168.152.1",
      "ClientUser": "-",
      "authenticated": "-",
      "AccessTime": "[21/Nov/2017:23:45:45 +0800]",
      "method": "GET /webdir2/ HTTP/1.1",
      "status": "",
      "SendBytes": "-",
      "Query?string": "",
      "partner": "-",
      "Agentversion": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36"
    }
    {
      "clientip": "192.168.152.1",
      "ClientUser": "-",
      "authenticated": "-",
      "AccessTime": "[21/Nov/2017:23:45:45 +0800]",
      "method": "GET /webdir2/ HTTP/1.1",
      "status": "",
      "SendBytes": "",
      "Query?string": "",
      "partner": "-",
      "Agentversion": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36"
    }

Add the logstash configuration:

    [root@linux-host2 ~]# vim /etc/logstash/conf.d/tomcat_access.conf
    input {
      file {
        path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"
        type => "tomcat-accesslog"
        start_position => "beginning"
        stat_interval => "2"
      }
    }

    output {
      if [type] == "tomcat-accesslog" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "logstash-tomcat152139-accesslog-%{+YYYY.MM.dd}"
        }
      }
    }

    # Check the configuration file:
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_access.conf -t

Note:

The path "/usr/local/tomcat/logs/tomcat_access_log.*.log" must not contain a stray space, otherwise the index will never appear; a lesson learned the hard way.

The * in the path matches every log file. If you want to see at a glance which machine an index belongs to, embed the last two octets of its IP address in the index name.

Check es:

Add to kibana:

Test with concurrent load:

    [root@linux-host2 tomcat]# ab -n10000 -c100 http://192.168.152.139:8080/webdir/index.html
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking 192.168.152.139 (be patient)
    Completed 1000 requests
    Completed 2000 requests
    Completed 3000 requests
    Completed 4000 requests
    Completed 5000 requests
    Completed 6000 requests
    Completed 7000 requests
    Completed 8000 requests
    Completed 9000 requests
    Completed 10000 requests
    Finished 10000 requests

    Server Software: Apache-Coyote/1.1
    Server Hostname: 192.168.152.139
    Server Port: 8080

    Document Path: /webdir/index.html
    Document Length: 12 bytes

    Concurrency Level: 100
    Time taken for tests: 17.607 seconds
    Complete requests: 10000
    Failed requests: 0
    Write errors: 0
    Total transferred: 2550000 bytes
    HTML transferred: 120000 bytes
    Requests per second: 567.96 [#/sec] (mean)
    Time per request: 176.068 [ms] (mean)
    Time per request: 1.761 [ms] (mean, across all concurrent requests)
    Transfer rate: 141.44 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0   22   24.1     11     158
    Processing:    19  154  117.4    116    2218
    Waiting:        1  141  113.7     95    2129
    Total:         19  175  113.6    142    2226

    Percentage of the requests served within a certain time (ms)
      50%    142
      66%    171
      75%    204
      80%    228
      90%    307
      95%    380
      98%    475
      99%    523
     100%   2226 (longest request)

8. Collecting Java logs

Use the multiline codec plugin to merge related lines into one event. Its what option controls whether a matched line is merged with the lines before it or the lines after it.
https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

Example logstash deployment on the elasticsearch server:

    chown logstash.logstash /usr/share/logstash/data/queue -R
    ll -d /usr/share/logstash/data/queue
    cat /etc/logstash/conf.d/java.conf
    input {
      stdin {
        codec => multiline {
          pattern => "^\["    # merge lines until one starts with [
          negate => true      # true: act on lines that do NOT match; false: act on lines that do
          what => "previous"  # merge with the preceding lines; "next" merges with the following ones
        }
      }
    }
    filter {   # filters placed here apply to all logs; to filter a single source, put it inside that input
    }
    output {
      stdout {
        codec => rubydebug
      }
    }

Test the match from the command line:

    /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" }}} output { stdout { codec => rubydebug}}'

Note: to match blank lines, use the $ anchor (i.e. a pattern of "^$").

Test the matched output:

Log format:

    [root@linux-host1 ~]# tail /data/logs/elk-cluster.log
    [--23T00::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node1] [logstash-nginx-access-log-2017.11./N8AF_HmTSiqBiX7pNulkYw] create_mapping [elasticsearch-java-log]
    [--23T00::,][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk-node1] [elasticsearch-java-log-2017.11.] creating index, cause [auto(bulk api)], templates [], shards []/[], mappings []
    [--23T00::,][INFO ][o.e.c.m.MetaDataMappingService] [elk-node1] [elasticsearch-java-log-2017.11./S5LpdLyDRCq3ozqVnJnyBg] create_mapping [elasticsearch-java-log]
    [--23T00::,][INFO ][o.e.c.r.a.AllocationService] [elk-node1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[elasticsearch-java-log-2017.11.][]] ...]).

Production configuration:

    vim /etc/logstash/conf.d/java.conf
    input {
      file {
        path => "/data/logs/elk-cluster.log"
        type => "elasticsearch-java-log"
        start_position => "beginning"
        stat_interval => "2"
        codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
      }
    }

    output {
      if [type] == "elasticsearch-java-log" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "elasticsearch-java-log-%{+YYYY.MM.dd}"
        }
      }
    }

Validate the syntax:

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t

    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
    Configuration OK
    ::47.228 [LogStash::Runner] INFO logstash.runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Restart the service:

    systemctl restart logstash

Check es status:

Add to kibana:

Kibana display:

9. Collecting TCP logs

If some logs have been lost, they can be backfilled this way.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html

# Test configuration file

    vim /etc/logstash/conf.d/tcp.conf
    input {
      tcp {
        port => 5600
        mode => "server"
        type => "tcplog"
      }
    }

    output {
      stdout {
        codec => rubydebug
      }
    }

# Validate the configuration syntax

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf -t

# Start it

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf

Install the nc command on another server:

NetCat (nc for short) is known as the Swiss Army knife of network tools: a simple, reliable utility that reads and writes data over TCP or UDP, with many other functions besides.

    yum install nc -y

# Send data

    echo "nc test"|nc 192.168.56.16 9889

Verify that logstash received the data:

    {
        "@timestamp" => 2017-11-23T15:36:50.938Z,
              "port" => 34082,
          "@version" => "1",
              "host" => "192.168.152.138",
           "message" => "tcpdata",
              "type" => "tcplog"
    }

Send a file with nc:

    nc 192.168.152.139 5600 < /etc/passwd

Send messages through the pseudo-device:

On Unix-like systems a device node does not have to correspond to a physical device; nodes without such a correspondence are pseudo-devices, through which the operating system exposes various functions. tcp is just one of the many pseudo-devices under /dev.

    echo "pseudo-device test" > /dev/tcp/192.168.152.139/5600
    echo "2222" > /dev/tcp/192.168.152.139/5600

Production configuration:

    vim /etc/logstash/conf.d/tomcat_tcp.conf
    input {
      file {
        path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"
        type => "tomcat-accesslog"
        start_position => "beginning"
        stat_interval => "2"
      }
      tcp {
        port => 5600
        mode => "server"
        type => "tcplog"
      }
    }

    output {
      if [type] == "tomcat-accesslog" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "logstash-tomcat152139-accesslog-%{+YYYY.MM.dd}"
        }
      }
      if [type] == "tcplog" {
        elasticsearch {
          hosts => ["192.168.152.138:9200"]
          index => "tcplog-test152139-%{+YYYY.MM.dd}"
        }
      }
    }

Check ES:

Check kibana:

Send data:

11. Architecture planning

  Reading the diagram from left to right: access to the ELK log platform first hits two Nginx+keepalived servers providing highly available load balancing, reached through the keepalived VIP, so the platform stays reachable if one nginx proxy dies. Nginx forwards requests to kibana, and kibana pulls its data from elasticsearch, which runs as a two-node cluster with the data distributed across both servers. Redis buffers the stream: when the web servers produce more logs than can be collected and stored consistently, logs would otherwise be lost, so they are staged temporarily in redis (which can itself be a cluster), and a logstash server drains it continuously during off-peak hours. A separate MySQL server persists selected data permanently. Web-server logs are gathered by filebeat and sent to another logstash instance, which writes them into redis. As the diagram shows, redis sits at the very center of the front half of the pipeline, and both sides depend on it running correctly, so deployment starts with redis, then log collection from the web servers into redis, then elasticsearch, kibana, and the logstash that pulls from redis.

12. Collecting logs with logstash and writing them to redis

Deploy redis on a dedicated server as a log cache, for scenarios where the web servers produce heavy log volume. A server can approach memory exhaustion simply because redis is holding a large amount of data that nothing has read yet.
When memory use climbs like that, add logstash servers to increase the read rate.

Install and configure redis:

See the redis installation reference link.

    ln -sv /usr/local/src/redis-4.0.6 /usr/local/redis
    cp src/redis-server /usr/bin/
    cp src/redis-cli /usr/bin/

    bind 192.168.152.139
    daemonize yes         # allow running in the background
    # enable save "" to disable all RDB snapshots
    save ""
    #save 900 1
    #save 300 10
    #save 60 10000
    # enable authentication
    requirepass 123456

    # Start:
    redis-server /usr/local/redis/redis.conf

    # Test:
    [root@linux-host2 redis-4.0.6]# redis-cli -h 192.168.152.139
    192.168.152.139:6379> KEYS *
    (error) NOAUTH Authentication required.
    192.168.152.139:6379> auth 123456
    OK
    192.168.152.139:6379> KEYS
    (error) ERR wrong number of arguments for 'keys' command
    192.168.152.139:6379> KEYS *
    (empty list or set)
    192.168.152.139:6379>

Configure logstash to write logs to redis:

The tomcat access logs collected by logstash on the tomcat server are written into redis; a second logstash then takes the data out of redis and writes it to elasticsearch.

Official documentation:
www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html
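
The write-side configuration itself is not shown in the original; a minimal sketch, mirroring the key, database, and credentials that the read side below expects:

    output {
      redis {
        data_type => "list"
        host => "192.168.152.139"
        db => "1"
        port => "6379"
        key => "rsyslog-5612"
        password => "123456"
      }
    }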

    redis-cli -h 192.168.152.139 -a 123456
    LLEN rsyslog-5612
    LPOP rsyslog-5612    # pop a single entry

    # Inspect the data:
    redis-cli -h 192.168.152.139 -a 123456
    # select the database
    SELECT 1
    # list the keys
    KEYS *

Logstash configuration (reading back from redis):

    input {
      redis {
        data_type => "list"
        host => "192.168.152.139"
        db => "1"
        port => "6379"
        key => "rsyslog-5612"
        password => "123456"
      }
    }

    output {
      elasticsearch {
        hosts => ["192.168.152.139:9200"]
        index => "redis-rsyslog-5612-%{+YYYY.MM.dd}"
      }
    }

To be expanded:

    Collecting haproxy logs with rsyslog:
    CentOS 6 and earlier ship syslog; from CentOS 7 onwards it is called rsyslog. According to the official introduction, rsyslog (in its 2013 releases) can forward logs at the level of millions of lines per second. Official site: http://www.rsyslog.com/. Confirm the installed version with the usual commands.

    # Build and install haproxy:
    yum install gcc gcc-c++ pcre pcre-devel openssl openssl-devel -y

    make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy

    make install PREFIX=/usr/local/haproxy

    # Check the version
    /usr/local/haproxy/sbin/haproxy -v

    # Prepare the service unit:
    vim /usr/lib/systemd/system/haproxy.service
    [Unit]
    Description=HAProxy Load Balancer
    After=syslog.target network.target

    [Service]
    EnvironmentFile=/etc/sysconfig/haproxy
    ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS
    ExecReload=/bin/kill -USR2 $MAINPID

    [Install]
    WantedBy=multi-user.target

    [root@linux-host2 haproxy-1.7.9]# cp /usr/local/src/haproxy-1.7.9/haproxy-systemd-wrapper /usr/sbin/
    [root@linux-host2 haproxy-1.7.9]# cp /usr/local/src/haproxy-1.7.9/haproxy /usr/sbin/

    vim /etc/sysconfig/haproxy    # system-level options file
    OPTIONS=""

    mkdir /etc/haproxy

    vim /etc/haproxy/haproxy.cfg
    global
    maxconn 100000
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    nbproc 1
    pidfile /usr/local/haproxy/run/haproxy.pid
    log 127.0.0.1 local6 info

    defaults
    option http-keep-alive
    option forwardfor
    maxconn 100000
    mode http
    timeout connect 300000ms
    timeout client 300000ms
    timeout server 300000ms

    listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth headmin:123456

    #frontend web_port
    frontend web_port
    bind 0.0.0.0:80
    mode http
    option httplog
    log global
    option forwardfor

    ###################ACL Setting###################
    acl pc hdr_dom(host) -i www.elk.com
    acl mobile hdr_dom(host) -i m.elk.com
    ###################USE ACL ######################
    use_backend pc_host if pc
    use_backend mobile_host if mobile
    #################################################

    backend pc_host
    mode http
    option httplog
    balance source
    server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

    backend mobile_host
    mode http
    option httplog
    balance source
    server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

    # Let rsyslog listen, and forward the local6 facility to logstash:
    vim /etc/rsyslog.conf
    $ModLoad imudp
    $UDPServerRun 514

    $ModLoad imtcp
    $InputTCPServerRun 514

    local6.* @@192.168.152.139:5160

    # Restart the rsyslog service:
    systemctl restart rsyslog

    # logstash test configuration:
    input{
      syslog {
        type => "rsyslog-5612"
        port => "5160"
      }
    }

    output {
      stdout {
        codec => rubydebug
      }
    }

    ###########################
    input{
      syslog {
        type => "rsyslog-5612"
        port => "5160"
      }
    }
    output {
      if [type] == "rsyslog-5612"{
        elasticsearch {
          hosts => ["192.168.152.139:9200"]
          index => "rsyslog-5612-%{+YYYY.MM.dd}"
        }
      }
    }

Using filebeat in place of logstash to collect logs

    Filebeat is a lightweight, single-purpose log collector for servers that have no Java installed. It can forward logs to logstash, elasticsearch, redis, and similar destinations for further processing.
    Download: https://www.elastic.co/downloads/beats/filebeat
    Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html

    # Confirm the log format is JSON:
    # hit the web server first to generate some logs, then confirm they are JSON:
    ab -n100 -c100 http://192.168.56.16:8080/web

    # Install:
    yum -y install filebeat-5.4.0-x86_64.rpm

    [root@linux-host2 src]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
    filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/*.log
        - /var/log/messages
      exclude_lines: ["^DBG","^$"]    # blank lines cause write errors downstream
      document_type: system-log-5612  # log type
    output.file:
      path: "/tmp"
      name: "filebeat.txt"
    [root@linux-host2 src]# systemctl restart filebeat

    # Test by echoing into the log:
    echo "test" >> /var/log/messages

    [root@linux-host2 src]# tail -f /tmp/filebeat
    {"@timestamp":"2017-12-21T15:45:05.715Z","beat":{"hostname":"linux-host2.example.com","name":"linux-host2.example.com","version":"5.4.0"},"input_type":"log","message":"Dec 21 23:45:01 linux-host2 systemd: Starting Session 9 of user root.","offset":1680721,"source":"/var/log/messages","type":"system-log-5612"}

    # Collect logs and write them into redis:

    # Output to redis:
    output.redis:
      hosts: ["192.168.56.12:6379"]
      db: "1"        # which redis database to use
      timeout: "5"   # timeout in seconds
      password: "123456"        # redis password
      key: "system-log-5612"    # a custom key makes later processing easier

    # Inspect the data:
    SELECT 3
    KEYS *
    LLEN system-log-5612
    RPOP system-log-5612

    # Pull the logs back out of redis:
    input {
      redis {
        data_type => "list"
        host => "172.20.8.13"
        db => "1"
        port => "6379"
        key => "system-log-0840"
        password => "123456"
      }
    }
    output {
      if [type] == "system-log-0840" {
        elasticsearch {
          hosts => ["172.20.8.12:9200"]
          index => "system-log-0840-%{+YYYY.MM.dd}"
        }
      }
    }

    # logstash usually moves a few hundred lines per second, while redis can handle on the order of a million lines per second.

Monitoring the redis queue length

    In production, logstash can fall behind for all sorts of reasons; logs then pile up in redis, memory use grows, and the server can approach memory exhaustion. Checking the log queue length in redis then shows a large backlog:

    # Install the redis module:
    yum install python-pip -y
    pip install redis

    # Alert script:
    #!/usr/bin/env python
    #coding:utf-8
    #Author
    import redis

    def redis_conn():
        # connect and report the length of the log list
        pool = redis.ConnectionPool(host="192.168.56.12", port=6379, db=1, password=123456)
        conn = redis.Redis(connection_pool=pool)
        data = conn.llen('tomcat-accesslog-5612')
        print(data)

    redis_conn()

Output testing together with logstash

    vim beats.conf
    input{
      beats{
        port => 5044
      }
    }

    output{
      stdout {
        codec => rubydebug
      }
    }

    # switch the output to a file for temporary testing
    output{
      file{
        path => "/tmp/filebeat.txt"
      }
    }

    # Point the filebeat output at logstash instead of redis:
    output.logstash:
      hosts: ["192.168.56.11:5044"]   # logstash server address; several may be listed
      enabled: true                   # whether to output to logstash; defaults to true
      worker: 2                       # number of worker threads
      compression_level: 3            # compression level
      loadbalance: true               # balance across multiple outputs

    # Configure logstash to receive the beats stream and store it in redis:
    vim beats.conf
    input{
      beats{
        port => 5044
      }
    }

    output{
      if [type] == "filebeat-system-log-5612"{
        redis {
          data_type => "list"
          host => "192.168.56.12"
          db => "3"
          port => "6379"
          key => "filebeat-system-log-5612-%{+YYYY.MM.dd}"
          password => "123456"
        }
      }
    }

    # Take the data out of redis and write it to elasticsearch:
    vim redis-es.yaml
    input {
      redis {
        data_type => "list"
        host => "192.168.56.12"
        db => "3"
        port => "6379"
        key => "filebeat-system1-log-5612"
        password => "123456"
      }
    }

    output {
      if [type] == "filebeat-system1-log-5612" {
        elasticsearch {
          hosts => ["192.168.56.11:9200"]
          index => "filebeat-system1-log-5612-%{+YYYY.MM.dd}"
        }
      }
    }

Collecting tomcat logs with filebeat

    # Add the following to the filebeat configuration:
    - input_type: log
      paths:
        - /usr/local/tomcat/logs/tomcat_access_log.*.log
      document_type: tomcat-accesslog-5612

    grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
    filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/messages
        - /var/log/*.log
      exclude_lines: ["^DBG","^$"]
      document_type: filebeat-system-log-5612
    - input_type: log
      paths:
        - /usr/local/tomcat/logs/tomcat_access_log.*.log
      document_type: tomcat-accesslog-5612
    output.logstash:
      hosts: ["192.168.56.11:5044"]
      enabled: true
      worker: 2
      compression_level: 3

    # logstash receives the beats streams and forwards them to redis:
    vim beats.conf
    input{
      beats{
        port => 5044
      }
    }

    output{
      if [type] == "filebeat-system-log-5612"{
        redis {
          data_type => "list"
          host => "192.168.56.12"
          db => "3"
          port => "6379"
          key => "filebeat-system-log-5612-%{+YYYY.MM.dd}"
          password => "123456"
        }
      }
      if [type] == "tomcat-accesslog-5612" {
        redis {
          data_type => "list"
          host => "192.168.56.12"
          db => "4"
          port => "6379"
          key => "tomcat-accesslog-5612"
          password => "123456"
        }
      }
    }

    # Verify with LPOP on redis:

    # Take the data out of redis and write it to elasticsearch:
    vim redis-es.yaml
    input {
      redis {
        data_type => "list"
        host => "192.168.56.12"
        db => "3"
        port => "6379"
        key => "filebeat-system1-log-5612"
        password => "123456"
      }
      redis {
        data_type => "list"
        host => "192.168.56.12"
        db => "4"
        port => "6379"
        key => "tomcat-accesslog-5612"
        password => "123456"
      }
    }

    output {
      if [type] == "filebeat-system1-log-5612" {
        elasticsearch {
          hosts => ["192.168.56.11:9200"]
          index => "filebeat-system1-log-5612-%{+YYYY.MM.dd}"
        }
      }
      if [type] == "tomcat-accesslog-5612" {
        elasticsearch {
          hosts => ["192.168.56.12:9200"]
          index => "tomcat-accesslog-5612-%{+YYYY.MM.dd}"
        }
      }
    }

    # Add to kibana:

Adding a proxy

    # Add the haproxy frontend rules:

    ##################ACL Setting#################
    acl pc hdr_dom(host) -i www.elk.com
    acl mobile hdr_dom(host) -i m.elk.com
    acl kibana hdr_dom(host) -i www.kibana5612.com
    ##################USE ACL######################
    use_backend pc_host if pc
    use_backend mobile_host if mobile
    use_backend kibana_host if kibana
    ###############################################

    backend kibana_host
    mode http
    option httplog
    balance source
    server web1 127.0.0.1:5601 check inter 2000 rise 3 fall 2 weight 1

    # kibana configuration:
    server.port: 5601
    server.host: "127.0.0.1"
    elasticsearch.url: "http://192.168.56.12:9200"

    systemctl start kibana
    systemctl enable kibana

    # Nginx proxy with basic auth:
    vim nginx.conf
    include /usr/local/nginx/conf/conf.d/*.conf;

    vim /usr/local/nginx/conf/conf.d/kibana5612.conf
    upstream kibana_server {
        server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
    }

    server {
        listen 80;
        server_name www.kibana5611.com;
        location / {
            proxy_pass http://kibana_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    yum install httpd-tools -y
    # use -c the first time to create the file
    htpasswd -bc /usr/local/nginx/conf/htpasswd.users luchuangao 123456
    # drop -c on subsequent runs, or the existing entries are overwritten
    htpasswd -b /usr/local/nginx/conf/htpasswd.users luchuangao 123456
    # inspect: tail /usr/local/nginx/conf/htpasswd.users
    ...
    # set ownership
    chown nginx.nginx /usr/local/nginx/conf/htpasswd.users
    # reload the service
    /usr/local/nginx/sbin/nginx -s reload

    # Add the auth settings to the nginx configuration file:
    vim /usr/local/nginx/conf/conf.d/kibana5612.conf
    upstream kibana_server {
        server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
    }

    server {
        listen 80;
        server_name www.kibana5611.com;
        auth_basic "Restricted Access";
        auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;
        location / {
            proxy_pass http://kibana_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

Scheduled index deletion in ELK

http://www.iyunw.cn/archives/elk-mei-ri-qing-chu-30-tian-suo-yin-jiao-ben/
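
The linked article's approach can be sketched as a daily cron script that drops indices older than 30 days (assumptions: date-suffixed index names like the ones used above, and an elasticsearch node at 192.168.152.138):

    #!/bin/bash
    # delete-old-indices.sh: remove indices whose YYYY.MM.dd suffix is older than 30 days
    ES=http://192.168.152.138:9200
    CUTOFF=$(date -d "30 days ago" +%Y.%m.%d)
    for idx in $(curl -s "$ES/_cat/indices?h=index" | grep -E '[0-9]{4}\.[0-9]{2}\.[0-9]{2}$'); do
        suffix=${idx##*-}                 # trailing date portion of the index name
        if [[ "$suffix" < "$CUTOFF" ]]; then
            curl -s -XDELETE "$ES/$idx"   # drop the expired index
        fi
    done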
