A good log analysis system records in detail how the system runs, making it easy to locate performance bottlenecks and track down problems. The previous article covered the various business scenarios for logging and how log recording is implemented. Once the logs are recorded, the relevant people need to process and analyze the log data, and a log analysis system built on the E(Elasticsearch) L(Logstash) K(Kibana) combination is currently the default choice at most companies.

  • Elasticsearch: a distributed, RESTful search and analytics engine that can store, search and analyze huge volumes of data quickly. In ELK it stores all of the log data.

  • Logstash: an open-source data collection engine with real-time pipelining. Logstash can dynamically consolidate data from separate sources, normalize it and ship it to the destination of your choice. In ELK it processes and transforms the collected log data and then stores it in Elasticsearch.

  • Kibana: a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack, from tracking query load to understanding how requests flow through your applications. In ELK it presents the log data stored in Elasticsearch through a web UI.

  A microservice cluster has to be designed for high-concurrency scenarios in which traffic spikes; the log volume explodes at the same time, so we put a message queue in front of the pipeline to absorb the peaks. Logstash officially provides input plugins for Redis, Kafka, RabbitMQ and others. Redis can serve as a message queue, but its queueing features are not as complete as those of a dedicated message queue, so it is usually not used for that purpose; Kafka outperforms RabbitMQ and is widely used for log and data collection, so we use Kafka to implement the message queue here.

  The ELK log analysis system now covers data transport, storage, presentation and peak shaving, but one component is still missing: log collection. Although log4j2 can send log data to Kafka, or even straight into Logstash, we keep the systems decoupled by design: the business system must not affect the log analysis system and vice versa. The business side simply writes its logs to files, and the log analysis system collects and analyzes them. Filebeat is the log shipper commonly used in an ELK setup; it is part of the Elastic Stack, so it works seamlessly with Logstash, Elasticsearch and Kibana.

  • Kafka: a high-throughput distributed publish/subscribe message queue, mainly used for real-time processing of large volumes of data.

  • Filebeat: a lightweight log shipper. Deploy Filebeat in Kubernetes, Docker or cloud environments and you get the complete log stream, including metadata such as the pod, container, node, VM and host of each stream, plus everything needed for automatic correlation. The Beats Autodiscover feature also detects new containers and monitors them adaptively with the appropriate Filebeat modules.

Software downloads:

  Because we often have to build environments on an isolated internal network, we prefer installing from downloaded packages. It is less convenient than Yum or Docker, but it gives a much better understanding of the software's directory layout and configuration; later, when installing via Yum or Docker, you will know exactly what gets installed and what the configuration files look like, and you can locate and fix problems quickly.

Elastic Stack download home page: https://www.elastic.co/cn/downloads/

We use the following versions:

Kafka download:

Installation and configuration:

  Before installing, prepare three CentOS 7 servers for the cluster, with the IP addresses 172.16.20.220, 172.16.20.221 and 172.16.20.222, then upload the packages downloaded above to /usr/local on all three servers. Because server resources are limited, everything here is installed on these three cluster servers; in a real production environment, plan the installation according to your business requirements.

  When building the cluster it is convenient to write a shell installation script; if not, the installation commands have to be run on every server. Most SSH clients can send the same input to multiple sessions at once, which is handy for the common commands below.
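  If you do prefer a script, a minimal sketch along these lines pushes the packages to /usr/local on every node. It assumes root SSH access to all three servers and that all downloaded packages sit in the current directory; adjust the globs to the packages you actually downloaded.

  1. # Hypothetical helper: copy the downloaded packages to /usr/local on every node
  2. for host in 172.16.20.220 172.16.20.221 172.16.20.222; do
  3.     scp ./*.tar.gz ./*.tar.xz ./*.zip root@${host}:/usr/local/
  4. done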

I. Install the Elasticsearch cluster

1. Elasticsearch is written in Java, so install a JDK and configure the environment variables first.

Create the /usr/local/java directory

  1. mkdir /usr/local/java

Upload the downloaded JDK package jdk-8u77-linux-x64.tar.gz to /usr/local/java, then extract it

  1. tar -zxvf jdk-8u77-linux-x64.tar.gz

Configure the environment variables in /etc/profile

  1. vi /etc/profile

Add the following at the bottom

  1. JAVA_HOME=/usr/local/java/jdk1.8.0_77
  2. PATH=$JAVA_HOME/bin:$PATH
  3. CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
  4. export PATH JAVA_HOME CLASSPATH

Reload the environment variables

  1. source /etc/profile
  • Alternatively, and much faster if the machines are not on an isolated network, install the free OpenJDK build straight from the command line (a quick verification follows below)
  1. yum install java-1.8.0-openjdk* -y
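  Whichever method you use, it is worth checking that the JDK is actually on the PATH before moving on; the version string is only indicative and will differ with the package you installed.

  1. # Should print a 1.8.x version string
  2. java -version
  3. # Only set if /etc/profile was edited manually; empty for the yum OpenJDK install
  4. echo $JAVA_HOME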
2. Install and configure Elasticsearch
  • Go to /usr/local and extract the Elasticsearch package; make sure the package from the preparation step has been uploaded to this directory before running the command.
  1. tar -zxvf elasticsearch-8.0.0-linux-x86_64.tar.gz
  • Rename the folder
  1. mv elasticsearch-8.0.0 elasticsearch
  • Elasticsearch cannot run as root, so create a dedicated group and user to run it
  1. # Create the group
  2. groupadd elasticsearch
  3. # Create the user and add it to the group
  4. useradd elasticsearch -g elasticsearch
  5. # Set a password for the elasticsearch user; pick whatever you need, here it is El12345678
  6. passwd elasticsearch
  • Create the Elasticsearch data and log directories and give the elasticsearch user ownership
  1. mkdir -p /data/elasticsearch/data
  2. mkdir -p /data/elasticsearch/log
  3. chown -R elasticsearch:elasticsearch /data/elasticsearch/*
  4. chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/*
  • Elasticsearch enables X-Pack by default and cluster transport requires security authentication, so an SSL certificate is needed. Note: run the certificate-generation commands on one server only, then copy the resulting files to the same directory on the other two servers.
  1. # When prompted for a password, just press Enter
  2. ./elasticsearch-certutil ca -out /usr/local/elasticsearch/config/elastic-stack-ca.p12
  3. # When prompted for a password, just press Enter
  4. ./elasticsearch-certutil cert --ca /usr/local/elasticsearch/config/elastic-stack-ca.p12 -out /usr/local/elasticsearch/config/elastic-certificates.p12 -pass ""
  5. # If the certificate was generated as root, remember to give the elasticsearch user ownership
  6. chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/config/elastic-certificates.p12
  • Set the built-in user passwords; wherever a password is requested below we enter 123456. Note that this tool talks to a running node, so run it after the configuration below is in place and Elasticsearch has been started.
  1. ./elasticsearch-setup-passwords interactive
  2. Enter password for [elastic]:
  3. Reenter password for [elastic]:
  4. Enter password for [apm_system]:
  5. Reenter password for [apm_system]:
  6. Enter password for [kibana_system]:
  7. Reenter password for [kibana_system]:
  8. Enter password for [logstash_system]:
  9. Reenter password for [logstash_system]:
  10. Enter password for [beats_system]:
  11. Reenter password for [beats_system]:
  12. Enter password for [remote_monitoring_user]:
  13. Reenter password for [remote_monitoring_user]:
  14. Changed password for user [apm_system]
  15. Changed password for user [kibana_system]
  16. Changed password for user [kibana]
  17. Changed password for user [logstash_system]
  18. Changed password for user [beats_system]
  19. Changed password for user [remote_monitoring_user]
  20. Changed password for user [elastic]
  • Edit the Elasticsearch configuration file
  1. vi /usr/local/elasticsearch/config/elasticsearch.yml
  1. # Settings to modify
  2. # Cluster name
  3. cluster.name: log-elasticsearch
  4. # Node name (node-2 and node-3 on the other two servers)
  5. node.name: node-1
  6. # Data path
  7. path.data: /data/elasticsearch/data
  8. # Log path
  9. path.logs: /data/elasticsearch/log
  10. # IP of the current node (172.16.20.221 / 172.16.20.222 on the other servers)
  11. network.host: 172.16.20.220
  12. # HTTP port
  13. http.port: 9200
  14. # Cluster hosts
  15. discovery.seed_hosts: ["172.16.20.220", "172.16.20.221", "172.16.20.222"]
  16. # Initial master-eligible nodes
  17. cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
  18. # New settings to add
  19. # Transport (cluster communication) port
  20. transport.tcp.port: 9300
  21. transport.tcp.compress: true
  22. http.cors.enabled: true
  23. http.cors.allow-origin: "*"
  24. http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
  25. http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
  26. xpack.security.enabled: true
  27. xpack.security.transport.ssl.enabled: true
  28. xpack.security.transport.ssl.verification_mode: certificate
  29. xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
  30. xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
  • Configure the Elasticsearch JVM parameters
  1. vi /usr/local/elasticsearch/config/jvm.options
  1. -Xms1g
  2. -Xmx1g
  • Raise the default Linux resource limits (a quick verification sketch follows these commands)
  1. vi /etc/security/limits.conf
  1. # Append at the end; reboot for the change to take effect.
  2. * soft nofile 131072
  3. * hard nofile 131072
  1. vi /etc/sysctl.conf
  2. # Set vm.max_map_count to 655360
  3. vm.max_map_count=655360
  4. # Apply the change
  5. sysctl -p
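  Before starting Elasticsearch, a quick sanity check that the limits are really in effect saves a failed start later; the nofile limit only applies to sessions opened after the change, so log in again first.

  1. # Should print vm.max_map_count = 655360
  2. sysctl vm.max_map_count
  3. # Should print 131072 in a freshly opened session
  4. ulimit -n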
  • Switch to the elasticsearch user and start the service
  1. su elasticsearch
  2. cd /usr/local/elasticsearch/bin
  3. # Start in the foreground first so that any errors are printed to the console
  4. ./elasticsearch
Once Elasticsearch starts without errors, stop it (Ctrl+C) and run it in the background instead:

  1. ./elasticsearch -d

Note: Elasticsearch can be stopped later with the following commands

  1. # Find the process id
  2. ps -ef | grep elastic
  3. # Kill the process
  4. kill -9 1376 (replace 1376 with the actual process id)
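Once all three nodes are running, the cluster can also be checked directly over the REST API, without any plugin; this sketch assumes plain HTTP as configured above and the elastic password 123456 set earlier.

  1. # Cluster health; expect "status" : "green" once all three nodes have joined
  2. curl -u elastic:123456 "http://172.16.20.220:9200/_cluster/health?pretty"
  3. # List the nodes that have joined the cluster
  4. curl -u elastic:123456 "http://172.16.20.220:9200/_cat/nodes?v"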
3. Install the elasticsearch-head web management plugin; it only needs to be installed on one server, here 172.16.20.220
  1. # Extract
  2. tar -xvJf node-v16.14.0-linux-x64.tar.xz
  3. # Rename
  4. mv node-v16.14.0-linux-x64 nodejs
  5. # Configure the environment variables
  6. vi /etc/profile
  7. # Add the following
  8. export NODE_HOME=/usr/local/nodejs
  9. PATH=$JAVA_HOME/bin:$NODE_HOME/bin:/usr/local/mysql/bin:/usr/local/subversion/bin:$PATH
  10. export PATH JAVA_HOME NODE_HOME JENKINS_HOME CLASSPATH
  11. # Apply the configuration
  12. source /etc/profile
  13. # Check that the installation worked
  14. node -v
  1. # Extract
  2. unzip elasticsearch-head-master.zip
  3. # Rename
  4. mv elasticsearch-head-master elasticsearch-head
  5. # Enter the elasticsearch-head directory
  6. cd elasticsearch-head
  7. # Switch the npm registry to speed up the installation
  8. npm config set registry https://registry.npm.taobao.org
  9. # Run the installation commands
  10. npm install -g npm@8.5.1
  11. npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
  12. npm install
  13. # Start command
  14. npm run start
  • Open http://172.16.20.220:9100/?auth_user=elastic&auth_password=123456 in a browser; with the username and password set above appended, the status of the Elasticsearch cluster is displayed.

II. Install the Kafka cluster

  • Environment preparation:

  Create the Kafka log directory and the ZooKeeper data directory. Both default to the tmp directory, whose contents are lost on every reboot, so we define our own directories:

  1. mkdir /data/zookeeper
  2. mkdir /data/zookeeper/data
  3. mkdir /data/zookeeper/logs
  4. mkdir /data/kafka
  5. mkdir /data/kafka/data
  6. mkdir /data/kafka/logs
  • Configure zookeeper.properties
  1. vi /usr/local/kafka/config/zookeeper.properties

Modify as follows:

  1. # Change to the custom zookeeper data directory
  2. dataDir=/data/zookeeper/data
  3. # Change to the custom zookeeper log directory
  4. dataLogDir=/data/zookeeper/logs
  5. # Port
  6. clientPort=2181
  7. # Comment out
  8. #maxClientCnxns=0
  9. # Connection settings, add the following
  10. # Basic time unit of ZooKeeper, in milliseconds
  11. tickTime=2000
  12. # Leader-follower initial connection time limit: tickTime*10
  13. initLimit=10
  14. # Leader-follower sync time limit: tickTime*5
  15. syncLimit=5
  16. # Server addresses keyed by id; the local machine's own entry must use 0.0.0.0
  17. server.1=0.0.0.0:2888:3888
  18. server.2=172.16.20.221:2888:3888
  19. server.3=172.16.20.222:2888:3888
  • In the ZooKeeper data directory /data/zookeeper/data on each server, create a myid file containing that server's id (the same value used for broker.id below)

Create a myid file in the data folder with the content 1 (one-liner: echo 1 > myid)

  1. cd /data/zookeeper/data
  2. vi myid
  3. # Content: 1 on this host; use 2 and 3 on the other two hosts
  4. 1
  • Configure Kafka: go to the config directory and edit server.properties
  1. vi /usr/local/kafka/config/server.properties
  1. # broker.id must be different on every server
  2. broker.id=1
  3. # Whether topics can be deleted
  4. delete.topic.enable=true
  5. # Default number of partitions per topic, kept equal to the number of brokers
  6. num.partitions=3
  7. # Different on each host:
  8. listeners=PLAINTEXT://172.16.20.220:9092
  9. advertised.listeners=PLAINTEXT://172.16.20.220:9092
  10. # Kafka data (log segment) directory
  11. log.dirs=/data/kafka/kafka-logs
  12. # ZooKeeper cluster addresses and ports:
  13. zookeeper.connect=172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181
  • Starting Kafka

Start ZooKeeper before Kafka; shut down in the opposite order, Kafka first and then ZooKeeper.

1. ZooKeeper start command

  1. ./zookeeper-server-start.sh ../config/zookeeper.properties &

Background start command:

  1. nohup ./zookeeper-server-start.sh ../config/zookeeper.properties >/data/zookeeper/logs/zookeeper.log 2>&1 &

or

  1. ./zookeeper-server-start.sh -daemon ../config/zookeeper.properties &

Check the status:

  1. ./zookeeper-server-start.sh status ../config/zookeeper.properties

2. Kafka start command

  1. ./kafka-server-start.sh ../config/server.properties &

Background start command:

  1. nohup ./kafka-server-start.sh ../config/server.properties >/data/kafka/logs/kafka.log 2>&1 &

or

  1. ./kafka-server-start.sh -daemon ../config/server.properties &

3. Create a topic; recent Kafka versions no longer need the zookeeper parameter to create one.

  1. ./kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 172.16.20.220:9092

Parameter explanation:

  --replication-factor 2    keep two replicas of each partition

  --partitions 1            create one partition

  --topic test              the topic name

4. List the existing topics (the topic is visible when the command is run on any of the three servers)

  1. ./kafka-topics.sh --list --bootstrap-server 172.16.20.220:9092

5. Start a producer:

  1. ./kafka-console-producer.sh --broker-list 172.16.20.220:9092 --topic test

6. Start consumers:

  1. ./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test
  2. ./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic test

Add the --from-beginning parameter to consume from the beginning of the topic rather than only the latest messages

  1. ./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test --from-beginning

7. Test: type test in the producer; the same text test appears in the consumers on both servers, which shows the Kafka cluster has been set up successfully.
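The Logstash and Filebeat configurations below expect the topics api_log, operation_log, debugger_log and nginx_log. Kafka normally auto-creates topics on first write, but creating them up front lets you choose partitions and replication explicitly; a sketch, run from the Kafka bin directory (the partition and replica counts here are only a suggestion):

  1. # Pre-create the four log topics used by the rest of this article
  2. for topic in api_log operation_log debugger_log nginx_log; do
  3.     ./kafka-topics.sh --create --replication-factor 2 --partitions 3 \
  4.         --topic ${topic} --bootstrap-server 172.16.20.220:9092
  5. done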

III. Install and configure Logstash

Logstash does not provide a clustered installation mode and the instances do not interact with each other, but by configuring them all with the same Kafka consumer group we ensure that each message is consumed only once. A sketch for inspecting that consumer group follows below.
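Once Logstash is running (see the start command later in this section), the shared group can be inspected with the consumer-group tool that ships with Kafka; run it from the Kafka bin directory:

  1. # Shows the partitions, offsets and lag of the "logstash" consumer group
  2. ./kafka-consumer-groups.sh --bootstrap-server 172.16.20.220:9092 \
  3.     --describe --group logstash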

  • Extract the package
  1. tar -zxvf logstash-8.0.0-linux-x86_64.tar.gz
  2. mv logstash-8.0.0 logstash
  • Configure the Kafka topics and consumer group
  1. cd logstash
  2. # Create the configuration file
  3. vi config/logstash-kafka.conf
  4. # Add the following content
  5. input {
  6.   kafka {
  7.     codec => "json"
  8.     group_id => "logstash"
  9.     client_id => "logstash-api"
  10.     topics_pattern => "api_log"
  11.     type => "api"
  12.     bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092"
  13.     auto_offset_reset => "latest"
  14.   }
  15.   kafka {
  16.     codec => "json"
  17.     group_id => "logstash"
  18.     client_id => "logstash-operation"
  19.     topics_pattern => "operation_log"
  20.     type => "operation"
  21.     bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092"
  22.     auto_offset_reset => "latest"
  23.   }
  24.   kafka {
  25.     codec => "json"
  26.     group_id => "logstash"
  27.     client_id => "logstash-debugger"
  28.     topics_pattern => "debugger_log"
  29.     type => "debugger"
  30.     bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092"
  31.     auto_offset_reset => "latest"
  32.   }
  33.   kafka {
  34.     codec => "json"
  35.     group_id => "logstash"
  36.     client_id => "logstash-nginx"
  37.     topics_pattern => "nginx_log"
  38.     type => "nginx"
  39.     bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092"
  40.     auto_offset_reset => "latest"
  41.   }
  42. }
  43. output {
  44.   if [type] == "api" {
  45.     elasticsearch {
  46.       hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"]
  47.       index => "logstash_api-%{+YYYY.MM.dd}"
  48.       user => "elastic"
  49.       password => "123456"
  50.     }
  51.   }
  52.   if [type] == "operation" {
  53.     elasticsearch {
  54.       hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"]
  55.       index => "logstash_operation-%{+YYYY.MM.dd}"
  56.       user => "elastic"
  57.       password => "123456"
  58.     }
  59.   }
  60.   if [type] == "debugger" {
  61.     elasticsearch {
  62.       hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"]
  63.       index => "logstash_debugger-%{+YYYY.MM.dd}"
  64.       user => "elastic"
  65.       password => "123456"
  66.     }
  67.   }
  68.   if [type] == "nginx" {
  69.     elasticsearch {
  70.       hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"]
  71.       index => "logstash_nginx-%{+YYYY.MM.dd}"
  72.       user => "elastic"
  73.       password => "123456"
  74.     }
  75.   }
  76. }
  • Start Logstash (a quick health-check sketch follows)
  1. # Switch to the bin directory
  2. cd /usr/local/logstash/bin
  3. # Start command
  4. nohup ./logstash -f ../config/logstash-kafka.conf &
  5. # Watch the startup log
  6. tail -f nohup.out
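  Besides tailing nohup.out, Logstash exposes a monitoring API (port 9600 by default, bound to localhost) that can confirm the pipeline is up and events are flowing; a quick sketch:

  1. # Basic node information; confirms the Logstash process is responding
  2. curl -s "http://localhost:9600/?pretty"
  3. # Per-pipeline event counters; the in/out numbers should grow as logs arrive
  4. curl -s "http://localhost:9600/_node/stats/pipelines?pretty"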

IV. Install and configure Kibana

  • Extract the package
  1. tar -zxvf kibana-8.0.0-linux-x86_64.tar.gz
  2. mv kibana-8.0.0 kibana
  • Edit the configuration file
  1. cd /usr/local/kibana/config
  2. vi kibana.yml
  3. # Modify the following settings
  4. server.port: 5601
  5. server.host: "172.16.20.220"
  6. elasticsearch.hosts: ["http://172.16.20.220:9200","http://172.16.20.221:9200","http://172.16.20.222:9200"]
  7. elasticsearch.username: "kibana_system"
  8. elasticsearch.password: "123456"
  • Start the service
  1. cd /usr/local/kibana/bin
  2. # Kibana refuses to run as root by default; add --allow-root to run it as root, or create a dedicated group and user as we did for Elasticsearch
  3. nohup ./kibana --allow-root &
  • Open http://172.16.20.220:5601/ and log in as elastic / 123456.
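  If the login page does not come up, a check from the server itself helps separate a Kibana problem from a network or firewall problem; a sketch (the status endpoint may answer 401 when anonymous access is disabled, which still proves Kibana is listening, and the firewalld command only applies if firewalld is running):

  1. # Expect an HTTP response once Kibana has started and connected to Elasticsearch
  2. curl -I "http://172.16.20.220:5601/api/status"
  3. # Check which ports are currently opened in firewalld
  4. firewall-cmd --list-ports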

V. Install Filebeat

  Filebeat is installed on the servers where the business applications run; it collects the logs they produce and pushes them to the configured message middleware such as Kafka, Redis or RabbitMQ, or saves them directly to Elasticsearch. Installation and configuration:

1. Go to /usr/local and run the extraction commands

  1. tar -zxvf filebeat-8.0.0-linux-x86_64.tar.gz
  2. mv filebeat-8.0.0-linux-x86_64 filebeat

2. Edit filebeat.yml

  By default the configuration outputs to Elasticsearch; here we change it to Kafka. The filebeat.reference.yml file in the same directory contains examples of all settings, so the Kafka section can be copied from there straight into filebeat.yml.

  • Configure the input switch and the paths to collect:
  1. # filestream is an input for collecting log messages from files.
  2. - type: filestream
  3. # Change to true to enable this input configuration.
  4. # Change enabled to true
  5. enabled: true
  6. # Paths that should be crawled and fetched. Glob based paths.
  7. # Change these to the actual paths of the microservice logs
  8. paths:
  9. - /data/gitegg/log/gitegg-service-system/*.log
  10. - /data/gitegg/log/gitegg-service-base/*.log
  11. - /data/gitegg/log/gitegg-service-oauth/*.log
  12. - /data/gitegg/log/gitegg-service-gateway/*.log
  13. - /data/gitegg/log/gitegg-service-extension/*.log
  14. - /data/gitegg/log/gitegg-service-bigdata/*.log
  15. #- c:\programdata\elasticsearch\logs\*
  16. # Exclude lines. A list of regular expressions to match. It drops the lines that are
  17. # matching any regular expression from the list.
  18. #exclude_lines: ['^DBG']
  19. # Include lines. A list of regular expressions to match. It exports the lines that are
  20. # matching any regular expression from the list.
  21. #include_lines: ['^ERR', '^WARN']
  22. # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  23. # are matching any regular expression from the list. By default, no files are dropped.
  24. #prospector.scanner.exclude_files: ['.gz$']
  25. # Optional additional fields. These fields can be freely picked
  26. # to add additional information to the crawled log files for filtering
  27. #fields:
  28. # level: debug
  29. # review: 1
  • Elasticsearch template settings
  1. # ======================= Elasticsearch template setting =======================
  2. setup.template.settings:
  3. index.number_of_shards: 3
  4. index.number_of_replicas: 1
  5. #index.codec: best_compression
  6. #_source.enabled: false
  7. # Allow index templates to be generated automatically
  8. setup.template.enabled: true
  9. # Field definition file used when generating the index template
  10. setup.template.fields: fields.yml
  11. # Overwrite the template if it already exists
  12. setup.template.overwrite: true
  13. # Name of the generated index template
  14. setup.template.name: "api_log"
  15. # Index pattern the template applies to
  16. setup.template.pattern: "api-*"
  17. # Index lifecycle management (ILM) is enabled by default; while enabled the index name can only be filebeat-*, so disable it with setup.ilm.enabled: false
  18. setup.ilm.pattern: "{now/d}"
  19. setup.ilm.enabled: false
  • Enable the dashboards and configure the Kibana endpoint:
  1. # ================================= Dashboards =================================
  2. # These settings control loading the sample dashboards to the Kibana index. Loading
  3. # the dashboards is disabled by default and can be enabled either by setting the
  4. # options here or by using the `setup` command.
  5. setup.dashboards.enabled: true
  6. # The URL from where to download the dashboards archive. By default this URL
  7. # has a value which is computed based on the Beat name and version. For released
  8. # versions, this URL points to the dashboard archive on the artifacts.elastic.co
  9. # website.
  10. #setup.dashboards.url:
  11. # =================================== Kibana ===================================
  12. # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
  13. # This requires a Kibana endpoint configuration.
  14. setup.kibana:
  15. # Kibana Host
  16. # Scheme and port can be left out and will be set to the default (http and 5601)
  17. # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  18. # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  19. host: "172.16.20.220:5601"
  20. # Kibana Space ID
  21. # ID of the Kibana Space into which the dashboards should be loaded. By default,
  22. # the Default Space will be used.
  23. #space.id:
  • Configure the Kafka output; the complete filebeat.yml is shown below
  1. ###################### Filebeat Configuration Example #########################
  2. # This file is an example configuration file highlighting only the most common
  3. # options. The filebeat.reference.yml file from the same directory contains all the
  4. # supported options with more comments. You can use it as a reference.
  5. #
  6. # You can find the full configuration reference here:
  7. # https://www.elastic.co/guide/en/beats/filebeat/index.html
  8. # For more available modules and options, please see the filebeat.reference.yml sample
  9. # configuration file.
  10. # ============================== Filebeat inputs ===============================
  11. filebeat.inputs:
  12. # Each - is an input. Most options can be set at the input level, so
  13. # you can use different inputs for various configurations.
  14. # Below are the input specific configurations.
  15. # filestream is an input for collecting log messages from files.
  16. - type: filestream
  17. # Change to true to enable this input configuration.
  18. enabled: true
  19. # Paths that should be crawled and fetched. Glob based paths.
  20. paths:
  21. - /data/gitegg/log/*/*operation.log
  22. #- c:\programdata\elasticsearch\logs\*
  23. # Exclude lines. A list of regular expressions to match. It drops the lines that are
  24. # matching any regular expression from the list.
  25. #exclude_lines: ['^DBG']
  26. # Include lines. A list of regular expressions to match. It exports the lines that are
  27. # matching any regular expression from the list.
  28. #include_lines: ['^ERR', '^WARN']
  29. # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  30. # are matching any regular expression from the list. By default, no files are dropped.
  31. #prospector.scanner.exclude_files: ['.gz$']
  32. # Optional additional fields. These fields can be freely picked
  33. # to add additional information to the crawled log files for filtering
  34. fields:
  35. topic: operation_log
  36. # level: debug
  37. # review: 1
  38. # filestream is an input for collecting log messages from files.
  39. - type: filestream
  40. # Change to true to enable this input configuration.
  41. enabled: true
  42. # Paths that should be crawled and fetched. Glob based paths.
  43. paths:
  44. - /data/gitegg/log/*/*api.log
  45. #- c:\programdata\elasticsearch\logs\*
  46. # Exclude lines. A list of regular expressions to match. It drops the lines that are
  47. # matching any regular expression from the list.
  48. #exclude_lines: ['^DBG']
  49. # Include lines. A list of regular expressions to match. It exports the lines that are
  50. # matching any regular expression from the list.
  51. #include_lines: ['^ERR', '^WARN']
  52. # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  53. # are matching any regular expression from the list. By default, no files are dropped.
  54. #prospector.scanner.exclude_files: ['.gz$']
  55. # Optional additional fields. These fields can be freely picked
  56. # to add additional information to the crawled log files for filtering
  57. fields:
  58. topic: api_log
  59. # level: debug
  60. # review: 1
  61. # filestream is an input for collecting log messages from files.
  62. - type: filestream
  63. # Change to true to enable this input configuration.
  64. enabled: true
  65. # Paths that should be crawled and fetched. Glob based paths.
  66. paths:
  67. - /data/gitegg/log/*/*debug.log
  68. #- c:\programdata\elasticsearch\logs\*
  69. # Exclude lines. A list of regular expressions to match. It drops the lines that are
  70. # matching any regular expression from the list.
  71. #exclude_lines: ['^DBG']
  72. # Include lines. A list of regular expressions to match. It exports the lines that are
  73. # matching any regular expression from the list.
  74. #include_lines: ['^ERR', '^WARN']
  75. # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  76. # are matching any regular expression from the list. By default, no files are dropped.
  77. #prospector.scanner.exclude_files: ['.gz$']
  78. # Optional additional fields. These fields can be freely picked
  79. # to add additional information to the crawled log files for filtering
  80. fields:
  81. topic: debugger_log
  82. # level: debug
  83. # review: 1
  84. # filestream is an input for collecting log messages from files.
  85. - type: filestream
  86. # Change to true to enable this input configuration.
  87. enabled: true
  88. # Paths that should be crawled and fetched. Glob based paths.
  89. paths:
  90. - /usr/local/nginx/logs/access.log
  91. #- c:\programdata\elasticsearch\logs\*
  92. # Exclude lines. A list of regular expressions to match. It drops the lines that are
  93. # matching any regular expression from the list.
  94. #exclude_lines: ['^DBG']
  95. # Include lines. A list of regular expressions to match. It exports the lines that are
  96. # matching any regular expression from the list.
  97. #include_lines: ['^ERR', '^WARN']
  98. # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  99. # are matching any regular expression from the list. By default, no files are dropped.
  100. #prospector.scanner.exclude_files: ['.gz$']
  101. # Optional additional fields. These fields can be freely picked
  102. # to add additional information to the crawled log files for filtering
  103. fields:
  104. topic: nginx_log
  105. # level: debug
  106. # review: 1
  107. # ============================== Filebeat modules ==============================
  108. filebeat.config.modules:
  109. # Glob pattern for configuration loading
  110. path: ${path.config}/modules.d/*.yml
  111. # Set to true to enable config reloading
  112. reload.enabled: false
  113. # Period on which files under path should be checked for changes
  114. #reload.period: 10s
  115. # ======================= Elasticsearch template setting =======================
  116. setup.template.settings:
  117. index.number_of_shards: 3
  118. index.number_of_replicas: 1
  119. #index.codec: best_compression
  120. #_source.enabled: false
  121. # Allow index templates to be generated automatically
  122. setup.template.enabled: true
  123. # Field definition file used when generating the index template
  124. setup.template.fields: fields.yml
  125. # Overwrite the template if it already exists
  126. setup.template.overwrite: true
  127. # Name of the generated index template
  128. setup.template.name: "gitegg_log"
  129. # Index pattern the template applies to
  130. setup.template.pattern: "filebeat-*"
  131. # Index lifecycle management (ILM) is enabled by default; while enabled the index name can only be filebeat-*, so disable it with setup.ilm.enabled: false
  132. setup.ilm.pattern: "{now/d}"
  133. setup.ilm.enabled: false
  134. # ================================== General ===================================
  135. # The name of the shipper that publishes the network data. It can be used to group
  136. # all the transactions sent by a single shipper in the web interface.
  137. #name:
  138. # The tags of the shipper are included in their own field with each
  139. # transaction published.
  140. #tags: ["service-X", "web-tier"]
  141. # Optional fields that you can specify to add additional information to the
  142. # output.
  143. #fields:
  144. # env: staging
  145. # ================================= Dashboards =================================
  146. # These settings control loading the sample dashboards to the Kibana index. Loading
  147. # the dashboards is disabled by default and can be enabled either by setting the
  148. # options here or by using the `setup` command.
  149. setup.dashboards.enabled: true
  150. # The URL from where to download the dashboards archive. By default this URL
  151. # has a value which is computed based on the Beat name and version. For released
  152. # versions, this URL points to the dashboard archive on the artifacts.elastic.co
  153. # website.
  154. #setup.dashboards.url:
  155. # =================================== Kibana ===================================
  156. # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
  157. # This requires a Kibana endpoint configuration.
  158. setup.kibana:
  159. # Kibana Host
  160. # Scheme and port can be left out and will be set to the default (http and 5601)
  161. # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  162. # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  163. host: "172.16.20.220:5601"
  164. # Optional protocol and basic auth credentials.
  165. #protocol: "https"
  166. username: "elastic"
  167. password: "123456"
  168. # Optional HTTP path
  169. #path: ""
  170. # Optional Kibana space ID.
  171. #space.id: ""
  172. # Custom HTTP headers to add to each request
  173. #headers:
  174. # X-My-Header: Contents of the header
  175. # Use SSL settings for HTTPS.
  176. #ssl.enabled: true
  177. # =============================== Elastic Cloud ================================
  178. # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
  179. # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
  180. # `setup.kibana.host` options.
  181. # You can find the `cloud.id` in the Elastic Cloud web UI.
  182. #cloud.id:
  183. # The cloud.auth setting overwrites the `output.elasticsearch.username` and
  184. # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
  185. #cloud.auth:
  186. # ================================== Outputs ===================================
  187. # Configure what output to use when sending the data collected by the beat.
  188. # ---------------------------- Elasticsearch Output ----------------------------
  189. #output.elasticsearch:
  190. # Array of hosts to connect to.
  191. #hosts: ["localhost:9200"]
  192. # Protocol - either `http` (default) or `https`.
  193. #protocol: "https"
  194. # Authentication credentials - either API key or username/password.
  195. #api_key: "id:api_key"
  196. #username: "elastic"
  197. #password: "changeme"
  198. # ------------------------------ Logstash Output -------------------------------
  199. #output.logstash:
  200. # The Logstash hosts
  201. #hosts: ["localhost:5044"]
  202. # Optional SSL. By default is off.
  203. # List of root certificates for HTTPS server verifications
  204. #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  205. # Certificate for SSL client authentication
  206. #ssl.certificate: "/etc/pki/client/cert.pem"
  207. # Client Certificate Key
  208. #ssl.key: "/etc/pki/client/cert.key"
  209. # -------------------------------- Kafka Output --------------------------------
  210. output.kafka:
  211. # Boolean flag to enable or disable the output module.
  212. enabled: true
  213. # The list of Kafka broker addresses from which to fetch the cluster metadata.
  214. # The cluster metadata contain the actual Kafka brokers events are published
  215. # to.
  216. hosts: ["172.16.20.220:9092","172.16.20.221:9092","172.16.20.222:9092"]
  217. # The Kafka topic used for produced events. The setting can be a format string
  218. # using any event field. To set the topic from document type use `%{[type]}`.
  219. topic: '%{[fields.topic]}'
  220. # The Kafka event key setting. Use format string to create a unique event key.
  221. # By default no event key will be generated.
  222. #key: ''
  223. # The Kafka event partitioning strategy. Default hashing strategy is `hash`
  224. # using the `output.kafka.key` setting or randomly distributes events if
  225. # `output.kafka.key` is not configured.
  226. partition.hash:
  227. # If enabled, events will only be published to partitions with reachable
  228. # leaders. Default is false.
  229. reachable_only: true
  230. # Configure alternative event field names used to compute the hash value.
  231. # If empty `output.kafka.key` setting will be used.
  232. # Default value is empty list.
  233. #hash: []
  234. # Authentication details. Password is required if username is set.
  235. #username: ''
  236. #password: ''
  237. # SASL authentication mechanism used. Can be one of PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512.
  238. # Defaults to PLAIN when `username` and `password` are configured.
  239. #sasl.mechanism: ''
  240. # Kafka version Filebeat is assumed to run against. Defaults to the "1.0.0".
  241. #version: '1.0.0'
  242. # Configure JSON encoding
  243. #codec.json:
  244. # Pretty-print JSON event
  245. #pretty: false
  246. # Configure escaping HTML symbols in strings.
  247. #escape_html: false
  248. # Metadata update configuration. Metadata contains leader information
  249. # used to decide which broker to use when publishing.
  250. #metadata:
  251. # Max metadata request retry attempts when cluster is in middle of leader
  252. # election. Defaults to 3 retries.
  253. #retry.max: 3
  254. # Wait time between retries during leader elections. Default is 250ms.
  255. #retry.backoff: 250ms
  256. # Refresh metadata interval. Defaults to every 10 minutes.
  257. #refresh_frequency: 10m
  258. # Strategy for fetching the topics metadata from the broker. Default is false.
  259. #full: false
  260. # The number of concurrent load-balanced Kafka output workers.
  261. #worker: 1
  262. # The number of times to retry publishing an event after a publishing failure.
  263. # After the specified number of retries, events are typically dropped.
  264. # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
  265. # all events are published. Set max_retries to a value less than 0 to retry
  266. # until all events are published. The default is 3.
  267. #max_retries: 3
  268. # The number of seconds to wait before trying to republish to Kafka
  269. # after a network error. After waiting backoff.init seconds, the Beat
  270. # tries to republish. If the attempt fails, the backoff timer is increased
  271. # exponentially up to backoff.max. After a successful publish, the backoff
  272. # timer is reset. The default is 1s.
  273. #backoff.init: 1s
  274. # The maximum number of seconds to wait before attempting to republish to
  275. # Kafka after a network error. The default is 60s.
  276. #backoff.max: 60s
  277. # The maximum number of events to bulk in a single Kafka request. The default
  278. # is 2048.
  279. #bulk_max_size: 2048
  280. # Duration to wait before sending bulk Kafka request. 0 is no delay. The default
  281. # is 0.
  282. #bulk_flush_frequency: 0s
  283. # The number of seconds to wait for responses from the Kafka brokers before
  284. # timing out. The default is 30s.
  285. #timeout: 30s
  286. # The maximum duration a broker will wait for number of required ACKs. The
  287. # default is 10s.
  288. #broker_timeout: 10s
  289. # The number of messages buffered for each Kafka broker. The default is 256.
  290. #channel_buffer_size: 256
  291. # The keep-alive period for an active network connection. If 0s, keep-alives
  292. # are disabled. The default is 0 seconds.
  293. #keep_alive: 0
  294. # Sets the output compression codec. Must be one of none, snappy and gzip. The
  295. # default is gzip.
  296. compression: gzip
  297. # Set the compression level. Currently only gzip provides a compression level
  298. # between 0 and 9. The default value is chosen by the compression algorithm.
  299. #compression_level: 4
  300. # The maximum permitted size of JSON-encoded messages. Bigger messages will be
  301. # dropped. The default value is 1000000 (bytes). This value should be equal to
  302. # or less than the broker's message.max.bytes.
  303. max_message_bytes: 1000000
  304. # The ACK reliability level required from broker. 0=no response, 1=wait for
  305. # local commit, -1=wait for all replicas to commit. The default is 1. Note:
  306. # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
  307. # on error.
  308. required_acks: 1
  309. # The configurable ClientID used for logging, debugging, and auditing
  310. # purposes. The default is "beats".
  311. #client_id: beats
  312. # Use SSL settings for HTTPS.
  313. #ssl.enabled: true
  314. # Controls the verification of certificates. Valid values are:
  315. # * full, which verifies that the provided certificate is signed by a trusted
  316. # authority (CA) and also verifies that the server's hostname (or IP address)
  317. # matches the names identified within the certificate.
  318. # * strict, which verifies that the provided certificate is signed by a trusted
  319. # authority (CA) and also verifies that the server's hostname (or IP address)
  320. # matches the names identified within the certificate. If the Subject Alternative
  321. # Name is empty, it returns an error.
  322. # * certificate, which verifies that the provided certificate is signed by a
  323. # trusted authority (CA), but does not perform any hostname verification.
  324. # * none, which performs no verification of the server's certificate. This
  325. # mode disables many of the security benefits of SSL/TLS and should only be used
  326. # after very careful consideration. It is primarily intended as a temporary
  327. # diagnostic mechanism when attempting to resolve TLS errors; its use in
  328. # production environments is strongly discouraged.
  329. # The default value is full.
  330. #ssl.verification_mode: full
  331. # List of supported/valid TLS versions. By default all TLS versions from 1.1
  332. # up to 1.3 are enabled.
  333. #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]
  334. # List of root certificates for HTTPS server verifications
  335. #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  336. # Certificate for SSL client authentication
  337. #ssl.certificate: "/etc/pki/client/cert.pem"
  338. # Client certificate key
  339. #ssl.key: "/etc/pki/client/cert.key"
  340. # Optional passphrase for decrypting the certificate key.
  341. #ssl.key_passphrase: ''
  342. # Configure cipher suites to be used for SSL connections
  343. #ssl.cipher_suites: []
  344. # Configure curve types for ECDHE-based cipher suites
  345. #ssl.curve_types: []
  346. # Configure what types of renegotiation are supported. Valid options are
  347. # never, once, and freely. Default is never.
  348. #ssl.renegotiation: never
  349. # Configure a pin that can be used to do extra validation of the verified certificate chain,
  350. # this allow you to ensure that a specific certificate is used to validate the chain of trust.
  351. #
  352. # The pin is a base64 encoded string of the SHA-256 fingerprint.
  353. #ssl.ca_sha256: ""
  354. # A root CA HEX encoded fingerprint. During the SSL handshake if the
  355. # fingerprint matches the root CA certificate, it will be added to
  356. # the provided list of root CAs (`certificate_authorities`), if the
  357. # list is empty or not defined, the matching certificate will be the
  358. # only one in the list. Then the normal SSL validation happens.
  359. #ssl.ca_trusted_fingerprint: ""
  360. # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
  361. #kerberos.enabled: true
  362. # Authentication type to use with Kerberos. Available options: keytab, password.
  363. #kerberos.auth_type: password
  364. # Path to the keytab file. It is used when auth_type is set to keytab.
  365. #kerberos.keytab: /etc/security/keytabs/kafka.keytab
  366. # Path to the Kerberos configuration.
  367. #kerberos.config_path: /etc/krb5.conf
  368. # The service name. Service principal name is contructed from
  369. # service_name/hostname@realm.
  370. #kerberos.service_name: kafka
  371. # Name of the Kerberos user.
  372. #kerberos.username: elastic
  373. # Password of the Kerberos user. It is used when auth_type is set to password.
  374. #kerberos.password: changeme
  375. # Kerberos realm.
  376. #kerberos.realm: ELASTIC
  377. # Enables Kerberos FAST authentication. This may
  378. # conflict with certain Active Directory configurations.
  379. #kerberos.enable_krb5_fast: false
  380. # ================================= Processors =================================
  381. processors:
  382. - add_host_metadata:
  383. when.not.contains.tags: forwarded
  384. - add_cloud_metadata: ~
  385. - add_docker_metadata: ~
  386. - add_kubernetes_metadata: ~
  387. # ================================== Logging ===================================
  388. # Sets log level. The default log level is info.
  389. # Available log levels are: error, warning, info, debug
  390. #logging.level: debug
  391. # At debug level, you can selectively enable logging only for some components.
  392. # To enable all selectors use ["*"]. Examples of other selectors are "beat",
  393. # "publisher", "service".
  394. #logging.selectors: ["*"]
  395. # ============================= X-Pack Monitoring ==============================
  396. # Filebeat can export internal metrics to a central Elasticsearch monitoring
  397. # cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
  398. # reporting is disabled by default.
  399. # Set to true to enable the monitoring reporter.
  400. #monitoring.enabled: false
  401. # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
  402. # Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
  403. # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
  404. #monitoring.cluster_uuid:
  405. # Uncomment to send the metrics to Elasticsearch. Most settings from the
  406. # Elasticsearch output are accepted here as well.
  407. # Note that the settings should point to your Elasticsearch *monitoring* cluster.
  408. # Any setting that is not set is automatically inherited from the Elasticsearch
  409. # output configuration, so if you have the Elasticsearch output configured such
  410. # that it is pointing to your Elasticsearch monitoring cluster, you can simply
  411. # uncomment the following line.
  412. #monitoring.elasticsearch:
  413. # ============================== Instrumentation ===============================
  414. # Instrumentation support for the filebeat.
  415. #instrumentation:
  416. # Set to true to enable instrumentation of filebeat.
  417. #enabled: false
  418. # Environment in which filebeat is running on (eg: staging, production, etc.)
  419. #environment: ""
  420. # APM Server hosts to report instrumentation results to.
  421. #hosts:
  422. # - http://localhost:8200
  423. # API Key for the APM Server(s).
  424. # If api_key is set then secret_token will be ignored.
  425. #api_key:
  426. # Secret token for the APM Server(s).
  427. #secret_token:
  428. # ================================= Migration ==================================
  429. # This allows to enable 6.7 migration aliases
  430. #migration.6_to_7.enabled: true
  • Run the Filebeat start command
  1. ./filebeat -e -c filebeat.yml

Background start command

  1. nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &

Stop commands

  1. ps -ef | grep filebeat
  2. kill -9 <process id>
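Whether started in the foreground or the background, Filebeat's built-in test subcommands can validate the configuration file and the connection to the Kafka brokers:

  1. # Validate the syntax and settings in filebeat.yml
  2. ./filebeat test config -c filebeat.yml
  3. # Test connectivity to the configured output, i.e. the Kafka brokers
  4. ./filebeat test output -c filebeat.yml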

VI. Verify that the configuration works

1. Test that Filebeat can collect the log files and send them to Kafka
  • On the Kafka servers, start consumers listening on the api_log and operation_log topics
  1. ./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic api_log
  2. ./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic operation_log
  • Write to the log files by hand, using the paths configured for collection in Filebeat
  1. echo "api log1111" > /data/gitegg/log/gitegg-service-system/api.log
  2. echo "operation log1111" > /data/gitegg/log/gitegg-service-system/operation.log
  • Check whether the consumers receive the pushed log content





2. Test that Logstash consumes the Kafka log topics and stores the log content in Elasticsearch

  • Write to the log files by hand
  1. echo "api log8888888888888888888888" > /data/gitegg/log/gitegg-service-system/api.log
  2. echo "operation loggggggggggggggggggg" > /data/gitegg/log/gitegg-service-system/operation.log

Two new indices are created automatically, named according to the rules configured in Logstash



The data-browse page shows the log data stored in Elasticsearch, which confirms our configuration has taken effect.
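The same check can be done from the command line instead of elasticsearch-head; a sketch using the credentials set earlier:

  1. # List the daily indices created by Logstash; the doc counts grow as logs arrive
  2. curl -u elastic:123456 "http://172.16.20.220:9200/_cat/indices/logstash_*?v"
  3. # Peek at one stored document
  4. curl -u elastic:123456 "http://172.16.20.220:9200/logstash_api-*/_search?size=1&pretty"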

VII. Configure Kibana for log statistics and dashboards

  • In the left-hand menu go to Management -> Kibana -> Data Views -> Create data view, enter logstash_* , select @timestamp, then click the Create data view button to finish.







  • Go to Analytics -> Discover, select logstash_* and query the logs



Source code:

Gitee: https://gitee.com/wmz1930/GitEgg

GitHub: https://github.com/wmz1930/GitEgg
