Using ELK to query and analyze nginx logs
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: hna-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hna-es-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /var/lib/elasticsearch
path.data: /data/components/elasticsearch
#
# Path to log files:
#
path.logs: /data/logs/elasticsearch
#path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.100.130"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- X-Pack ----------------------------------
xpack.ssl.key: /etc/elasticsearch/config/hna-es-1/hna-es-1.key
xpack.ssl.certificate: /etc/elasticsearch/config/hna-es-1/hna-es-1.crt
xpack.ssl.certificate_authorities: /etc/elasticsearch/config/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: certificate
elasticsearch1.yml
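The Memory section of the file above only states the rule of thumb; in 5.x/6.x the heap itself is set in jvm.options rather than in elasticsearch.yml. A minimal sketch, assuming an RPM/DEB install with its config under /etc/elasticsearch and a host with 8 GB of RAM (both assumptions, adjust to your machine):

# Give the heap roughly half of physical memory and keep -Xms equal to -Xmx
# so the JVM allocates it up front; the 4g values below assume an 8 GB host.
sed -i 's/^-Xms.*/-Xms4g/; s/^-Xmx.*/-Xmx4g/' /etc/elasticsearch/jvm.options
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options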
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: hna-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hna-es-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /var/lib/elasticsearch
path.data: /data/components/elasticsearch
#
# Path to log files:
#
path.logs: /data/logs/elasticsearch
#path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.100.129"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- X-Pack ----------------------------------
xpack.ssl.key: /etc/elasticsearch/config/hna-es-2/hna-es-2.key
xpack.ssl.certificate: /etc/elasticsearch/config/hna-es-2/hna-es-2.crt
xpack.ssl.certificate_authorities: /etc/elasticsearch/config/ca/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.ssl.verification_mode: certificate
elasticsearch2.yml
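The two files differ only in node.name, the SSL file paths, and the discovery list: each node points discovery.zen.ping.unicast.hosts at the other one (hna-es-1 appears to run on 192.168.100.129 and hna-es-2 on 192.168.100.130). After starting both nodes, a quick sketch of checking that they actually formed a single hna-es cluster; the elastic password below is a placeholder for whatever was set when X-Pack was configured:

# Both node names should appear, and cluster health should not be red.
curl -s -u elastic:changeme 'http://192.168.100.129:9200/_cat/nodes?v'
curl -s -u elastic:changeme 'http://192.168.100.129:9200/_cluster/health?pretty'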
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 8080

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.100.129:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "123456"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
kibana.yml
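With this file in place Kibana listens on 0.0.0.0:8080 and talks to Elasticsearch as the built-in kibana user. A quick sketch of a health check after starting the service (port and Elasticsearch address come from the config above, the rest is generic):

# Kibana's status API should answer with HTTP 200 and an overall state of "green"
# once it can reach Elasticsearch; run this on the Kibana host or substitute its address.
systemctl start kibana
curl -s http://localhost:8080/api/status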
Elasticsearch is written in Java, so a JDK must be installed before it can run.
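A quick sanity check before installing, as a sketch assuming a 5.x/6.x release of the stack, which runs on Java 8:

# Verify that a JDK is installed and visible on PATH; Elasticsearch 5.x/6.x expects Java 1.8.
java -version
echo $JAVA_HOME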
Kibana is written in Node.js.
When installing, note that the Elasticsearch version must not be lower than the Kibana version; once installation is complete, edit the configuration files as shown above and the stack is ready to use.
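To confirm the versions line up, both products report their own version. A sketch assuming packaged install paths (adjust to your layout):

# Elasticsearch prints its version on the root endpoint (add -u elastic:<password> once
# X-Pack security is enabled); Kibana prints its own with the --version flag.
curl -s http://192.168.100.129:9200 | grep number
/usr/share/kibana/bin/kibana --version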
For installing the X-Pack security component, see https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-installing-offline (offline installation).
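For hosts without internet access, that page describes installing from a downloaded zip. A sketch, where the archive path and version number are placeholders:

# Install X-Pack into Elasticsearch and Kibana from a local archive, then restart both services.
/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///tmp/x-pack-6.2.4.zip
systemctl restart elasticsearch
/usr/share/kibana/bin/kibana-plugin install file:///tmp/x-pack-6.2.4.zip
systemctl restart kibana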
The installation and configuration of Logstash deserve the most detailed attention; a sketch follows.
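The pipeline below is only a minimal sketch, not the configuration actually used here. It assumes Logstash was installed from the Elastic yum/apt repository (packaged paths under /usr/share/logstash and /etc/logstash), that nginx writes its default combined-format access log to /var/log/nginx/access.log, and that the elastic credentials shown have already been set up; all of these are assumptions to adapt.

# Write a basic pipeline: tail the nginx access log, parse it with the stock
# COMBINEDAPACHELOG grok pattern (nginx's default "combined" format matches it),
# and index the events into the cluster configured above.
cat > /etc/logstash/conf.d/nginx.conf <<'EOF'
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts    => ["192.168.100.129:9200"]
    user     => "elastic"
    password => "changeme"
    index    => "nginx-access-%{+YYYY.MM.dd}"
  }
}
EOF

# Check the pipeline syntax, then start the service.
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
systemctl start logstash

Once events are flowing, create an index pattern such as nginx-access-* in Kibana to query and visualize the nginx traffic.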