Learning the ELK Log Platform (5)
ELK Stack
Typically:
1. Developers cannot log in to production servers to inspect log data.
2. There are many systems, and their scattered log data is hard to search.
3. Log volume is large, queries are slow, and the data is not real-time enough.
4. A single call touches multiple systems, making it hard to locate data across them quickly.
ELK stack = Elasticsearch + Logstash + Kibana
Redis here acts as a loosely coupled broker: anything that can write to redis can feed the pipeline.
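As a quick illustration of that loose coupling (a sketch of my own, not from the original setup; the key name demo-key is hypothetical, and 192.168.1.6 is the redis host used later in this post), any producer that can push JSON onto a redis list can feed a Logstash redis input configured with data_type => "list":
redis-cli -h 192.168.1.6 LPUSH demo-key '{"message":"hello","type":"demo"}'
redis-cli -h 192.168.1.6 LLEN demo-key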
Elasticsearch configuration:
1. First install the JDK and set up the environment variables:
[root@nginx-proxy2 local]# rpm -ivh jdk-8u73-linux-x64.rpm
Preparing... ########################################### [100%]
1:jdk1.8.0_73 ########################################### [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
jfxrt.jar...
[root@nginx-proxy2 local]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin/:$PATH
[root@nginx-proxy2 local]# source /etc/profile.d/java.sh
[root@nginx-proxy2 local]# java -version
java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)
[root@nginx-proxy2 local]#
Installing Elasticsearch:
Installation reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#setup-installation
[root@nginx-proxy2 local]# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.0/elasticsearch-2.2.0.tar.gz
[root@nginx-proxy2 local]# tar xf elasticsearch-2.2.0.tar.gz
[root@nginx-proxy2 local]# ln -sv elasticsearch-2.2.0 elasticsearch
`elasticsearch' -> `elasticsearch-2.2.0'
[root@nginx-proxy2 local]#
Configuration file:
cluster.name: node1
node.name: "linux-node1"
node.master: true    # whether this node can be elected master
node.data: true    # whether this node stores data
index.number_of_shards: 5    # 5 shards per index
index.number_of_replicas: 1    # 1 replica per shard (the default)
path.data: /usr/local/elasticsearch/data    # data files; comma-separated to configure multiple paths
path.conf: /usr/local/elasticsearch/conf    # configuration files
path.work: /usr/local/elasticsearch/work    # temporary files
path.logs: /usr/local/elasticsearch/logs    # log files
path.plugins: /usr/local/elasticsearch/plugins    # plugins, mostly JS site plugins
bootstrap.mlockall: true    # swapping is slow; locking memory improves performance
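To confirm mlockall actually took effect after startup, the node process API reports it — a quick check of my own, not from the original transcript (if it shows false, the memlock ulimit for the user running Elasticsearch is usually too low):
curl 'http://127.0.0.1:9200/_nodes/process?pretty'
# look for "mlockall" : true in the output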
The modified configuration (this layout predates 2.0):
[root@nginx-proxy2 config]# grep "^[a-z]" elasticsearch.yml
cluster.name: node1
node.name: "linux-node1"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /usr/local/elasticsearch/data
path.conf: /usr/local/elasticsearch/conf
path.work: /usr/local/elasticsearch/work
path.logs: /usr/local/elasticsearch/logs
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
[root@nginx-proxy2 config]# mkdir /usr/local/elasticsearch/conf -p
[root@nginx-proxy2 config]# mkdir /usr/local/elasticsearch/logs -p
[root@nginx-proxy2 config]# mkdir /usr/local/elasticsearch/work -p
[root@nginx-proxy2 config]# mkdir /usr/local/elasticsearch/data -p
I only changed the following:
cluster.name: my-linuxea
node.name: "linuxea"
curl test:
[root@nginx-proxy2 elasticsearch]# su mark
[mark@nginx-proxy2 elasticsearch]$ bin/elasticsearch -d
[root@nginx-proxy2 ~]# curl 127.0.0.1:9200
{
"name" : "linuxea",
"cluster_name" : "my-linuxea",
"version" : {
"number" : "2.2.0",
"build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
"build_timestamp" : "2016-01-27T13:32:39Z",
"build_snapshot" : false,
"lucene_version" : "5.4.1"
},
"tagline" : "You Know, for Search"
}
[root@nginx-proxy2 ~]#
Shut it down:
[root@nginx-proxy2 ~]# jps
3515 Elasticsearch
3564 Jps
[root@nginx-proxy2 ~]# kill 3515
Download the service startup script from GitHub:
[root@nginx-proxy2 ~]# git clone https://github.com/elastic/elasticsearch-servicewrapper.git
Initialized empty Git repository in /root/elasticsearch-servicewrapper/.git/
remote: Counting objects: 184, done.
remote: Total 184 (delta 0), reused 0 (delta 0), pack-reused 184
Receiving objects: 100% (184/184), 4.55 MiB | 245 KiB/s, done.
Resolving deltas: 100% (53/53), done.
[root@nginx-proxy2 ~]# mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
[root@nginx-proxy2 ~]# /usr/local/elasticsearch/bin/service/elasticsearch
Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]
Commands:
console Launch in the current console.
start Start in the background as a daemon process.
stop Stop if running as a daemon or in another console.
restart Stop if running and then start.
condrestart Restart only if already running.
status Query the current status.
install Install to start automatically when system boots.
remove Uninstall.
dump Request a Java thread dump if running.
Run the install command:
[root@nginx-proxy2 ~]# /usr/local/elasticsearch/bin/service/elasticsearch install
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
[root@nginx-proxy2 ~]# ls /etc/init.d/elasticsearch
/etc/init.d/elasticsearch
[root@nginx-proxy2 ~]# chkconfig --list |grep ela
elasticsearch 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@nginx-proxy2 ~]#
Unfortunately, it is not supported yet: Clinton Gormley has not updated the service wrapper on GitHub for 2.2, so Elasticsearch still cannot be started this way.
#############################################################################################
ES + Head + Logstash
Yesterday's tar.gz installation ran into many problems, so this time we install via yum.
Reference (the definitive guide): http://www.learnes.net/
1. Install Java
[root@ELK1 ~]# yum -y install java-1.8.0-openjdk* git
[root@ELK1 ~]# java -version
openjdk version "1.8.0_71"
OpenJDK Runtime Environment (build 1.8.0_71-b15)
OpenJDK 64-Bit Server VM (build 25.71-b15, mixed mode)
2. Install Elasticsearch
[root@ELK1 ~]# yum install https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.2.0/elasticsearch-2.2.0.rpm
The installed paths can be listed with rpm -ql elasticsearch.
3. Edit the configuration file:
[root@ELK1 elasticsearch]# grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: linuxea-my
node.name: "linuxea-ES1"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /data/es-data
path.work: /data/es-worker
path.logs: /var/log/elasticsearch/
path.plugins: /usr/share/elasticsearch/plugins
bootstrap.mlockall: true
network.bind_host: 10.10.0.200
network.publish_host: 10.10.0.200
network.host: 10.10.0.200
http.port: 9200
discovery.zen.ping.multicast.enabled: false
#discovery.zen.ping.timeout: 3s
discovery.zen.ping.unicast.hosts: ["10.10.0.201", "127.0.0.1"]
=============================== Configuration notes ========================
cluster.name: elasticsearch #cluster name (also used for multicast discovery)
node.name: "linux-ES1" #node name; must not clash with other nodes
node.master: true #whether the node can be elected master
node.data: true #whether the node stores data
index.number_of_shards: 5 #number of shards per index
index.number_of_replicas: 1 #number of replicas per shard
path.conf: /usr/local/elasticsearch/config/ #configuration file path
path.data: /data/es-data #data directory path
path.work: /data/es-worker #work directory path
path.logs: /usr/local/elasticsearch/logs/ #log file path
path.plugins: /usr/local/elasticsearch/plugins #plugin path
bootstrap.mlockall: true #lock memory so it is not swapped out
discovery.zen.ping.unicast.hosts: ["10.10.0.201", "127.0.0.1"] #peer node IPs; required for unicast discovery, and head needs it
=================================================================
4. Create the data directories
[root@ELK1 /]# mkdir /data/es-data -p
[root@ELK1 /]# mkdir /data/es-worker -p
[root@ELK1 /]# chown elasticsearch.elasticsearch data -R
ES2
[root@ELK2 local]# grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: linuxea-my
node.name: "linuxea-ES2"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /data/es-data
path.work: /data/es-worker
path.logs: /var/log/elasticsearch/
path.plugins: /usr/share/elasticsearch/plugins
bootstrap.mlockall: true
network.host: 10.10.0.201
http.port: 9200
[root@ELK2 local]# mkdir /data/es-data -p
[root@ELK2 local]# mkdir /data/es-worker -p
[root@ELK2 local]# chown elasticsearch.elasticsearch /data -R
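With both nodes configured (and assuming both services have been started via the RPM's init script, service elasticsearch start), you can check that they found each other over unicast — a quick verification of my own, not from the original transcript; both cat APIs exist in the 2.x line:
curl 'http://10.10.0.200:9200/_cluster/health?pretty'
curl 'http://10.10.0.200:9200/_cat/nodes?v'
# number_of_nodes should be 2, and one node should be marked as master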
Installing elasticsearch-head
[root@ELK1 local]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/local/elasticsearch/plugin/head
In production you may also need to configure the following:
max_file_descriptors: 64000
/etc/sysctl.conf:
sysctl -w vm.max_map_count=262144
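A sketch of where these two knobs usually live (my addition, using the values above): the file-descriptor limit is set per user in /etc/security/limits.conf, and putting the sysctl entry in /etc/sysctl.conf persists it across reboots:
# /etc/security/limits.conf
elasticsearch soft nofile 64000
elasticsearch hard nofile 64000
# /etc/sysctl.conf, then reload with sysctl -p
vm.max_map_count = 262144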
Installing Logstash
[root@ELK1 /]# yum install https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.2.2-1.noarch.rpm
[root@ELK1 /]# rpm -ql logstash |less
Input and output test:
[root@ELK1 /]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{codec => rubydebug} }'
Settings: Default pipeline workers: 1
Logstash startup completed
hello word
{
"message" => "hello word",
"@version" => "1",
"@timestamp" => "2016-03-06T12:25:26.807Z",
"host" => "ELK1"
}
linuxea.com
{
"message" => "linuxea.com",
"@version" => "1",
"@timestamp" => "2016-03-06T12:25:31.943Z",
"host" => "ELK1"
}
Logstash writing data into Elasticsearch
Reference: https://www.elastic.co/guide/en/logstash/current/configuration.html
[root@ELK1 /]# /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["10.10.0.200:9200"] } stdout { codec => rubydebug } }'
Settings: Default pipeline workers: 1
Logstash startup completed
hello word
{
"message" => "hello word",
"@version" => "1",
"@timestamp" => "2016-03-06T12:46:08.504Z",
"host" => "ELK1"
}
www.linuxea.com
{
"message" => "www.linuxea.com",
"@version" => "1",
"@timestamp" => "2016-03-06T12:46:18.127Z",
"host" => "ELK1"
}
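To confirm the events landed in Elasticsearch, list the indices — the elasticsearch output creates a logstash-YYYY.MM.dd index by default (a quick check of my own, not part of the original transcript):
curl 'http://10.10.0.200:9200/_cat/indices?v'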
############################################################################################
[root@node conf.d]# cat /etc/logstash/conf.d/logstash.conf
input {
file {
path => "/var/log/messages"
}
}
output {
file {
path => "/logstash-test/%{+YYYY-MM-dd-HH}.messages.gz"
gzip => true
}
# elasticsearch {
# hosts => "10.10.0.200"
# protocol => "http"
# index => "system-messages-%{+YYYY-MM-dd}"
#}
}
[root@node conf.d]#
Create the directory and grant permissions:
[root@node conf.d]# mkdir /logstash-test
[root@node conf.d]# chown logstash.logstash /logstash-test
[root@node conf.d]# chown logstash.logstash /var/log/messages
Try writing to the log:
[root@node conf.d]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 >> /var/log/messages
Check:
[root@node conf.d]# ll /logstash-test/
total 8
-rw-r--r-- 1 logstash logstash 126 Mar 8 07:51 2016-03-08-15.messages.gz
-rw-r--r-- 1 logstash logstash 431 Mar 8 07:50 2016-03-08.messages.gz
[root@node conf.d]#
As for permissions: when I had not changed the permissions on messages, Logstash printed a warning; I have not verified whether logging still works correctly after changing them. If you see a problem anywhere here, please let me know, thanks!
{:timestamp=>"2016-03-08T07:42:00.876000-0800", :message=>"failed to open /var/log/messages: Permission denied - /var/log/messages", :level=>:warn}
{:timestamp=>"2016-03-08T07:43:14.534000-0800", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
##########################################################################################
Install redis. Logstash will store logs in redis, and a Logstash instance on the redis host will then send them on to ES.
yum -y install redis
vim /etc/redis.conf
bind 192.168.1.6
/etc/init.d/redis start
Connect:
redis-cli -h 192.168.1.6
Logstash configuration test
[root@elk1 ~]# vim /etc/logstash.conf
input {
file {
path => "/var/log/messages"
type => "system-log"
}
file {
path => "/root/test.log"
type => "test.log"
}
}
output {
if [type] == "system-log" {
elasticsearch {
host => ["192.168.1.4:9200","192.168.1.5:9200"]
index => "system-messages-%{+YYYY.MM.dd.HH}"
protocol => "http"
workers => 5
template_overwrite => true
}
}
if [type] == "test.log" {
elasticsearch {
host => ["192.168.1.4:9200","192.168.1.5:9200"]
index => "test.log-%{+YYYY.MM.dd.HH}"
protocol => "http"
workers => 5
template_overwrite => true
}
}
redis {
host => "192.168.1.6" #redis host IP
data_type => "list" #data type: list
key => "test.log" #key the events are stored under
port => "6379" #port
db => "1" #db number; lets you separate different log types
}
}
Append some content to /var/log/messages for testing:
[root@elk1 ~]# cat /etc/logstash.conf >> /var/log/messages
[root@elk1 ~]# cat /etc/logstash.conf >> /var/log/messages
Log in to redis and check:
[root@yum-down ~]# redis-cli -h 192.168.1.6
redis 192.168.1.6:6379> select 1
OK
redis 192.168.1.6:6379[1]> keys *
1) "test.log"
redis 192.168.1.6:6379[1]> LLEN test.log    # how many entries
(integer) 75
redis 192.168.1.6:6379[1]> LINDEX test.log -1    # the last entry
"{\"message\":\"}\",\"@version\":\"1\",\"@timestamp\":\"2016-03-20T11:24:04.602Z\",\"host\":\"elk1\",\"path\":\"/var/log/messages\",\"type\":\"system-log\"}"
redis 192.168.1.6:6379[1]>
After testing, install Logstash on the redis machine to read the redis contents into ES:
tar xf logstash-1.5.5.tar.gz
ln -sv logstash-1.5.5 logstash
Logstash config file (shipper side):
[root@elk1 ~]# cat /etc/logstash.conf
input {
file {
path => "/var/log/messages"
type => "system-log"
}
}
output {
redis {
host => "192.168.1.6"
data_type => "list"
key => "system.messages"
port => "6379"
db => "1"
}
}
[root@elk1 ~]#
Logstash config file on the redis host (indexer side):
[root@yum-down ~]# cat /etc/logstash.conf
input {
redis {
host => "192.168.1.6"
data_type => "list"
key => "test.log"
port => "6379"
db => "1"
}
}
output {
elasticsearch {
host => ["192.168.1.4:9200","192.168.1.5:9200"]
index => "redis-system-messages-%{+YYYY.MM.dd.HH}"
protocol => "http"
workers => 5
template_overwrite => true
}
}
[root@yum-down ~]#
[root@elk1 ~]# cat /etc/shadow >> /var/log/messages
After appending, you can see log events flowing in.
###########################################################################################
Installing nginx
yum -y install pcre pcre-devel openssl-devel
wget -P /usr/local http://nginx.org/download/nginx-1.6.3.tar.gz
cd /usr/local && tar xf nginx-1.6.3.tar.gz
groupadd -r nginx
useradd -g nginx -r nginx
ln -s /usr/local/nginx-1.6.3 /usr/local/nginx
Compile:
./configure \
--prefix=/usr/local/nginx \
--conf-path=/etc/nginx/nginx.conf \
--user=nginx --group=nginx \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--with-http_ssl_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-http_flv_module \
--with-http_mp4_module \
--http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi
make && make install
mkdir -pv /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}
mkdir /usr/local/nginx/logs/
/usr/local/nginx/sbin/nginx
Edit the nginx config file:
vim /etc/nginx/nginx.conf
Add the following log format:
#access_log logs/access.log main;
log_format logstash_json '{"@timestamp":"$time_iso8601",'
'"host": "$server_addr",'
'"client": "$remote_addr",'
'"size": $body_bytes_sent,'
'"responsetime": $request_time,'
'"domain": "$host",'
'"url":"$uri",'
'"referer": "$http_referer",'
'"agent": "$http_user_agent",'
'"status":"$status"}';
Then change the access_log line to:
access_log logs/access_json.access.log logstash_json;
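Before shipping this file it is worth checking each line really is valid JSON — unescaped quotes in $http_user_agent are a common source of parse failures. A quick sanity check of my own (assumes python is available):
tail -n 1 /usr/local/nginx/logs/access_json.access.log | python -m json.tool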
Generate some traffic to test:
[root@elk1 logs]# ab -n1000 -c10 http://192.168.1.4:81/
Check the log:
[root@elk1 nginx]# cat /usr/local/nginx/logs/access_json.access.log
{"@timestamp":"2016-03-20T05:46:57-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 612,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"200"}
{"@timestamp":"2016-03-20T05:46:57-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 570,"responsetime": 0.000,"domain": "192.168.1.4","url":"/favicon.ico","referer": "http://192.168.1.4:81/","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"404"}
{"@timestamp":"2016-03-20T05:46:59-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-20T05:46:59-07:00","host": "192.168.1.4","client": "192.168.1.3","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
[root@elk1 nginx]#
Generate some logs for Logstash to collect:
[root@elk1 nginx]# ab -n1000 -c10 http://192.168.1.4:81/
[root@elk1 logs]# ll
total 440
-rw-r--r-- 1 root root 449286 Mar 20 05:57 access_json.access.log
[root@elk1 logs]#
Once the test log looks good, modify the Logstash config to push access_json.access.log to redis:
[root@elk1 logs]# cat /etc/logstash.conf
input {
# file {
# path => "/var/log/messages"
# type => "system-log"
# }
file {
path => "/usr/local/nginx/logs/access_json.access.log"
codec => "json"
}
}
output {
# redis {
# host => "192.168.1.6"
# data_type => "list"
# key => "system.messages"
# port => "6379"
# db => "1"
#}
redis {
host => "192.168.1.6"
data_type => "list"
key => "nginx-access.log"
port => "6379"
db => "2"
}
}
[root@elk1 logs]#
Then simulate some more requests:
[root@elk1 logs]# ab -n1000 -c10 http://192.168.1.4:81/
Then check on the redis host whether the data arrived:
redis 192.168.1.6:6379[2]> select 2
OK
redis 192.168.1.6:6379[2]> keys *
1) "nginx-access.log"
redis 192.168.1.6:6379[2]> llen nginx-access.log
(integer) 1000
redis 192.168.1.6:6379[2]>
With the data confirmed in redis, modify the Logstash config on the redis host to forward it to ES:
[root@yum-down ~]# cat /etc/logstash.conf
input {
# redis {
# host => "192.168.1.6"
# data_type => "list"
# key => "test.log"
# port => "6379"
# db => "1"
#}
redis {
host => "192.168.1.6"
data_type => "list"
key => "nginx-access.log" #key名称和redis保持一致
port => "6379"
db => "2" #db2
}
}
output {
# elasticsearch {
# host => ["192.168.1.4:9200","192.168.1.5:9200"]
# index => "redis-system-messages-%{+YYYY.MM.dd.HH}"
# protocol => "http"
# workers => 5
# template_overwrite => true
# }
elasticsearch {
host => ["192.168.1.4:9200","192.168.1.5:9200"]
index => "nginx-access-log-%{+YYYY.MM.dd.HH}" #修改es中日志名称
protocol => "http"
workers => 5
template_overwrite => true
}
}
[root@yum-down ~]#
##############################################################################################
- ELK stack
The ELK stack is a combination of three open-source tools — Elasticsearch, Logstash, and Kibana — which together form a powerful real-time log collection, analysis, and visualization system.
Logstash: the log collection tool. It can gather logs from local disk, network services (listening on its own ports to receive logs), or message queues, then filter and parse them and write them into Elasticsearch.
Elasticsearch: distributed log storage and search, with native clustering support; it can build a per-period index over the logs to speed up querying and access.
Kibana: a web UI that visualizes the logs stored in Elasticsearch and can build impressive dashboards.
Topology
nginx proxies a two-node Elasticsearch cluster; Logstash on each client ships its logs to redis, and a Logstash instance on the redis host forwards the data to ES.
Environment
[root@localhost logs]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@localhost logs]# uname -rm
2.6.32-504.el6.x86_64 x86_64
[root@localhost logs]#
Software used:
elasticsearch-1.7.4.tar.gz
kibana-4.1.1-linux-x64.tar.gz
logstash-1.5.5.tar.gz
Time sync:
ntpdate time.nist.gov
Elasticsearch cluster installation and configuration
1. On 192.168.1.8, download and install Elasticsearch:
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.4.tar.gz
cd /usr/local
tar xf elasticsearch-1.7.4.tar.gz
ln -s elasticsearch-1.7.4 elasticsearch
Modify the config file:
vim elasticsearch/config/elasticsearch.yml
cluster.name: LinuxEA    # cluster name
node.name: "linuxEA-ES1"    # node name
node.master: true    # whether it can be elected master
node.data: true    # whether it stores data
index.number_of_shards: 5    # shards per index
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config/    # config file path
path.data: /data/es-data    # data path
path.work: /data/es-worker
path.logs: /usr/local/elasticsearch/logs/    # logs
path.plugins: /usr/local/elasticsearch/plugins    # plugins
bootstrap.mlockall: true    # lock memory so it is not swapped out
network.host: 192.168.1.8
http.port: 9200
Create directories
mkdir /data/es-data -p
mkdir /data/es-worker -p
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service startup script
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Modify the wrapper config
vim /usr/local/elasticsearch/bin/service/elasticsearch.conf
set.default.ES_HOME=/usr/local/elasticsearch #ES home; must match the actual installation path
set.default.ES_HEAP_SIZE=1024
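ES_HEAP_SIZE here is in megabytes, so 1024 is a 1 GB heap. As a rule of thumb (my note, not from the original), give Elasticsearch about half of the machine's RAM and leave the rest to the file-system cache, e.g. on an 8 GB host:
set.default.ES_HEAP_SIZE=4096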
Start
[root@elk1 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4355
[root@elk1 local]# netstat -tlntp|grep -E "9200|9300"
tcp 0 0 ::ffff:192.168.1.8:9300 :::* LISTEN 4357/java
tcp 0 0 ::ffff:192.168.1.8:9200 :::* LISTEN 4357/java
[root@elk1 local]#
curl
[root@elk1 local]# curl http://192.168.1.8:9200
{
"status" : 200,
"name" : "linuxEA-ES1",
"cluster_name" : "LinuxEA",
"version" : {
"number" : "1.7.4",
"build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
"build_timestamp" : "2015-12-15T11:25:18Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
[root@elk1 local]#
Elasticsearch2
2. On 192.168.1.7, configure the second node:
[root@elk2 local]# vim elasticsearch/config/elasticsearch.yml
cluster.name: LinuxEA
node.name: "linuxEA-ES2"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config/
path.data: /data/es-data
path.work: /data/es-worker
path.logs: /usr/local/elasticsearch/logs/
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
network.host: 192.168.1.7
http.port: 9200
Create directories
mkdir /data/es-data -p
mkdir /data/es-worker -p
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
Download the service startup script
git clone https://github.com/elastic/elasticsearch-servicewrapper.git
mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
/usr/local/elasticsearch/bin/service/elasticsearch install
Modify the wrapper config
vim /usr/local/elasticsearch/bin/service/elasticsearch.conf
set.default.ES_HOME=/usr/local/elasticsearch #ES home; must match the actual installation path
set.default.ES_HEAP_SIZE=1024
Start
[root@elk2 local]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:4355
[root@elk2 ~]# netstat -tlntp|grep -E "9200|9300"
tcp 0 0 ::ffff:192.168.1.7:9300 :::* LISTEN 4568/java
tcp 0 0 ::ffff:192.168.1.7:9200 :::* LISTEN 4568/java
[root@elk2 ~]#
curl
[root@elk2 ~]# curl http://192.168.1.7:9200
{
"status" : 200,
"name" : "linuxEA-ES2",
"cluster_name" : "LinuxEA",
"version" : {
"number" : "1.7.4",
"build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
"build_timestamp" : "2015-12-15T11:25:18Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
[root@elk2 ~]#
Cluster plugin: elasticsearch-head
3. Install elasticsearch-head on 192.168.1.7. In the head UI, a star marks the master node and a dot marks a data (worker) node.
[root@elk2 ~]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
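Site plugins in the 1.x line are served by Elasticsearch itself, so the cluster overview should then be reachable in a browser (my note, not a transcript):
http://192.168.1.7:9200/_plugin/head/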
redis+logstash
4. On 192.168.1.6, install redis + Logstash, mainly to forward the data in redis on to ES.
Install the Java dependency:
yum -y install java-1.8.0 lrzsz git
wget -P /usr/local https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz
cd /usr/local
tar xf logstash-1.5.5.tar.gz
ln -s logstash-1.5.5 logstash
Startup script
[root@localhost local]# vim /etc/init.d/logstash
#!/bin/sh
# Init script for logstash
# Maintained by Elasticsearch
# Generated by pleaserun.
# Implemented based on LSB Core 3.1:
# * Sections: 20.2, 20.3
#
### BEGIN INIT INFO
# Provides: logstash
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description:
# Description: Starts Logstash as a daemon.
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH
if [ `id -u` -ne 0 ]; then
echo "You need root privileges to run this script"
exit 1
fi
name=logstash
pidfile="/var/run/$name.pid"
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/usr/local/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_FILE=/etc/logstash.conf
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""
[ -r /etc/default/$name ] && . /etc/default/$name
[ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name
program=/usr/local/logstash/bin/logstash
args="agent -f ${LS_CONF_FILE} -l ${LS_LOG_FILE} ${LS_OPTS}"
start() {
JAVA_OPTS=${LS_JAVA_OPTS}
HOME=${LS_HOME}
export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
# set ulimit as (root, presumably) first, before we drop privileges
ulimit -n ${LS_OPEN_FILES}
# Run the program!
nice -n ${LS_NICE} sh -c "
cd $LS_HOME
ulimit -n ${LS_OPEN_FILES}
exec \"$program\" $args
" > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &
# Generate the pidfile from here. If we instead made the forked process
# generate it there will be a race condition between the pidfile writing
# and a process possibly asking for status.
echo $! > $pidfile
echo "$name started."
return 0
}
stop() {
# Try a few times to kill TERM the program
if status ; then
pid=`cat "$pidfile"`
echo "Killing $name (pid $pid) with SIGTERM"
kill -TERM $pid
# Wait for it to exit.
for i in 1 2 3 4 5 ; do
echo "Waiting $name (pid $pid) to die..."
status || break
sleep 1
done
if status ; then
echo "$name stop failed; still running."
else
echo "$name stopped."
fi
fi
}
status() {
if [ -f "$pidfile" ] ; then
pid=`cat "$pidfile"`
if kill -0 $pid > /dev/null 2> /dev/null ; then
# process by this pid is running.
# It may not be our pid, but that's what you get with just pidfiles.
# TODO(sissel): Check if this process seems to be the same as the one we
# expect. It'd be nice to use flock here, but flock uses fork, not exec,
# so it makes it quite awkward to use in this case.
return 0
else
return 2 # program is dead but pid file exists
fi
else
return 3 # program is not running
fi
}
force_stop() {
if status ; then
stop
status && kill -KILL `cat "$pidfile"`
fi
}
case "$1" in
start)
status
code=$?
if [ $code -eq 0 ]; then
echo "$name is already running"
else
start
code=$?
fi
exit $code
;;
stop) stop ;;
force-stop) force_stop ;;
status)
status
code=$?
if [ $code -eq 0 ] ; then
echo "$name is running"
else
echo "$name is not running"
fi
exit $code
;;
restart)
stop && start
;;
reload)
stop && start
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|force-stop|status|restart}" >&2
exit 3
;;
esac
exit $?
Enable at boot
[root@localhost local]# chmod +x /etc/init.d/logstash
chkconfig --add logstash
chkconfig logstash on
1. Edit the Logstash config file
[root@localhost local]# vim /etc/logstash.conf
input { #collect logs from standard input
stdin {}
}
output {
elasticsearch { #output the logs to ES
host => ["172.16.4.102:9200","172.16.4.103:9200"] #multiple hosts may be listed, or a single host of the cluster
protocol => "http"
}
}
2. Write some data manually
[root@localhost local]# /usr/local/logstash/bin/logstash -f /etc/logstash.conf
Logstash startup completed
hello word!
3. After writing, check ES: the data has been written and an index was created automatically.
4. redis
1. Install redis
yum -y install redis
vim /etc/redis.conf
bind 192.168.1.6
/etc/init.d/redis start
2. Install Logstash as above.
3. logstash + redis
Logstash reads the redis contents into ES:
cat /etc/logstash.conf
input {
redis {
host => "192.168.1.6"
data_type => "list"
key => "nginx-access.log"
port => "6379"
db => "2"
}
}
output {
elasticsearch {
host => ["192.168.1.7:9200","192.168.1.8:9200"]
index => "nginx-access-log-%{+YYYY.MM.dd}"
protocol => "http"
workers => 5
template_overwrite => true
}
}
- nginx + logstash example
5. On 192.168.1.4, install Logstash and nginx; Logstash ships the nginx data to redis.
Install Logstash as in step 4.
yum -y install pcre pcre-devel openssl-devel openssl
wget -P /usr/local http://nginx.org/download/nginx-1.6.3.tar.gz
cd /usr/local && tar xf nginx-1.6.3.tar.gz
groupadd -r nginx
useradd -g nginx -r nginx
ln -s /usr/local/nginx-1.6.3 /usr/local/nginx
Compile and install:
./configure \
--prefix=/usr/local/nginx \
--conf-path=/etc/nginx/nginx.conf \
--user=nginx --group=nginx \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--with-http_ssl_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-http_flv_module \
--with-http_mp4_module \
--http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi
make && make install
mkdir -pv /var/tmp/nginx/{client,fastcgi,proxy,uwsgi}
mkdir /usr/local/nginx/logs/
/usr/local/nginx/sbin/nginx
Modify the log format:
vim /etc/nginx/nginx.conf
log_format logstash_json '{"@timestamp":"$time_iso8601",'
'"host": "$server_addr",'
'"client": "$remote_addr",'
'"size": $body_bytes_sent,'
'"responsetime": $request_time,'
'"domain": "$host",'
'"url":"$uri",'
'"referer": "$http_referer",'
'"agent": "$http_user_agent",'
'"status":"$status"}';
access_log logs/access_json.access.log logstash_json;
The log is being generated:
[root@localhost nginx]# ll logs/
total 8
-rw-r--r--. 1 root root 6974 Mar 31 08:44 access_json.access.log
The log format has taken effect:
[root@localhost nginx]# cat /usr/local/nginx/logs/access_json.access.log
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
{"@timestamp":"2016-03-31T08:44:48-07:00","host": "192.168.1.4","client": "192.168.1.200","size": 0,"responsetime": 0.000,"domain": "192.168.1.4","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36","status":"304"}
Ship the nginx log to redis:
[root@elk1 logs]# cat /etc/logstash.conf
input {
file {
path => "/usr/local/nginx/logs/access_json.access.log"
codec => "json"
}
}
output {
redis {
host => "192.168.1.6"
data_type => "list"
key => "nginx-access.log"
port => "6379"
db => "2"
}
}
[root@elk1 logs]#
Start Logstash on the redis host and on the nginx host respectively:
nohup /usr/local/logstash/bin/logstash -f /etc/logstash.conf &
- ES + Kibana
6. On 192.168.1.7, ES + Kibana:
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.7:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
On 192.168.1.8, ES + Kibana:
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
tar xf kibana-4.1.1-linux-x64.tar.gz
ln -sv kibana-4.1.1-linux-x64 kibana
vim /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://192.168.1.8:9200"
pid_file: /var/run/kibana.pid
log_file: /usr/local/kibana/kibana.log
nohup ./kibana/bin/kibana &
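Kibana 4 listens on port 5601 by default, so a quick check that both instances came up (my addition, not from the original transcript):
netstat -tlnp | grep 5601
curl -I http://192.168.1.7:5601/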
- nginx proxy
7. On 192.168.1.200, nginx reverse-proxies ES + Kibana (192.168.1.7 and 192.168.1.8).
Access control is based on both an account and IP addresses:
auth_basic "Only for VIPs";    #realm name shown to the user
auth_basic_user_file /etc/nginx/users/.htpasswd;    #path to the (hidden) user/password file
deny 172.16.0.1;    #deny 172.16.0.1; allow works the same way
#for example, to allow only the 172.16.0.0/16 range and reject everyone else:
allow 172.16.0.0/16; deny all;
The full config:
[root@localhost nginx]# vim nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format logstash_json '{"@timestamp":"$time_iso8601",'
'"host": "$server_addr",'
'"client": "$remote_addr",'
'"size": $body_bytes_sent,'
'"responsetime": $request_time,'
'"domain": "$host",'
'"url":"$uri",'
'"referer": "$http_referer",'
'"agent": "$http_user_agent",'
'"status":"$status"}';
access_log logs/access_json.access.log logstash_json;
sendfile on;
keepalive_timeout 65;
upstream kibana { #define the backend server group
server 192.168.1.8:5601 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.7:5601 weight=1 max_fails=2 fail_timeout=2;
}
server {
listen 80;
server_name localhost;
auth_basic "Only for ELK Stack VIPs"; #basic
auth_basic_user_file /etc/nginx/.htpasswd; #用户认证密码文件位置
allow 192.168.1.200; #允许192.168.1.200
allow 192.168.1.0/24; #允许192.168.1.0网段
allow 10.0.0.1; #允许10.0.0.1
allow 10.0.0.254; #允许10.0.0.254
deny all; #拒绝所有
location / { #定义反向代理,将访问自己的请求,都转发到kibana服务器
proxy_pass http://kibana/;
index index.html index.htm;
}
}
}
Set permissions
[root@localhost nginx]# chmod 400 /etc/nginx/.htpasswd
[root@localhost nginx]# chown nginx. /etc/nginx/.htpasswd
[root@localhost nginx]# cat /etc/nginx/.htpasswd
linuxea:$apr1$EGCdQ5wx$bD2CwXgww3y/xcCjVBcCD0
[root@localhost nginx]#
Add the user and password:
[root@localhost ~]# htpasswd -c -m /etc/nginx/.htpasswd linuxea
New password:
Re-type new password:
Adding password for user linuxea
[root@localhost ~]#
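A quick way to verify both the basic auth and the proxying, from an allowed IP and assuming the password set above (yourpassword is a placeholder — my addition):
curl -I http://192.168.1.200/    # should return 401 without credentials
curl -I -u linuxea:yourpassword http://192.168.1.200/    # should proxy through to Kibana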
It can now be accessed via 192.168.1.4; the logs collected here are the proxy nginx's own.
- kibana
After opening Kibana, click Settings → Add. The index name must follow the fixed YYYY.MM.DD date format; the available index names can be checked at http://IP:9200/_plugin/head/.
Example searches (IP, status codes):
status:200 AND hosts:192.168.1.200
status:200 OR status:400
status:[400 TO 499]
If you have several indices, matching names are suggested automatically as you type; then just click Create.
For additional logs, use + Add New.
Then open Discover and choose a suitable time range.
You can search on whichever fields give the results you want.
Click Visualize, choose the relevant content, and build a chart.
You can also make a selection in the Discover view and click Visualize from there.
For more Kibana charting examples, see kibana.logstash.es.
When one machine collects multiple logs, distinguish them with if conditions on type, the key, and the db:
input {
file {
type => "apache"
path => "/data/logs/access.log"
}
file {
type => "php-error.log"
path => "/data/logs/php-error.log"
}
}
output {
if [type] == "apache" {
redis {
host => "192.168.1.6"
port => "6379"
db => "1"
data_type => "list"
key => "access.log"
}
}
if [type] == "php-error.log" {
redis {
host => "192.168.1.6"
port => "6379"
db => "2"
data_type => "list"
key => "php-error.log"
}
}
}
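On the redis host, the indexer then needs one redis input per key/db pair; the type field set by the shipper travels with each event, so it can drive the index name — a sketch following the same conventions as the configs above, not from the original:
input {
redis {
host => "192.168.1.6"
port => "6379"
db => "1"
data_type => "list"
key => "access.log"
}
redis {
host => "192.168.1.6"
port => "6379"
db => "2"
data_type => "list"
key => "php-error.log"
}
}
output {
elasticsearch {
host => ["192.168.1.7:9200","192.168.1.8:9200"]
index => "%{type}-%{+YYYY.MM.dd}"
protocol => "http"
}
}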