Configure and Install EFK

Official manifest directory: cluster/addons/fluentd-elasticsearch

$ ls *.yaml
es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml efk-rbac.yaml

Like the other add-ons, the EFK services also need an efk-rbac.yaml file that configures the serviceaccount as efk.

The already-modified yaml files can be found here: EFK
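
The full efk-rbac.yaml is not reproduced in this post. As a minimal sketch (not necessarily the author's exact file), it contains a ServiceAccount named efk and a ClusterRoleBinding for it; the binding to the built-in cluster-admin role below is an assumption, so tighten it to match your cluster's policy:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
# Assumed binding: grants the efk serviceaccount cluster-admin rights
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io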

Configure es-controller.yaml

# cat es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: index.tenxcloud.com/docker_library/elasticsearch:2.2.0
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}

Configure es-service.yaml

No changes are needed.
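
For reference, the es-service.yaml shipped with the addon looks roughly like the sketch below (reproduced from the upstream addon as I remember it, not from the author's files; verify against your checkout):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging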

Configure fluentd-es-ds.yaml

# cat fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: index.tenxcloud.com/zhangshun/fluentd-elasticsearch:v1
        command:
        - '/bin/sh'
        - '-c'
        - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
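
Since the container command above redirects td-agent's output to /var/log/fluentd.log, one quick sanity check once the DaemonSet is running is to read that file inside a fluentd pod. This is just a sketch and assumes the image ships a shell and tail; substitute one of your own pod names:

# Pick any fluentd-es pod name from `kubectl get pods -n kube-system`
$ kubectl exec -n kube-system fluentd-es-v1.22-31sm0 -- tail -n 20 /var/log/fluentd.log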

Configure kibana-controller.yaml

# cat kibana-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: index.tenxcloud.com/docker_library/kibana:4.5.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

Label the Nodes

The DaemonSet fluentd-es-v1.22 is defined with the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so this label must be set on every Node that is expected to run fluentd:

# kubectl get nodes
NAME            STATUS    AGE       VERSION
192.168.1.122   Ready     22h       v1.6.2
192.168.1.123   Ready     22h       v1.6.2
# kubectl label nodes 192.168.1.122 beta.kubernetes.io/fluentd-ds-ready=true
node "192.168.1.122" labeled
# kubectl label nodes 192.168.1.123 beta.kubernetes.io/fluentd-ds-ready=true
node "192.168.1.123" labeled

Apply the same label to any remaining nodes that should run fluentd.
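
If every node in the cluster should run fluentd, the label can also be applied in one loop instead of issuing kubectl label once per node (a sketch; skip nodes that should not collect logs):

# Apply the fluentd-ds-ready label to all nodes; --overwrite makes the loop idempotent
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label nodes "$node" beta.kubernetes.io/fluentd-ds-ready=true --overwrite
done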

Apply the definition files

$ kubectl create -f .
serviceaccount "efk" created
clusterrolebinding "efk" created
replicationcontroller "elasticsearch-logging-v1" created
service "elasticsearch-logging" created
daemonset "fluentd-es-v1.22" created
deployment "kibana-logging" created
service "kibana-logging" created

Check the results

$ kubectl get deployment -n kube-system|grep kibana
kibana-logging   1         1         1            1           2m
$ kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana'
elasticsearch-logging-v1-mlstp    1/1       Running   0          1m
elasticsearch-logging-v1-nfbbf    1/1       Running   0          1m
fluentd-es-v1.22-31sm0            1/1       Running   0          1m
fluentd-es-v1.22-bpgqs            1/1       Running   0          1m
fluentd-es-v1.22-qmn7h            1/1       Running   0          1m
kibana-logging-1432287342-0gdng   1/1       Running   0          1m
$ kubectl get service -n kube-system|grep -E 'elasticsearch|kibana'
elasticsearch-logging   10.254.77.62   <none>    9200/TCP   2m
kibana-logging          10.254.8.113   <none>    5601/TCP   2m

The first time the kibana Pod starts, it spends a fairly long time (10-20 minutes) optimizing and caching the status page bundles; you can tail the Pod's logs to watch the progress:

$ kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2017-07-26T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
{"type":"log","@timestamp":"2017-07-26T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"}
{"type":"log","@timestamp":"2017-07-26T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-07-26T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-07-26T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

Access kibana

  1. Access through kube-apiserver:

Get the kibana-logging service URL:

# kubectl cluster-info
Kubernetes master is running at https://192.168.1.121:6443
Elasticsearch is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxd
Browse to the URL: `https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana`

  2. Access through kubectl proxy:

    Create the proxy:

    $ kubectl proxy --address='192.168.1.121' --port=8086 --accept-hosts='^*$'
    Starting to serve on 192.168.1.121:8086

    Browse to the URL: http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging

On the Settings -> Indices page, create an index (roughly equivalent to a database in mysql): check Index contains time-based events, keep the default logstash-* pattern, and click Create.

Possible problems

If the Create button here is greyed out and there are no options in the Time-field name dropdown: fluentd reads the logs under /var/log/containers/, which are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your docker configuration: --log-driver must be set to json-file (the default may be journald). See the docker logging documentation.
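
A quick way to confirm whether fluentd has written anything into Elasticsearch is to list its indices through the service proxy path shown by kubectl cluster-info; the example below goes through the kubectl proxy started earlier (host and port as used in this article, adjust to your setup):

# Healthy log shipping shows one logstash-YYYY.MM.DD index per day
$ curl -s 'http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'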

After the index is created, the logs aggregated in Elasticsearch can be browsed under Discover.
