Configuring and Installing EFK

Official manifest directory: cluster/addons/fluentd-elasticsearch

    $ ls *.yaml
    es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml efk-rbac.yaml

Like the other add-ons, the EFK services need an efk-rbac.yaml file that sets up a ServiceAccount named efk; a minimal sketch follows.
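The file itself is not reproduced in this post. Judging from the objects reported later by kubectl create (a serviceaccount and a clusterrolebinding, both named efk), a minimal sketch would look like the following; binding to cluster-admin is an assumption, and the real file may grant narrower permissions:

    # efk-rbac.yaml -- minimal sketch, not necessarily the author's original file
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: efk
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: efk
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin    # assumed; a narrower role may suffice
    subjects:
    - kind: ServiceAccount
      name: efk
      namespace: kube-system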

The already-modified YAML files are available at: EFK

Configure es-controller.yaml

    # cat es-controller.yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: elasticsearch-logging-v1
      namespace: kube-system
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      replicas: 2
      selector:
        k8s-app: elasticsearch-logging
        version: v1
      template:
        metadata:
          labels:
            k8s-app: elasticsearch-logging
            version: v1
            kubernetes.io/cluster-service: "true"
        spec:
          serviceAccountName: efk
          containers:
          - image: index.tenxcloud.com/docker_library/elasticsearch:2.2.0
            name: elasticsearch-logging
            resources:
              # need more cpu upon initialization, therefore burstable class
              limits:
                cpu: 1000m
              requests:
                cpu: 100m
            ports:
            - containerPort: 9200
              name: db
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
            volumeMounts:
            - name: es-persistent-storage
              mountPath: /data
            env:
            - name: "NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumes:
          - name: es-persistent-storage
            emptyDir: {}
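Note that es-persistent-storage is an emptyDir, so the indexed logs are lost whenever the Pod is rescheduled. For durable storage, one option is to replace it with a PersistentVolumeClaim; a minimal sketch, where the claim name es-storage is hypothetical and must reference a PVC you create separately:

    volumes:
    - name: es-persistent-storage
      persistentVolumeClaim:
        claimName: es-storage   # hypothetical claim; create the PVC yourself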

Configure es-service.yaml

No changes are needed; for reference, the upstream file is sketched below.
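The file is not shown in the original post. The upstream cluster/addons/fluentd-elasticsearch/es-service.yaml of that era looked roughly like this (treat it as a sketch rather than the exact file):

    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch-logging
      namespace: kube-system
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "Elasticsearch"
    spec:
      ports:
      - port: 9200
        protocol: TCP
        targetPort: db
      selector:
        k8s-app: elasticsearch-logging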

Configure fluentd-es-ds.yaml

    # cat fluentd-es-ds.yaml
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: fluentd-es-v1.22
      namespace: kube-system
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        version: v1.22
    spec:
      template:
        metadata:
          labels:
            k8s-app: fluentd-es
            kubernetes.io/cluster-service: "true"
            version: v1.22
          # This annotation ensures that fluentd does not get evicted if the node
          # supports critical pod annotation based priority scheme.
          # Note that this does not guarantee admission on the nodes (#40573).
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ''
        spec:
          serviceAccountName: fluentd
          containers:
          - name: fluentd-es
            image: index.tenxcloud.com/zhangshun/fluentd-elasticsearch:v1
            command:
            - '/bin/sh'
            - '-c'
            - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          nodeSelector:
            beta.kubernetes.io/fluentd-ds-ready: "true"
          tolerations:
          - key: "node.alpha.kubernetes.io/ismaster"
            effect: "NoSchedule"
          terminationGracePeriodSeconds: 30
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers

Configure kibana-controller.yaml

    # cat kibana-controller.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: kibana-logging
      namespace: kube-system
      labels:
        k8s-app: kibana-logging
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: kibana-logging
      template:
        metadata:
          labels:
            k8s-app: kibana-logging
        spec:
          serviceAccountName: efk
          containers:
          - name: kibana-logging
            image: index.tenxcloud.com/docker_library/kibana:4.5.1
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 100m
              requests:
                cpu: 100m
            env:
            - name: "ELASTICSEARCH_URL"
              value: "http://elasticsearch-logging:9200"
            - name: "KIBANA_BASE_URL"
              value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
            ports:
            - containerPort: 5601
              name: ui
              protocol: TCP

Label the Nodes

The DaemonSet fluentd-es-v1.22 sets the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so the label must be applied to every Node on which fluentd is expected to run:

    # kubectl get nodes
    NAME            STATUS    AGE       VERSION
    192.168.1.122   Ready     22h       v1.6.2
    192.168.1.123   Ready     22h       v1.6.2
    # kubectl label nodes 192.168.1.122 beta.kubernetes.io/fluentd-ds-ready=true
    node "192.168.1.122" labeled
    # kubectl label nodes 192.168.1.123 beta.kubernetes.io/fluentd-ds-ready=true
    node "192.168.1.123" labeled

Apply the same label to the remaining Nodes in the cluster.
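To confirm which Nodes carry the label (and will therefore receive a fluentd Pod), you can filter by label using a standard kubectl selector; the output should match the node listing above:

    $ kubectl get nodes -l beta.kubernetes.io/fluentd-ds-ready=true
    NAME            STATUS    AGE       VERSION
    192.168.1.122   Ready     22h       v1.6.2
    192.168.1.123   Ready     22h       v1.6.2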

Apply the definition files

    $ kubectl create -f .
    serviceaccount "efk" created
    clusterrolebinding "efk" created
    replicationcontroller "elasticsearch-logging-v1" created
    service "elasticsearch-logging" created
    daemonset "fluentd-es-v1.22" created
    deployment "kibana-logging" created
    service "kibana-logging" created

Check the results

    $ kubectl get deployment -n kube-system | grep kibana
    kibana-logging   1         1         1            1           2m
    $ kubectl get pods -n kube-system | grep -E 'elasticsearch|fluentd|kibana'
    elasticsearch-logging-v1-mlstp    1/1       Running   0          1m
    elasticsearch-logging-v1-nfbbf    1/1       Running   0          1m
    fluentd-es-v1.22-31sm0            1/1       Running   0          1m
    fluentd-es-v1.22-bpgqs            1/1       Running   0          1m
    fluentd-es-v1.22-qmn7h            1/1       Running   0          1m
    kibana-logging-1432287342-0gdng   1/1       Running   0          1m
    $ kubectl get service -n kube-system | grep -E 'elasticsearch|kibana'
    elasticsearch-logging   10.254.77.62    <none>    9200/TCP   2m
    kibana-logging          10.254.8.113    <none>    5601/TCP   2m
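Beyond confirming that the Pods are Running, you can query Elasticsearch's own health endpoint through the API server proxy. A quick sketch, assuming kubectl proxy is running locally on its default port 8001 (the proxy path style matches the 1.6-era URLs used elsewhere in this post):

    $ kubectl proxy &
    Starting to serve on 127.0.0.1:8001
    $ curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty'

A healthy cluster reports "status": "green"; "yellow" usually just means unassigned replica shards.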

The first time the kibana Pod starts, it takes quite a while (10-20 minutes) to optimize and cache the status page bundles. You can follow the Pod's log to watch the progress:

    $ kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f
    ELASTICSEARCH_URL=http://elasticsearch-logging:9200
    server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
    {"type":"log","@timestamp":"2017-07-26T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
    {"type":"log","@timestamp":"2017-07-26T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"}
    {"type":"log","@timestamp":"2017-07-26T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-07-26T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"}
    {"type":"log","@timestamp":"2017-07-26T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    {"type":"log","@timestamp":"2017-07-26T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

Access Kibana

  1. Access through kube-apiserver:

     Get the kibana-logging service URL:

     # kubectl cluster-info
     Kubernetes master is running at https://192.168.1.121:6443
     Elasticsearch is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
     Heapster is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/heapster
     Kibana is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
     KubeDNS is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
     kubernetes-dashboard is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
     monitoring-grafana is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
     monitoring-influxdb is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

     Then open the URL `https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana` in a browser.

  2. Access through kubectl proxy:

     Create the proxy:

     $ kubectl proxy --address='192.168.1.121' --port=8086 --accept-hosts='^*$'
     Starting to serve on 192.168.1.121:8086

     Then open the URL http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging in a browser.

On the Settings -> Indices page, create an index (roughly equivalent to a database in MySQL): check "Index contains time-based events", keep the default logstash-* pattern, and click Create.

Possible problems

If the Create button here is grayed out and the Time-field name dropdown has no options: fluentd reads the log files under /var/log/containers/, which are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your Docker configuration: --log-driver must be set to json-file, while the default on your system may be journald (see docker logging). A quick check-and-fix sketch follows.
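One way to check and switch the driver, assuming your distribution configures the daemon through /etc/docker/daemon.json (it may use systemd unit flags instead), is to set "log-driver": "json-file" in that file and restart the daemon:

    # docker info 2>/dev/null | grep 'Logging Driver'
    Logging Driver: journald
    # vi /etc/docker/daemon.json     # set: { "log-driver": "json-file" }
    # systemctl restart docker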

After the index is created, the logs aggregated in Elasticsearch can be browsed under Discover.
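If Discover shows no documents, you can verify from the Elasticsearch side that fluentd is shipping logs at all: list the indices and look for daily logstash-YYYY.MM.DD entries (again assuming kubectl proxy on its default port 8001):

    $ curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'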
