Deploying an Elasticsearch Cluster on K8s

1. Prerequisites

1.1 Create the elastic namespace

The namespace manifest is as follows:

    # elastic.namespace.yaml
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: elastic
    ---

Create the elastic namespace:

    $ kubectl apply -f elastic.namespace.yaml
    namespace/elastic created

1.2 Generate the X-Pack certificate files

Elasticsearch ships with a certificate-generation tool, elasticsearch-certutil. We can run it inside a temporary Docker container first, copy the resulting files out, and reuse them for every node later.

1.2.1 Create a temporary ES container

    $ docker run -it -d --name elastic-cret docker.elastic.co/elasticsearch/elasticsearch:7.8.0 /bin/bash
    62acfabc85f220941fcaf08bc783c4e305813045683290fe7b15f95e37e70cd0

1.2.2 Generate the key files inside the container

    $ docker exec -it elastic-cret /bin/bash
    $ ./bin/elasticsearch-certutil ca
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.
    The 'ca' mode generates a new 'certificate authority'
    This will create a new X.509 certificate and private key that can be used
    to sign certificate when running in 'cert' mode.
    Use the 'ca-dn' option if you wish to configure the 'distinguished name'
    of the certificate authority
    By default the 'ca' mode produces a single PKCS#12 output file which holds:
        * The CA certificate
        * The CA's private key
    If you elect to generate PEM format certificates (the -pem option), then the output will
    be a zip file containing individual files for the CA certificate and private key
    Please enter the desired output file [elastic-stack-ca.p12]:
    Enter password for elastic-stack-ca.p12 :
    $ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.
    ......
    Enter password for CA (elastic-stack-ca.p12) :
    Please enter the desired output file [elastic-certificates.p12]:
    Enter password for elastic-certificates.p12 :
    Certificates written to /usr/share/elasticsearch/elastic-certificates.p12
    This file should be properly secured as it contains the private key for
    your instance.
    This file is a self contained file and can be copied and used 'as is'
    For each Elastic product that you wish to configure, you should copy
    this '.p12' file to the relevant configuration directory
    and then follow the SSL configuration instructions in the product guide.
    For client applications, you may only need to copy the CA certificate and
    configure the client to trust this certificate.
    $ ls *.p12
    elastic-certificates.p12  elastic-stack-ca.p12

Note: none of the prompts above need to be filled in; just press Enter at each one.

1.2.3 Copy the certificate file out of the container for later use

    $ docker cp elastic-cret:/usr/share/elasticsearch/elastic-certificates.p12 .
    $ docker rm -f elastic-cret

2 Creating the Master Node

The master node controls the whole cluster. Its manifests are as follows:

2.1 Configure data persistence for the master node

    # Create the manifest: elasticsearch-master.pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-elasticsearch-master
      namespace: elastic
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: nfs-client   # the StorageClass to use
      resources:
        requests:
          storage: 10Gi

    # Create the PVC
    kubectl apply -f elasticsearch-master.pvc.yaml
    kubectl get pvc -n elastic
    NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    pvc-elasticsearch-master   Bound    pvc-9ef037b7-c4b2-11ea-8237-ac1f6bd6d98e   10Gi       RWX            nfs-client-ssd   38d

Copy the previously generated certificate file into a certs directory inside the newly created PVC (the ConfigMaps below expect it at /usr/share/elasticsearch/data/certs/), for example:

    $ mkdir ${MASTER-PVC_HOME}/certs
    $ cp elastic-certificates.p12 ${MASTER-PVC_HOME}/certs/

2.2 Create the master node ConfigMap manifest

A ConfigMap object holds the master nodes' configuration, making Elasticsearch easy to configure and enabling X-Pack security:

    # elasticsearch-master.configmap.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: elastic
      name: elasticsearch-master-config
      labels:
        app: elasticsearch
        role: master
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: true
          data: false
          ingest: false
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true
        xpack.security.transport.ssl.enabled: true
        xpack.security.transport.ssl.verification_mode: certificate
        xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
        xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    ---

2.3 Create the master node Service manifest

The master node only needs port 9300 for intra-cluster communication:

    # elasticsearch-master.service.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elastic
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      ports:
        - port: 9300
          name: transport
      selector:
        app: elasticsearch
        role: master
    ---

2.4 Create the master node Deployment manifest

The Deployment defines the master node's application Pod:

    # elasticsearch-master.deployment.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: elastic
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: elasticsearch
          role: master
      template:
        metadata:
          labels:
            app: elasticsearch
            role: master
        spec:
          containers:
            - name: elasticsearch-master
              image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
              env:
                - name: CLUSTER_NAME
                  value: elasticsearch
                - name: NODE_NAME
                  value: elasticsearch-master
                - name: NODE_LIST
                  value: elasticsearch-master,elasticsearch-data,elasticsearch-client
                - name: MASTER_NODES
                  value: elasticsearch-master
                - name: ES_JAVA_OPTS
                  value: "-Xms2048m -Xmx2048m"
                - name: ELASTIC_USERNAME
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: username
                - name: ELASTIC_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: password
              ports:
                - containerPort: 9300
                  name: transport
              volumeMounts:
                - name: config
                  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
                  readOnly: true
                  subPath: elasticsearch.yml
                - name: storage
                  mountPath: /usr/share/elasticsearch/data
          volumes:
            - name: config
              configMap:
                name: elasticsearch-master-config
            - name: storage
              persistentVolumeClaim:
                claimName: pvc-elasticsearch-master
    ---
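
The Deployment above reads ELASTIC_USERNAME and ELASTIC_PASSWORD from a Secret named elastic-credentials, which this walkthrough never creates explicitly. A minimal sketch of the missing object (the filename and the username/password values are placeholders; replace them with your own bootstrap credentials):

```yaml
# elastic-credentials.secret.yaml (hypothetical filename)
apiVersion: v1
kind: Secret
metadata:
  namespace: elastic
  name: elastic-credentials
type: Opaque
stringData:
  username: elastic
  password: changeme   # placeholder bootstrap password
```

Apply it before the master Deployment so the Pod's env lookups can resolve.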

2.5 Create the three master resources

    $ kubectl apply -f elasticsearch-master.configmap.yaml \
        -f elasticsearch-master.service.yaml \
        -f elasticsearch-master.deployment.yaml
    configmap/elasticsearch-master-config created
    service/elasticsearch-master created
    deployment.apps/elasticsearch-master created
    $ kubectl get pods -n elastic -l app=elasticsearch
    NAME                                    READY   STATUS    RESTARTS   AGE
    elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          23m

Once the Pod reaches the Running state, the master node has been installed successfully.

3 Installing the Elasticsearch Data Nodes

Next we install the ES data nodes, which hold the cluster's data and execute queries.

3.1 Create the data node ConfigMap manifest

As with the master node, a ConfigMap holds the data nodes' ES configuration:

    # elasticsearch-data.configmap.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: elastic
      name: elasticsearch-data-config
      labels:
        app: elasticsearch
        role: data
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: false
          data: true
          ingest: false
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true
        xpack.security.transport.ssl.enabled: true
        xpack.security.transport.ssl.verification_mode: certificate
        xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
        xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    ---

3.2 Create the data node Service manifest

Like the master, data nodes only need port 9300 to communicate with the other nodes:

    # elasticsearch-data.service.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elastic
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      ports:
        - port: 9300
          name: transport
      selector:
        app: elasticsearch
        role: data
    ---

3.3 Create the data node StatefulSet controller

The data nodes use a StatefulSet controller because there are several of them and each holds different data that must be stored separately; volumeClaimTemplates defines a dedicated storage volume per data node. The manifest is as follows:

    # elasticsearch-data.statefulset.yaml
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: elastic
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      serviceName: "elasticsearch-data"
      replicas: 2
      selector:
        matchLabels:
          app: elasticsearch
          role: data
      template:
        metadata:
          labels:
            app: elasticsearch
            role: data
        spec:
          containers:
            - name: elasticsearch-data
              image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
              env:
                - name: CLUSTER_NAME
                  value: elasticsearch
                - name: NODE_NAME
                  value: elasticsearch-data
                - name: NODE_LIST
                  value: elasticsearch-master,elasticsearch-data,elasticsearch-client
                - name: MASTER_NODES
                  value: elasticsearch-master
                - name: "ES_JAVA_OPTS"
                  value: "-Xms4096m -Xmx4096m"
                - name: ELASTIC_USERNAME
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: username
                - name: ELASTIC_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: password
              ports:
                - containerPort: 9300
                  name: transport
              volumeMounts:
                - name: config
                  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
                  readOnly: true
                  subPath: elasticsearch.yml
                - name: elasticsearch-data-persistent-storage
                  mountPath: /usr/share/elasticsearch/data
          volumes:
            - name: config
              configMap:
                name: elasticsearch-data-config
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data-persistent-storage
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: nfs-client-ssd
            resources:
              requests:
                storage: 500Gi
    ---

3.4 Create the data node resources

    $ kubectl apply -f elasticsearch-data.configmap.yaml \
        -f elasticsearch-data.service.yaml \
        -f elasticsearch-data.statefulset.yaml
    configmap/elasticsearch-data-config created
    service/elasticsearch-data created
    statefulset.apps/elasticsearch-data created

Copy the previously prepared ES certificate file into each data node's PVC directory, just as for the master node (one copy per data node):

    $ mkdir ${DATA-PVC_HOME}/certs
    $ cp elastic-certificates.p12 ${DATA-PVC_HOME}/certs/
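
Copying files into PVC directories by hand requires direct access to the storage backend. An alternative approach, not used in the original walkthrough but sketched here for reference, is to store the certificate in a Secret and mount it into the Pods; the Secret name and the fragment below are illustrative:

```yaml
# Create the Secret from the local file first:
#   kubectl create secret generic elastic-certificates -n elastic \
#     --from-file=elastic-certificates.p12
#
# Then, in the Pod spec, mount it where the ConfigMaps expect it:
volumeMounts:
  - name: elastic-certificates
    mountPath: /usr/share/elasticsearch/data/certs
    readOnly: true
volumes:
  - name: elastic-certificates
    secret:
      secretName: elastic-certificates
```

This keeps the certificate distribution declarative and avoids touching the NFS share directly.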

Once the Pods reach the Running state, the data nodes have started successfully:

    $ kubectl get pods -n elastic -l app=elasticsearch
    NAME                                    READY   STATUS    RESTARTS   AGE
    elasticsearch-data-0                    1/1     Running   0          47m
    elasticsearch-data-1                    1/1     Running   0          47m
    elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          100m

4 Installing the Elasticsearch Client Node

The client node exposes an HTTP interface for querying data and forwards incoming data to the data nodes.

4.1 Create the client node ConfigMap manifest

    # elasticsearch-client.configmap.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: elastic
      name: elasticsearch-client-config
      labels:
        app: elasticsearch
        role: client
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: false
          data: false
          ingest: true
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true
        xpack.security.transport.ssl.enabled: true
        xpack.security.transport.ssl.verification_mode: certificate
        xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
        xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    ---

4.2 Create the client node Service manifest

The client node exposes two ports: 9300 for communicating with the other cluster nodes, and 9200 for the HTTP API:

    # elasticsearch-client.service.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elastic
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      ports:
        - port: 9200
          name: client
          nodePort: 9200   # outside the default NodePort range (30000-32767); requires a widened --service-node-port-range on the apiserver
        - port: 9300
          name: transport
      selector:
        app: elasticsearch
        role: client
      type: NodePort
    ---

4.3 Create the client node Deployment manifest

    # elasticsearch-client.deployment.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: elastic
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      selector:
        matchLabels:
          app: elasticsearch
          role: client
      template:
        metadata:
          labels:
            app: elasticsearch
            role: client
        spec:
          containers:
            - name: elasticsearch-client
              image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
              env:
                - name: CLUSTER_NAME
                  value: elasticsearch
                - name: NODE_NAME
                  value: elasticsearch-client
                - name: NODE_LIST
                  value: elasticsearch-master,elasticsearch-data,elasticsearch-client
                - name: MASTER_NODES
                  value: elasticsearch-master
                - name: ES_JAVA_OPTS
                  value: "-Xms2048m -Xmx2048m"
                - name: ELASTIC_USERNAME
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: username
                - name: ELASTIC_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: elastic-credentials
                      key: password
              ports:
                - containerPort: 9200
                  name: client
                - containerPort: 9300
                  name: transport
              volumeMounts:
                - name: config
                  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
                  readOnly: true
                  subPath: elasticsearch.yml
                - name: storage
                  mountPath: /usr/share/elasticsearch/data
          volumes:
            - name: config
              configMap:
                name: elasticsearch-client-config
            - name: storage
              persistentVolumeClaim:
                claimName: pvc-elasticsearch-client
    ---
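
The Deployment above mounts a PVC named pvc-elasticsearch-client that is never created in this walkthrough. Assuming the same storage class and pattern as the master PVC, a sketch of the missing claim (filename and size are assumptions):

```yaml
# elasticsearch-client.pvc.yaml (hypothetical filename)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: elastic
  name: pvc-elasticsearch-client
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # assumed, matching the master PVC
  resources:
    requests:
      storage: 10Gi              # assumed size
```

Apply it before the client Deployment, otherwise the Pod will stay Pending waiting for the volume.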

4.4 Create the client node resources

    $ kubectl apply -f elasticsearch-client.configmap.yaml \
        -f elasticsearch-client.service.yaml \
        -f elasticsearch-client.deployment.yaml
    configmap/elasticsearch-client-config created
    service/elasticsearch-client created
    deployment.apps/elasticsearch-client created

Once all nodes have been deployed and are in the Running state, the installation has succeeded:

    $ kubectl get pods -n elastic -l app=elasticsearch
    NAME                                    READY   STATUS    RESTARTS   AGE
    elasticsearch-client-f4d4ff794-6gxpz    1/1     Running   0          23m
    elasticsearch-data-0                    1/1     Running   0          47m
    elasticsearch-data-1                    1/1     Running   0          47m
    elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          54m

While the client is being deployed, you can watch the cluster health transitions with:

    $ kubectl logs -f -n elastic \
    > $(kubectl get pods -n elastic | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
    > | grep "Cluster health status changed from"
    {"type": "server", "timestamp": "2020-08-18T06:35:20,859Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]]]).", "cluster.uuid": "Yy1ctnq7SjmRsuYfbJGSzA", "node.id": "z7vrjgYcTUiiB7tb0kXQ1Q" }
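
The `$(kubectl get pods … | awk '{print $1}')` sub-expression simply extracts the first matching Pod name. A quick sketch of that pipeline against a canned sample of `kubectl get pods` output (not from a live cluster):

```shell
# Sample text standing in for `kubectl get pods -n elastic` output
sample='NAME READY STATUS RESTARTS AGE
elasticsearch-master-7fc5cc8957-jfjmr 1/1 Running 0 23m
elasticsearch-data-0 1/1 Running 0 47m'

# grep keeps the master rows, sed keeps the first of them, awk prints column 1
printf '%s\n' "$sample" | grep elasticsearch-master | sed -n 1p | awk '{print $1}'
# → elasticsearch-master-7fc5cc8957-jfjmr
```

The same pattern is reused later for the client and Kibana Pods.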

5 Generating the Initial Passwords

Because we enabled the X-Pack security module to protect the cluster, initial passwords are required. Use the bin/elasticsearch-setup-passwords command inside the client node container to generate them:

    $ kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
        -n elastic \
        -- bin/elasticsearch-setup-passwords auto -b
    Changed password for user apm_system
    PASSWORD apm_system = 5wg8JbmKOKiLMNty90l1
    Changed password for user kibana_system
    PASSWORD kibana_system = 1bT0U5RbPX1e9zGNlWFL
    Changed password for user kibana
    PASSWORD kibana = 1bT0U5RbPX1e9zGNlWFL
    Changed password for user logstash_system
    PASSWORD logstash_system = 1ihEyA5yAPahNf9GuRJ9
    Changed password for user beats_system
    PASSWORD beats_system = WEWDpPndnGvgKY7ad0T9
    Changed password for user remote_monitoring_user
    PASSWORD remote_monitoring_user = MOCszTmzLmEXQrPIOW4T
    Changed password for user elastic
    PASSWORD elastic = bbkrgVrsE3UAfs2708aO

Once generated, store the elastic user's password in a Kubernetes Secret object:

    $ kubectl create secret generic elasticsearch-pw-elastic \
        -n elastic \
        --from-literal password=bbkrgVrsE3UAfs2708aO
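
`kubectl create secret` stores the value base64-encoded under the Secret's `data` field. A quick sketch of that encoding round-trip (plain shell, no cluster needed):

```shell
pw="bbkrgVrsE3UAfs2708aO"   # the elastic password generated above

# What would appear under `data.password` in the Secret object
enc=$(printf '%s' "$pw" | base64)
echo "$enc"

# base64 is encoding, not encryption: anyone who can read the Secret can decode it
printf '%s' "$enc" | base64 -d
```

This is why access to Secrets should be restricted with RBAC rather than treated as safe storage.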

6 Creating the Kibana Application

With the Elasticsearch cluster installed, we install Kibana as a visualization tool for Elasticsearch data.

6.1 Create the Kibana ConfigMap manifest

Create a ConfigMap resource for Kibana's configuration file, which defines the Elasticsearch address, user, and password:

    # kibana.configmap.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: elastic
      name: kibana-config
      labels:
        app: kibana
    data:
      kibana.yml: |-
        server.host: 0.0.0.0
        elasticsearch:
          hosts: ${ELASTICSEARCH_HOSTS}
          username: ${ELASTICSEARCH_USER}
          password: ${ELASTICSEARCH_PASSWORD}
    ---

6.2 Create the Kibana Service manifest

    # kibana.service.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elastic
      name: kibana
      labels:
        app: kibana
    spec:
      type: NodePort   # required for nodePort to take effect
      ports:
        - port: 5601
          name: webinterface
          nodePort: 5601   # outside the default NodePort range (30000-32767); requires a widened --service-node-port-range on the apiserver
      selector:
        app: kibana
    ---

6.3 Create the Kibana Deployment manifest

    # kibana.deployment.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: elastic
      name: kibana
      labels:
        app: kibana
    spec:
      selector:
        matchLabels:
          app: kibana
      template:
        metadata:
          labels:
            app: kibana
        spec:
          containers:
            - name: kibana
              image: docker.elastic.co/kibana/kibana:7.8.0
              ports:
                - containerPort: 5601
                  name: webinterface
              env:
                - name: ELASTICSEARCH_HOSTS
                  value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
                - name: ELASTICSEARCH_USER
                  value: "elastic"
                - name: ELASTICSEARCH_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: elasticsearch-pw-elastic
                      key: password
                - name: "I18N_LOCALE"
                  value: "zh-CN"
              volumeMounts:
                - name: config
                  mountPath: /usr/share/kibana/config/kibana.yml
                  readOnly: true
                  subPath: kibana.yml
          volumes:
            - name: config
              configMap:
                name: kibana-config
    ---

6.4 Create the Kibana Ingress manifest

Here we use an Ingress to expose the Kibana service so it can be reached by domain name:

    # kibana.ingress.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: kibana
      namespace: elastic
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
        - host: kibana.demo.com
          http:
            paths:
              - backend:
                  serviceName: kibana
                  servicePort: 5601
                path: /

6.5 Create the Kibana resources from the manifests

    $ kubectl apply -f kibana.configmap.yaml \
        -f kibana.service.yaml \
        -f kibana.deployment.yaml \
        -f kibana.ingress.yaml
    configmap/kibana-config created
    service/kibana created
    deployment.apps/kibana created
    ingress/kibana created

After deployment, check Kibana's startup status via its logs:

    $ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep kibana | sed -n 1p | awk '{print $1}') \
    > | grep "Status changed from yellow to green"
    {"type":"log","@timestamp":"2020-08-18T06:35:29Z","tags":["status","plugin:elasticsearch@7.8.0","info"],"pid":8,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
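
Kibana's log lines are structured JSON, so individual fields can be pulled out instead of matching whole messages. A small sketch against the status line above (plain shell; with `jq` available this would be a one-liner):

```shell
line='{"type":"log","@timestamp":"2020-08-18T06:35:29Z","tags":["status","plugin:elasticsearch@7.8.0","info"],"pid":8,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}'

# Extract the "state" field (a crude regex match, adequate for this fixed format)
printf '%s\n' "$line" | grep -o '"state":"[a-z]*"' | cut -d'"' -f4
# → green
```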

Once the status turns green, we can open Kibana in a browser via the Ingress domain:

    $ kubectl get ingress -n elastic
    NAME     HOSTS            ADDRESS   PORTS   AGE
    kibana   kibana.demo.cn             80      40d

6.6 Log in to Kibana and configure it

Log in with the elastic user and the generated password stored in the Secret object created above:

Then create a superuser for day-to-day access: go to Stack Management > Users > Create user and fill in the details.

Once created, you can manage the cluster with the custom admin user.
