In the previous post we looked at the HPA resource on k8s; for a refresher see: https://www.cnblogs.com/qiuhom-1874/p/14293237.html. Today we'll talk about Helm, the package manager for k8s.

  What is Helm?

  If we compare k8s resource manifests to RPM packages on CentOS, then Helm plays the role of yum. Simply put, Helm is a package manager that makes deploying applications on k8s easy: to deploy an application, we can use Helm for a one-command installation, without even having to write resource manifests ourselves. Under the hood, Helm takes the resource manifest templates an application needs, fills in their values through a template engine, and sends the rendered manifests to k8s to be applied, thereby deploying the application. An application deployed this way is called a release; that is, a release is the result of rendering template manifests through the template engine and deploying them to k8s.

  Where do the template files come from? Just as RPM packages come from a repository, these templates come from a repository too. A Helm repository stores the packaged template manifests of various applications; such a package is called a chart, so a Helm repository is also called a chart repository. A chart mainly consists of a Chart.yaml file, a README.md, a templates directory, and a values.yaml file. Chart.yaml holds the chart's metadata; README.md describes how to use and deploy the chart; the templates directory holds the resource template files. One important file inside templates is NOTES.txt, itself a template: after the chart is installed successfully it is rendered and printed to the user to explain how to use the release. values.yaml holds the default values for the chart's templates; if the user specifies nothing, the templates are rendered with the values from values.yaml. Precisely because a chart contains only template manifests, users can customize a chart simply by supplying their own values.yaml file.
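As a minimal sketch of that layout (the field names are the ones Helm itself uses; the concrete values here are hypothetical), a Chart.yaml and values.yaml pair might look like this, with files under templates/ referencing the values as `{{ .Values.replicaCount }}` and so on:

```yaml
# Chart.yaml -- the chart's metadata
apiVersion: v2          # chart API version used by Helm 3
name: myapp             # chart name (hypothetical)
description: A demo chart
version: 0.1.0          # version of the chart itself
appVersion: "1.0"       # version of the packaged application

---
# values.yaml -- default values consumed by the templates
replicaCount: 2
image:
  repository: nginx
  tag: "1.19"
```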

  Installing the Helm tool

  Deploying Helm 2 was somewhat involved: early Helm 2 consisted of two components, the helm command-line tool and a tiller pod running on k8s. Tiller was the server side; it received the charts sent by helm and then contacted the apiserver to deploy them. Helm is now at version 3.x, which simplifies this model: helm no longer depends on the tiller component and talks to the apiserver directly to deploy charts to k8s. The prerequisite for Helm 3 is that the host can reach the k8s apiserver and has the kubectl command available, i.e. the host must already be able to manage the cluster with kubectl. The reason is that helm reuses kubectl's authentication information (the kubeconfig) when talking to the apiserver.

  I. Installing Helm 3

  Download the binary package

  [root@master01 ~]# mkdir helm
  [root@master01 ~]# cd helm/
  [root@master01 helm]# wget https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz
  --2021-01-20 21:10:33-- https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz
  Resolving get.helm.sh (get.helm.sh)... 152.195.19.97, 2606:2800:11f:1cb7:261b:1f9c:2074:3c
  Connecting to get.helm.sh (get.helm.sh)|152.195.19.97|:443... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 12327633 (12M) [application/x-tar]
  Saving to: helm-v3.5.0-linux-amd64.tar.gz

  100%[==================================================================================================================================>] 12,327,633 9.17MB/s in 1.3s

  2021-01-20 21:10:35 (9.17 MB/s) - helm-v3.5.0-linux-amd64.tar.gz saved [12327633/12327633]
  [root@master01 helm]# ls
  helm-v3.5.0-linux-amd64.tar.gz
  [root@master01 helm]#

  Extract the package

  [root@master01 helm]# tar xf helm-v3.5.0-linux-amd64.tar.gz
  [root@master01 helm]# ls
  helm-v3.5.0-linux-amd64.tar.gz linux-amd64
  [root@master01 helm]# cd linux-amd64/
  [root@master01 linux-amd64]# ls
  helm LICENSE README.md
  [root@master01 linux-amd64]#

  Copy the helm binary into a directory on the PATH

  [root@master01 linux-amd64]# cp helm /usr/bin/
  [root@master01 linux-amd64]# hel        # pressing Tab twice lists the completion candidates
  helm  help
  [root@master01 linux-amd64]# hel

  II. Using Helm

  Check the Helm version

  [root@master01 ~]# helm version
  version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}
  [root@master01 ~]#

  View the Helm help

  [root@master01 ~]# helm -h
  The Kubernetes package manager

  Common actions for Helm:

  - helm search: search for charts
  - helm pull: download a chart to your local directory to view
  - helm install: upload the chart to Kubernetes
  - helm list: list releases of charts

  Environment variables:

  | Name | Description |
  |------------------------------------|-----------------------------------------------------------------------------------|
  | $HELM_CACHE_HOME | set an alternative location for storing cached files. |
  | $HELM_CONFIG_HOME | set an alternative location for storing Helm configuration. |
  | $HELM_DATA_HOME | set an alternative location for storing Helm data. |
  | $HELM_DEBUG | indicate whether or not Helm is running in Debug mode |
  | $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory, postgres |
  | $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use. |
  | $HELM_MAX_HISTORY | set the maximum number of helm release history. |
  | $HELM_NAMESPACE | set the namespace used for the helm operations. |
  | $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
  | $HELM_PLUGINS | set the path to the plugins directory |
  | $HELM_REGISTRY_CONFIG | set the path to the registry config file. |
  | $HELM_REPOSITORY_CACHE | set the path to the repository cache directory |
  | $HELM_REPOSITORY_CONFIG | set the path to the repositories file. |
  | $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
  | $HELM_KUBEAPISERVER | set the Kubernetes API Server Endpoint for authentication |
  | $HELM_KUBECAFILE | set the Kubernetes certificate authority file. |
  | $HELM_KUBEASGROUPS | set the Groups to use for impersonation using a comma-separated list. |
  | $HELM_KUBEASUSER | set the Username to impersonate for the operation. |
  | $HELM_KUBECONTEXT | set the name of the kubeconfig context. |
  | $HELM_KUBETOKEN | set the Bearer KubeToken used for authentication. |

  Helm stores cache, configuration, and data based on the following configuration order:

  - If a HELM_*_HOME environment variable is set, it will be used
  - Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
  - When no other location is set a default location will be used based on the operating system

  By default, the default directories depend on the Operating System. The defaults are listed below:

  | Operating System | Cache Path | Configuration Path | Data Path |
  |------------------|---------------------------|--------------------------------|-------------------------|
  | Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
  | macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
  | Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |

  Usage:
  helm [command]

  Available Commands:
  completion generate autocompletion scripts for the specified shell
  create create a new chart with the given name
  dependency manage a chart's dependencies
  env helm client environment information
  get download extended information of a named release
  help Help about any command
  history fetch release history
  install install a chart
  lint examine a chart for possible issues
  list list releases
  package package a chart directory into a chart archive
  plugin install, list, or uninstall Helm plugins
  pull download a chart from a repository and (optionally) unpack it in local directory
  repo add, list, remove, update, and index chart repositories
  rollback roll back a release to a previous revision
  search search for a keyword in charts
  show show information of a chart
  status display the status of the named release
  template locally render templates
  test run tests for a release
  uninstall uninstall a release
  upgrade upgrade a release
  verify verify that a chart at the given path has been signed and is valid
  version print the client version information

  Flags:
  --debug enable verbose output
  -h, --help help for helm
  --kube-apiserver string the address and the port for the Kubernetes API server
  --kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
  --kube-as-user string username to impersonate for the operation
  --kube-ca-file string the certificate authority file for the Kubernetes API server connection
  --kube-context string name of the kubeconfig context to use
  --kube-token string bearer token used for authentication
  --kubeconfig string path to the kubeconfig file
  -n, --namespace string namespace scope for this request
  --registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
  --repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
  --repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")

  Use "helm [command] --help" for more information about a command.
  [root@master01 ~]#

  List the repositories

  [root@master01 ~]# helm repo -h

  This command consists of multiple subcommands to interact with chart repositories.

  It can be used to add, remove, list, and index chart repositories.

  Usage:
  helm repo [command]

  Available Commands:
  add add a chart repository
  index generate an index file given a directory containing packaged charts
  list list chart repositories
  remove remove one or more chart repositories
  update update information of available charts locally from chart repositories

  Flags:
  -h, --help help for repo

  Global Flags:
  --debug enable verbose output
  --kube-apiserver string the address and the port for the Kubernetes API server
  --kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
  --kube-as-user string username to impersonate for the operation
  --kube-ca-file string the certificate authority file for the Kubernetes API server connection
  --kube-context string name of the kubeconfig context to use
  --kube-token string bearer token used for authentication
  --kubeconfig string path to the kubeconfig file
  -n, --namespace string namespace scope for this request
  --registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
  --repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
  --repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")

  Use "helm repo [command] --help" for more information about a command.
  [root@master01 ~]# helm repo list
  Error: no repositories to show
  [root@master01 ~]#

  Note: the error tells us that no repositories have been added yet.

  Add a repository

  [root@master01 ~]# helm repo add stable https://charts.helm.sh/stable
  "stable" has been added to your repositories
  [root@master01 ~]# helm repo list
  NAME URL
  stable https://charts.helm.sh/stable
  [root@master01 ~]#

  Note: adding a repository requires connectivity to it. If your server cannot reach the repository directly, use a proxy by setting the HTTPS_PROXY environment variable in the shell to a working proxy address, e.g. HTTPS_PROXY="http://www.ik8s.io:10080". When using the proxy variables, also list the addresses that should bypass the proxy, e.g. NO_PROXY="127.0.0.0/8,192.168.0.0/24" for local addresses; otherwise even our kubectl traffic would be sent through the given proxy.
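A quick sketch of setting those variables (the proxy address is the hypothetical one from the note above; use your own):

```shell
# Set a proxy for helm traffic, in the current shell only.
export HTTPS_PROXY="http://www.ik8s.io:10080"
# Addresses that must NOT go through the proxy: loopback and the local LAN.
export NO_PROXY="127.0.0.0/8,192.168.0.0/24"
echo "proxy=$HTTPS_PROXY no_proxy=$NO_PROXY"
```

Remember to unset the variables (or open a new shell) once the repository has been added, so that traffic to the local apiserver is not proxied.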

  Search for charts

  Tip: `helm search repo` with no keyword lists all charts in the repositories you have added.

  Search the repository for redis

  [root@master01 ~]# helm search repo redis
  NAME CHART VERSION APP VERSION DESCRIPTION
  stable/prometheus-redis-exporter 3.5.1 1.3.4 DEPRECATED Prometheus exporter for Redis metrics
  stable/redis 10.5.7 5.0.7 DEPRECATED Open source, advanced key-value stor...
  stable/redis-ha 4.4.6 5.0.6 DEPRECATED - Highly available Kubernetes implem...
  stable/sensu 0.2.5 0.28 DEPRECATED Sensu monitoring framework backed by...
  [root@master01 ~]#

  Install stable/redis

  [root@master01 ~]# helm install redis-demo stable/redis
  WARNING: This chart is deprecated
  NAME: redis-demo
  LAST DEPLOYED: Wed Jan 20 22:27:18 2021
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  This Helm chart is deprecated

  Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

  The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

  ```bash
  $ helm repo add bitnami https://charts.bitnami.com/bitnami
  $ helm install my-release bitnami/<chart> # Helm 3
  $ helm install --name my-release bitnami/<chart> # Helm 2
  ```

  To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can execute
  ```bash $ helm
  repo add bitnami https://charts.bitnami.com/bitnami
  $ helm upgrade my-release bitnami/<chart>
  ```

  Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

  ** Please be patient while the chart is being deployed **
  Redis can be accessed via port 6379 on the following DNS names from within your cluster:

  redis-demo-master.default.svc.cluster.local for read/write operations
  redis-demo-slave.default.svc.cluster.local for read-only operations

  To get your password run:

  export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)

  To connect to your Redis server:

  1. Run a Redis pod that you can use as a client:

  kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash

  2. Connect using the Redis CLI:
  redis-cli -h redis-demo-master -a $REDIS_PASSWORD
  redis-cli -h redis-demo-slave -a $REDIS_PASSWORD

  To connect to your database from outside the cluster execute the following commands:

  kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
  redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
  [root@master01 ~]#

  List the releases

  [root@master01 ~]# helm list
  NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
  redis-demo default 1 2021-01-20 22:27:18.635916075 +0800 CST deployed redis-10.5.7 5.0.7
  [root@master01 ~]#

  Verification: use kubectl to check whether redis-demo is actually running on the cluster

  [root@master01 ~]# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  myapp-779867bcfc-57zw7 1/1 Running 1 2d7h
  myapp-779867bcfc-657qr 1/1 Running 1 2d7h
  podinfo-56874dc7f8-5rb9q 1/1 Running 1 2d2h
  podinfo-56874dc7f8-t6jgn 1/1 Running 1 2d2h
  [root@master01 ~]# kubectl get svc
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
  myapp-svc NodePort 10.111.14.219 <none> 80:31154/TCP 2d7h
  podinfo NodePort 10.111.10.211 <none> 9898:31198/TCP 2d2h
  redis-demo-headless ClusterIP None <none> 6379/TCP 18m
  redis-demo-master ClusterIP 10.100.228.32 <none> 6379/TCP 18m
  redis-demo-slave ClusterIP 10.109.46.121 <none> 6379/TCP 18m
  [root@master01 ~]# kubectl get sts
  NAME READY AGE
  redis-demo-master 0/1 18m
  redis-demo-slave 0/2 18m
  [root@master01 ~]#

  Note: the pod list shows no redis pods running, even though the corresponding svc and sts objects were created normally.

  Find out why the pods were not created

  [root@master01 ~]# kubectl describe sts/redis-demo-master|grep -A 10 Events
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedCreate 14m (x12 over 14m) statefulset-controller create Pod redis-demo-master-0 in StatefulSet redis-demo-master failed error: failed to create PVC redis-data-redis-demo-master-0: persistentvolumeclaims "redis-data-redis-demo-master-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
  Warning FailedCreate 3m40s (x18 over 14m) statefulset-controller create Claim redis-data-redis-demo-master-0 for Pod redis-demo-master-0 in StatefulSet redis-demo-master failed error: persistentvolumeclaims "redis-data-redis-demo-master-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
  [root@master01 ~]# kubectl describe sts/redis-demo-slave|grep -A 10 Events
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedCreate 14m (x12 over 14m) statefulset-controller create Pod redis-demo-slave-0 in StatefulSet redis-demo-slave failed error: failed to create PVC redis-data-redis-demo-slave-0: persistentvolumeclaims "redis-data-redis-demo-slave-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
  Warning FailedCreate 3m41s (x18 over 14m) statefulset-controller create Claim redis-data-redis-demo-slave-0 for Pod redis-demo-slave-0 in StatefulSet redis-demo-slave failed error: persistentvolumeclaims "redis-data-redis-demo-slave-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
  [root@master01 ~]#

  Note: the events show that creating the PVCs was forbidden because it would exceed the quota-storage-demo ResourceQuota.

  Inspect the ResourceQuota admission rules

  [root@master01 ~]# kubectl get resourcequota
  NAME AGE REQUEST LIMIT
  quota-storage-demo 19d persistentvolumeclaims: 0/5, requests.ephemeral-storage: 0/1Gi, requests.storage: 0/5Gi limits.ephemeral-storage: 0/2Gi
  [root@master01 ~]# kubectl describe resourcequota quota-storage-demo
  Name: quota-storage-demo
  Namespace: default
  Resource Used Hard
  -------- ---- ----
  limits.ephemeral-storage 0 2Gi
  persistentvolumeclaims 0 5
  requests.ephemeral-storage 0 1Gi
  requests.storage 0 5Gi
  [root@master01 ~]#

  Note: the ResourceQuota caps the total PVC storage requests (requests.storage) in the namespace at 5Gi. The redis chart requests 8Gi per PVC, which violates the admission rule, so the PVC creation was rejected and the pods could not be created.
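Reconstructed from the describe output above (a sketch; the original manifest may have differed in layout), the quota would look roughly like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-storage-demo
  namespace: default
spec:
  hard:
    persistentvolumeclaims: "5"     # at most 5 PVCs in the namespace
    requests.storage: 5Gi           # total PVC storage requests capped at 5Gi
    requests.ephemeral-storage: 1Gi
    limits.ephemeral-storage: 2Gi
```

A single 8Gi PVC request from the redis chart already exceeds the 5Gi cap, which is exactly the "exceeded quota" error in the events.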

  Uninstall redis-demo

  [root@master01 ~]# helm list
  NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
  redis-demo default 1 2021-01-20 22:27:18.635916075 +0800 CST deployed redis-10.5.7 5.0.7
  [root@master01 ~]# helm uninstall redis-demo
  release "redis-demo" uninstalled
  [root@master01 ~]# helm list
  NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
  [root@master01 ~]#

  Delete the ResourceQuota

  [root@master01 ~]# kubectl get resourcequota
  NAME AGE REQUEST LIMIT
  quota-storage-demo 19d persistentvolumeclaims: 0/5, requests.ephemeral-storage: 0/1Gi, requests.storage: 0/5Gi limits.ephemeral-storage: 0/2Gi
  [root@master01 ~]# kubectl delete resourcequota/quota-storage-demo
  resourcequota "quota-storage-demo" deleted
  [root@master01 ~]# kubectl get resourcequota
  No resources found in default namespace.
  [root@master01 ~]#

  Check whether enough PVs are available

  [root@master01 ~]# kubectl get pv
  NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  nfs-pv-v1 5Gi RWO,ROX,RWX Retain Bound kube-system/alertmanager 3d22h
  nfs-pv-v2 5Gi RWO,ROX,RWX Retain Bound kube-system/prometheus-data-prometheus-0 3d22h
  nfs-pv-v3 5Gi RWO,ROX,RWX Retain Available 3d22h
  [root@master01 ~]#

  Note: one PV is still unused, but at only 5Gi it is too small for redis, which requests 8Gi per PVC.

  Create PVs

  [root@master01 ~]# cat pv-demo.yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv-v4
  spec:
    capacity:
      storage: 10Gi
    volumeMode: Filesystem
    accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
    persistentVolumeReclaimPolicy: Retain
    mountOptions:
    - hard
    - nfsvers=4.1
    nfs:
      path: /data/v4
      server: 192.168.0.99
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv-v5
  spec:
    capacity:
      storage: 10Gi
    volumeMode: Filesystem
    accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
    persistentVolumeReclaimPolicy: Retain
    mountOptions:
    - hard
    - nfsvers=4.1
    nfs:
      path: /data/v5
      server: 192.168.0.99
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv-v6
  spec:
    capacity:
      storage: 10Gi
    volumeMode: Filesystem
    accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
    persistentVolumeReclaimPolicy: Retain
    mountOptions:
    - hard
    - nfsvers=4.1
    nfs:
      path: /data/v6
      server: 192.168.0.99
  [root@master01 ~]# kubectl apply -f pv-demo.yaml
  persistentvolume/nfs-pv-v4 created
  persistentvolume/nfs-pv-v5 created
  persistentvolume/nfs-pv-v6 created
  [root@master01 ~]# kubectl get pv
  NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  nfs-pv-v1 5Gi RWO,ROX,RWX Retain Bound kube-system/alertmanager 3d22h
  nfs-pv-v2 5Gi RWO,ROX,RWX Retain Bound kube-system/prometheus-data-prometheus-0 3d22h
  nfs-pv-v3 5Gi RWO,ROX,RWX Retain Available 3d22h
  nfs-pv-v4 10Gi RWO,ROX,RWX Retain Available 3s
  nfs-pv-v5 10Gi RWO,ROX,RWX Retain Available 3s
  nfs-pv-v6 10Gi RWO,ROX,RWX Retain Available 3s
  [root@master01 ~]#

  Reinstall redis

  [root@master01 ~]# helm install redis-demo stable/redis
  WARNING: This chart is deprecated
  NAME: redis-demo
  LAST DEPLOYED: Wed Jan 20 22:54:30 2021
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  This Helm chart is deprecated

  Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

  The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

  ```bash
  $ helm repo add bitnami https://charts.bitnami.com/bitnami
  $ helm install my-release bitnami/<chart> # Helm 3
  $ helm install --name my-release bitnami/<chart> # Helm 2
  ```

  To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can execute
  ```bash $ helm
  repo add bitnami https://charts.bitnami.com/bitnami
  $ helm upgrade my-release bitnami/<chart>
  ```

  Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

  ** Please be patient while the chart is being deployed **
  Redis can be accessed via port 6379 on the following DNS names from within your cluster:

  redis-demo-master.default.svc.cluster.local for read/write operations
  redis-demo-slave.default.svc.cluster.local for read-only operations

  To get your password run:

  export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)

  To connect to your Redis server:

  1. Run a Redis pod that you can use as a client:

  kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash

  2. Connect using the Redis CLI:
  redis-cli -h redis-demo-master -a $REDIS_PASSWORD
  redis-cli -h redis-demo-slave -a $REDIS_PASSWORD

  To connect to your database from outside the cluster execute the following commands:

  kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
  redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
  [root@master01 ~]#

  Check again with kubectl whether the pods are running

  [root@master01 ~]# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  myapp-779867bcfc-57zw7 1/1 Running 1 2d7h
  myapp-779867bcfc-657qr 1/1 Running 1 2d7h
  podinfo-56874dc7f8-5rb9q 1/1 Running 1 2d2h
  podinfo-56874dc7f8-t6jgn 1/1 Running 1 2d2h
  redis-demo-master-0 0/1 CrashLoopBackOff 4 2m33s
  redis-demo-slave-0 0/1 CrashLoopBackOff 4 2m33s
  [root@master01 ~]# kubectl get pvc
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  redis-data-redis-demo-master-0 Bound nfs-pv-v4 10Gi RWO,ROX,RWX 2m39s
  redis-data-redis-demo-slave-0 Bound nfs-pv-v6 10Gi RWO,ROX,RWX 2m39s
  [root@master01 ~]#

  Note: this time the PVCs were created and bound automatically, but the pods still fail to start (CrashLoopBackOff).

  Describe the pods

  [root@master01 ~]# kubectl describe pod/redis-demo-master-0|grep -A 10 Events
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 6m53s default-scheduler Successfully assigned default/redis-demo-master-0 to node01.k8s.org
  Normal Pulling 6m51s kubelet Pulling image "docker.io/bitnami/redis:5.0.7-debian-10-r32"
  Normal Pulled 6m33s kubelet Successfully pulled image "docker.io/bitnami/redis:5.0.7-debian-10-r32" in 18.056248477s
  Normal Started 5m47s (x4 over 6m33s) kubelet Started container redis-demo
  Normal Created 5m1s (x5 over 6m33s) kubelet Created container redis-demo
  Normal Pulled 5m1s (x4 over 6m32s) kubelet Container image "docker.io/bitnami/redis:5.0.7-debian-10-r32" already present on machine
  Warning BackOff 100s (x28 over 6m31s) kubelet Back-off restarting failed container
  [root@master01 ~]# kubectl describe pod/redis-demo-slave-0|grep -A 10 Events
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedScheduling 6m58s (x2 over 6m58s) default-scheduler 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
  Normal Scheduled 6m55s default-scheduler Successfully assigned default/redis-demo-slave-0 to node01.k8s.org
  Normal Pulling 6m55s kubelet Pulling image "docker.io/bitnami/redis:5.0.7-debian-10-r32"
  Normal Pulled 6m37s kubelet Successfully pulled image "docker.io/bitnami/redis:5.0.7-debian-10-r32" in 17.603521415s
  Normal Created 5m12s (x5 over 6m37s) kubelet Created container redis-demo
  Normal Started 5m12s (x5 over 6m37s) kubelet Started container redis-demo
  Normal Pulled 5m12s (x4 over 6m36s) kubelet Container image "docker.io/bitnami/redis:5.0.7-debian-10-r32" already present on machine
  Warning BackOff 106s (x27 over 6m35s) kubelet Back-off restarting failed container
  [root@master01 ~]#

  Note: the pod details do not point to any specific error either; the containers simply keep crashing (most likely something related to how the image starts up). Although the pods did not come up, the experiment shows that Helm successfully rendered the chart and submitted it to k8s, so Helm itself did its job.

  Uninstall redis-demo and try a different chart

  Note: searching the stable repository for redis shows that all of its redis charts are deprecated.

  Remove the repository and add a new one

  [root@master01 ~]# helm repo list
  NAME URL
  stable https://charts.helm.sh/stable
  [root@master01 ~]# helm repo remove stable
  "stable" has been removed from your repositories
  [root@master01 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
  "bitnami" has been added to your repositories
  [root@master01 ~]# helm repo list
  NAME URL
  bitnami https://charts.bitnami.com/bitnami
  [root@master01 ~]#

  Search for a redis chart

  [root@master01 ~]# helm search repo redis
  NAME CHART VERSION APP VERSION DESCRIPTION
  bitnami/redis 12.6.2 6.0.10 Open source, advanced key-value store. It is of...
  bitnami/redis-cluster 4.2.6 6.0.10 Open source, advanced key-value store. It is of...
  [root@master01 ~]#

  Install bitnami/redis

  [root@master01 ~]# helm install redis-demo bitnami/redis
  NAME: redis-demo
  LAST DEPLOYED: Thu Jan 21 01:58:18 2021
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  ** Please be patient while the chart is being deployed **
  Redis can be accessed via port 6379 on the following DNS names from within your cluster:

  redis-demo-master.default.svc.cluster.local for read/write operations
  redis-demo-slave.default.svc.cluster.local for read-only operations

  To get your password run:

  export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)

  To connect to your Redis(TM) server:

  1. Run a Redis(TM) pod that you can use as a client:
  kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis:6.0.10-debian-10-r1 -- bash

  2. Connect using the Redis(TM) CLI:
  redis-cli -h redis-demo-master -a $REDIS_PASSWORD
  redis-cli -h redis-demo-slave -a $REDIS_PASSWORD

  To connect to your database from outside the cluster execute the following commands:

  kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
  redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
  [root@master01 ~]#

  Check the pod status

  Note: the pod logs report that the append-only file cannot be opened for writing, which means the mounted backend storage is not writable.

  Grant write permission on the backend storage
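The fix has to happen on the NFS server (192.168.0.99), whose exported directories /data/v4, /data/v5 and /data/v6 back the PVs defined earlier. A sketch of the permission change, demonstrated on a scratch directory since the real paths only exist on that server:

```shell
# On the NFS server the command would be: chmod o+w /data/v4 /data/v5 /data/v6
# Demonstrate the same change on a temporary directory instead.
d=$(mktemp -d)     # mktemp -d creates the directory with mode 700
chmod o+w "$d"     # grant "others" write permission -> mode 702
stat -c '%a' "$d"  # prints 702
rmdir "$d"
```

Depending on how the export is used, tightening ownership (chown to the container's UID) is usually preferable to a world-writable directory.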

  Note: even after adding write permission, the pods did not recover on their own. Let's delete them and see whether they run normally once recreated.

  [root@master01 ~]# kubectl delete pod --all
  pod "redis-demo-master-0" deleted
  pod "redis-demo-slave-0" deleted
  [root@master01 ~]# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  redis-demo-master-0 0/1 ContainerCreating 0 3s
  redis-demo-slave-0 0/1 Running 0 3s
  [root@master01 ~]# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  redis-demo-master-0 0/1 Running 0 5s
  redis-demo-slave-0 0/1 Running 0 5s
  [root@master01 ~]# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  redis-demo-master-0 1/1 Running 0 62s
  redis-demo-slave-0 1/1 Running 0 62s
  redis-demo-slave-1 0/1 CrashLoopBackOff 2 26s
  [root@master01 ~]#

  Note: after deleting the pods, the recreated ones run normally; one slave still fails, presumably because its backend directory also lacks write permission.

  Grant write permission on the remaining backend directory

  Note: once the directory is writable, the pod starts normally.

  Inspect the redis master-slave replication cluster

  Note: on the master node we can see the information for both slave nodes.

  Verification: write data on the master and check whether the slaves replicate it

  Note: data written on the master is replicated to the slaves and can be read there, so the master-slave replication cluster is working correctly.

  Update the repositories

  [root@master01 ~]# helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "bitnami" chart repository
  Update Complete. Happy Helming!⎈
  [root@master01 ~]#

  Tip: it is a good habit to update the repositories before each new deployment, and only then install the application.

  Deploy an application with custom values

  Note: the --set option passes custom values into the chart, overriding the corresponding values in its template defaults. The command above sets the redis password to admin123.com and disables persistent storage for both master and slave (not recommended in production). --set is fine for setting a few simple parameters; for anything more complex, put the overrides in a custom values.yaml file and pass it with the --values (-f) option.
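A sketch of the equivalent values file for the overrides described above (the parameter names are assumed from the bitnami/redis chart of that era; verify them against `helm show values bitnami/redis` before use):

```yaml
# custom-values.yaml -- equivalent to:
#   helm install redis-demo bitnami/redis \
#     --set password=admin123.com \
#     --set master.persistence.enabled=false \
#     --set slave.persistence.enabled=false
password: admin123.com
master:
  persistence:
    enabled: false      # no PVC for the master (not for production)
slave:
  persistence:
    enabled: false      # no PVC for the slaves (not for production)
```

It would then be applied with `helm install redis-demo bitnami/redis -f custom-values.yaml`.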
