What is a resource object?

  A resource object is an instance of a resource created on k8s: the result of passing parameters, via a YAML file or the command line, to one of the resource API endpoints exposed by the apiserver (think of them as resource templates) and instantiating it. For example, to create a pod on k8s we interact with the apiserver and hand it the parameters for that pod; the apiserver uses those parameters to instantiate the pod's definition and stores it in etcd, the scheduler then schedules it, and the kubelet on the chosen node creates the pod. In short, a resource object is the result of instantiating an API resource on k8s.

  The k8s logical runtime environment

  Note: as shown in the figure above, k8s logically aggregates the underlying resources of multiple nodes (memory, CPU, storage, network and so on) into one large resource pool that k8s schedules and orchestrates as a whole. Users simply create resources on k8s; the created resources are scheduled by k8s itself, so users need not care which node a resource ends up running on, nor about the resource situation of individual nodes.

  k8s design philosophy: layered architecture

  k8s design philosophy: API design principles

  1. All APIs should be declarative;

  2. API objects should complement each other and be composable, i.e. "high cohesion, loose coupling";

  3. High-level APIs should be designed around operational intent;

  4. Low-level APIs should be designed according to the control needs of the high-level APIs;

  5. Avoid simple wrappers; there should be no hidden internal mechanism that cannot be known explicitly through the external API;

  6. The complexity of API operations should be proportional to the number of objects;

  7. The state of an API object must not depend on the state of network connections;

  8. Avoid making operational mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard;

  Introduction to the kubernetes API

  Note: APIs on k8s are divided into built-in APIs and custom APIs. Built-in APIs are the endpoints a cluster ships with once it is deployed; custom APIs, also called custom resources (CRD, Custom Resource Definition), are APIs extended after the cluster is deployed, for example by installing additional components.

  How the apiserver organizes resources

  Note: as shown in the figure above, the apiserver organizes the different resources logically by category, group and version;
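
  A quick way to see this grouping from the command line is to ask the apiserver itself; these are standard kubectl subcommands, and the exact output depends on the cluster version:

  kubectl api-versions       # list group/version pairs, e.g. v1 (the core group), apps/v1, batch/v1
  kubectl api-resources      # list resource kinds with their group, short names and whether they are namespaced
  kubectl explain pod.spec   # show the schema of a built-in resource field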

  Overview of the k8s built-in resource objects

  Commands for operating on k8s resource objects
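
  As a rough, non-exhaustive reference, day-to-day operations on resource objects come down to a handful of kubectl verbs (names in angle brackets are placeholders):

  kubectl get pods -n default -o wide       # list resources
  kubectl describe pod <pod-name>           # show details and recent events of a resource
  kubectl apply -f demo.yaml                # create or update resources declaratively from a manifest
  kubectl delete -f demo.yaml               # delete the resources defined in a manifest
  kubectl edit deployment <name>            # edit a live object
  kubectl logs <pod-name> -c <container>    # view a container's logs
  kubectl exec -it <pod-name> -- /bin/sh    # open a shell inside a container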

  Required fields in a resource manifest

  1. apiVersion - the version of the Kubernetes API used to create the object;

  2. kind - the type of object to create;

  3. metadata - data that uniquely identifies the object, including a name and an optional namespace (if omitted, the default namespace is used);

  4. spec - the detailed specification of the resource object (labels, container name, image, port mappings and so on), i.e. the state the user expects the resource to be in;

  5. status - generated automatically by k8s once the object has been created; this field is maintained by k8s itself, does not need to be defined by the user, and reflects the actual state of the resource;
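
  Putting the required fields together, a minimal manifest skeleton looks roughly like this (name and image are placeholders):

  apiVersion: v1          # API group/version of the resource
  kind: Pod               # resource type
  metadata:
    name: demo            # unique within the namespace
    namespace: default    # optional, defaults to "default"
  spec:                   # desired state; the schema depends on the kind
    containers:
    - name: demo
      image: nginx:latest
  # status is filled in by the control plane, not by the user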

  The Pod resource object

  Note: the pod is the smallest unit of control in k8s. A pod can run one or more containers, and all containers of a pod are scheduled together, i.e. the smallest schedulable unit is the pod. A pod's lifecycle is short: it does not heal itself and is an entity that is thrown away once used. We normally create and manage pods through a Controller; pods created by a controller recover automatically, i.e. if a pod no longer matches the user's desired state, the controller restarts or recreates pods so that their state and number always match what the user defined.

  Example: manifest for a standalone (unmanaged) pod

  apiVersion: v1
  kind: Pod
  metadata:
    name: "pod-demo"
    namespace: default
    labels:
      app: "pod-demo"
  spec:
    containers:
    - name: pod-demo
      image: "harbor.ik8s.cc/baseimages/nginx:v1"
      ports:
      - containerPort: 80
        name: http
      volumeMounts:
      - name: localtime
        mountPath: /etc/localtime
    volumes:
    - name: localtime
      hostPath:
        path: /usr/share/zoneinfo/Asia/Shanghai

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  net-test1 1/1 Running 2 (4m35s ago) 7d7h
  test 1/1 Running 4 (4m34s ago) 13d
  test1 1/1 Running 4 (4m35s ago) 13d
  test2 1/1 Running 4 (4m35s ago) 13d
  root@k8s-deploy:/yaml# kubectl apply -f pod-demo.yaml
  pod/pod-demo created
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  net-test1 1/1 Running 2 (4m47s ago) 7d7h
  pod-demo 0/1 ContainerCreating 0 4s
  test 1/1 Running 4 (4m46s ago) 13d
  test1 1/1 Running 4 (4m47s ago) 13d
  test2 1/1 Running 4 (4m47s ago) 13d
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  net-test1 1/1 Running 2 (4m57s ago) 7d7h
  pod-demo 1/1 Running 0 14s
  test 1/1 Running 4 (4m56s ago) 13d
  test1 1/1 Running 4 (4m57s ago) 13d
  test2 1/1 Running 4 (4m57s ago) 13d
  root@k8s-deploy:/yaml#

  Note: this pod is merely running on k8s; no controller is watching it, so if it is deleted or fails it will not be recovered automatically;

  The Job controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html

  Example Job manifest

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: job-demo
    namespace: default
    labels:
      app: job-demo
  spec:
    template:
      metadata:
        name: job-demo
        labels:
          app: job-demo
      spec:
        containers:
        - name: job-demo-container
          image: harbor.ik8s.cc/baseimages/centos7:2023
          command: ["/bin/sh"]
          args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
          volumeMounts:
          - mountPath: /cache
            name: cache-volume
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: cache-volume
          hostPath:
            path: /tmp/jobdata
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        restartPolicy: Never

  Note: a Job must define restartPolicy, and for a Job's pod template only Never or OnFailure are allowed;

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  net-test1 1/1 Running 3 (48m ago) 7d10h
  pod-demo 1/1 Running 1 (48m ago) 3h32m
  test 1/1 Running 5 (48m ago) 14d
  test1 1/1 Running 5 (48m ago) 14d
  test2 1/1 Running 5 (48m ago) 14d
  root@k8s-deploy:/yaml# kubectl apply -f job-demo.yaml
  job.batch/job-demo created
  root@k8s-deploy:/yaml# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  job-demo-z8gmb 0/1 Completed 0 26s 10.200.211.130 192.168.0.34 <none> <none>
  net-test1 1/1 Running 3 (49m ago) 7d10h 10.200.211.191 192.168.0.34 <none> <none>
  pod-demo 1/1 Running 1 (49m ago) 3h32m 10.200.155.138 192.168.0.36 <none> <none>
  test 1/1 Running 5 (49m ago) 14d 10.200.209.6 192.168.0.35 <none> <none>
  test1 1/1 Running 5 (49m ago) 14d 10.200.209.8 192.168.0.35 <none> <none>
  test2 1/1 Running 5 (49m ago) 14d 10.200.211.177 192.168.0.34 <none> <none>
  root@k8s-deploy:/yaml#

  Verify: does the /tmp/jobdata directory on 192.168.0.34 contain the data written by the job?

  root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
  data.log
  root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
  data init job at 2023-05-06_23-31-32
  root@k8s-deploy:/yaml#

  Note: the /tmp/jobdata/ directory on the host where the job ran contains the data written by the job, which shows the job we defined completed successfully;

  Defining a job with multiple completions

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: job-multi-demo
    namespace: default
    labels:
      app: job-multi-demo
  spec:
    completions: 5
    template:
      metadata:
        name: job-multi-demo
        labels:
          app: job-multi-demo
      spec:
        containers:
        - name: job-multi-demo-container
          image: harbor.ik8s.cc/baseimages/centos7:2023
          command: ["/bin/sh"]
          args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
          volumeMounts:
          - mountPath: /cache
            name: cache-volume
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: cache-volume
          hostPath:
            path: /tmp/jobdata
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        restartPolicy: Never

  Note: completions under spec specifies how many pods (i.e. how many successful runs) the job needs;

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  job-demo-z8gmb 0/1 Completed 0 24m
  net-test1 1/1 Running 3 (73m ago) 7d11h
  pod-demo 1/1 Running 1 (73m ago) 3h56m
  test 1/1 Running 5 (73m ago) 14d
  test1 1/1 Running 5 (73m ago) 14d
  test2 1/1 Running 5 (73m ago) 14d
  root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo.yaml
  job.batch/job-multi-demo created
  root@k8s-deploy:/yaml# kubectl get job
  NAME COMPLETIONS DURATION AGE
  job-demo 1/1 5s 24m
  job-multi-demo 1/5 10s 10s
  root@k8s-deploy:/yaml# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  job-demo-z8gmb 0/1 Completed 0 24m 10.200.211.130 192.168.0.34 <none> <none>
  job-multi-demo-5vp9w 0/1 Completed 0 12s 10.200.211.144 192.168.0.34 <none> <none>
  job-multi-demo-frstg 0/1 Completed 0 22s 10.200.211.186 192.168.0.34 <none> <none>
  job-multi-demo-gd44s 0/1 Completed 0 17s 10.200.211.184 192.168.0.34 <none> <none>
  job-multi-demo-kfm79 0/1 ContainerCreating 0 2s <none> 192.168.0.34 <none> <none>
  job-multi-demo-nsmpg 0/1 Completed 0 7s 10.200.211.135 192.168.0.34 <none> <none>
  net-test1 1/1 Running 3 (73m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none>
  pod-demo 1/1 Running 1 (73m ago) 3h56m 10.200.155.138 192.168.0.36 <none> <none>
  test 1/1 Running 5 (73m ago) 14d 10.200.209.6 192.168.0.35 <none> <none>
  test1 1/1 Running 5 (73m ago) 14d 10.200.209.8 192.168.0.35 <none> <none>
  test2 1/1 Running 5 (73m ago) 14d 10.200.211.177 192.168.0.34 <none> <none>
  root@k8s-deploy:/yaml# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  job-demo-z8gmb 0/1 Completed 0 24m 10.200.211.130 192.168.0.34 <none> <none>
  job-multi-demo-5vp9w 0/1 Completed 0 33s 10.200.211.144 192.168.0.34 <none> <none>
  job-multi-demo-frstg 0/1 Completed 0 43s 10.200.211.186 192.168.0.34 <none> <none>
  job-multi-demo-gd44s 0/1 Completed 0 38s 10.200.211.184 192.168.0.34 <none> <none>
  job-multi-demo-kfm79 0/1 Completed 0 23s 10.200.211.140 192.168.0.34 <none> <none>
  job-multi-demo-nsmpg 0/1 Completed 0 28s 10.200.211.135 192.168.0.34 <none> <none>
  net-test1 1/1 Running 3 (73m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none>
  pod-demo 1/1 Running 1 (73m ago) 3h57m 10.200.155.138 192.168.0.36 <none> <none>
  test 1/1 Running 5 (73m ago) 14d 10.200.209.6 192.168.0.35 <none> <none>
  test1 1/1 Running 5 (73m ago) 14d 10.200.209.8 192.168.0.35 <none> <none>
  test2 1/1 Running 5 (73m ago) 14d 10.200.211.177 192.168.0.34 <none> <none>
  root@k8s-deploy:/yaml#

  Verify: has the job produced data in /tmp/jobdata/ on 192.168.0.34?

  root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
  data.log
  root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
  data init job at 2023-05-06_23-31-32
  data init job at 2023-05-06_23-55-44
  data init job at 2023-05-06_23-55-49
  data init job at 2023-05-06_23-55-54
  data init job at 2023-05-06_23-55-59
  data init job at 2023-05-06_23-56-04
  root@k8s-deploy:/yaml#

  Defining the degree of parallelism

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: job-multi-demo2
    namespace: default
    labels:
      app: job-multi-demo2
  spec:
    completions: 6
    parallelism: 2
    template:
      metadata:
        name: job-multi-demo2
        labels:
          app: job-multi-demo2
      spec:
        containers:
        - name: job-multi-demo2-container
          image: harbor.ik8s.cc/baseimages/centos7:2023
          command: ["/bin/sh"]
          args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
          volumeMounts:
          - mountPath: /cache
            name: cache-volume
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: cache-volume
          hostPath:
            path: /tmp/jobdata
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        restartPolicy: Never

  Note: the parallelism field under spec specifies the degree of parallelism, i.e. how many pods run at the same time; the manifest above runs 2 pods at a time, with 6 pods needed in total;

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl get jobs
  NAME COMPLETIONS DURATION AGE
  job-demo 1/1 5s 34m
  job-multi-demo 5/5 25s 9m56s
  root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo2.yaml
  job.batch/job-multi-demo2 created
  root@k8s-deploy:/yaml# kubectl get jobs
  NAME COMPLETIONS DURATION AGE
  job-demo 1/1 5s 34m
  job-multi-demo 5/5 25s 10m
  job-multi-demo2 0/6 2s 3s
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  job-demo-z8gmb 0/1 Completed 0 34m
  job-multi-demo-5vp9w 0/1 Completed 0 10m
  job-multi-demo-frstg 0/1 Completed 0 10m
  job-multi-demo-gd44s 0/1 Completed 0 10m
  job-multi-demo-kfm79 0/1 Completed 0 9m59s
  job-multi-demo-nsmpg 0/1 Completed 0 10m
  job-multi-demo2-7ppxc 0/1 Completed 0 10s
  job-multi-demo2-mxbtq 0/1 Completed 0 5s
  job-multi-demo2-rhgh7 0/1 Completed 0 4s
  job-multi-demo2-th6ff 0/1 Completed 0 11s
  net-test1 1/1 Running 3 (83m ago) 7d11h
  pod-demo 1/1 Running 1 (83m ago) 4h6m
  test 1/1 Running 5 (83m ago) 14d
  test1 1/1 Running 5 (83m ago) 14d
  test2 1/1 Running 5 (83m ago) 14d
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  job-demo-z8gmb 0/1 Completed 0 34m
  job-multi-demo-5vp9w 0/1 Completed 0 10m
  job-multi-demo-frstg 0/1 Completed 0 10m
  job-multi-demo-gd44s 0/1 Completed 0 10m
  job-multi-demo-kfm79 0/1 Completed 0 10m
  job-multi-demo-nsmpg 0/1 Completed 0 10m
  job-multi-demo2-7ppxc 0/1 Completed 0 16s
  job-multi-demo2-8bh22 0/1 Completed 0 6s
  job-multi-demo2-dbjqw 0/1 Completed 0 6s
  job-multi-demo2-mxbtq 0/1 Completed 0 11s
  job-multi-demo2-rhgh7 0/1 Completed 0 10s
  job-multi-demo2-th6ff 0/1 Completed 0 17s
  net-test1 1/1 Running 3 (83m ago) 7d11h
  pod-demo 1/1 Running 1 (83m ago) 4h6m
  test 1/1 Running 5 (83m ago) 14d
  test1 1/1 Running 5 (83m ago) 14d
  test2 1/1 Running 5 (83m ago) 14d
  root@k8s-deploy:/yaml# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  job-demo-z8gmb 0/1 Completed 0 35m 10.200.211.130 192.168.0.34 <none> <none>
  job-multi-demo-5vp9w 0/1 Completed 0 10m 10.200.211.144 192.168.0.34 <none> <none>
  job-multi-demo-frstg 0/1 Completed 0 11m 10.200.211.186 192.168.0.34 <none> <none>
  job-multi-demo-gd44s 0/1 Completed 0 11m 10.200.211.184 192.168.0.34 <none> <none>
  job-multi-demo-kfm79 0/1 Completed 0 10m 10.200.211.140 192.168.0.34 <none> <none>
  job-multi-demo-nsmpg 0/1 Completed 0 10m 10.200.211.135 192.168.0.34 <none> <none>
  job-multi-demo2-7ppxc 0/1 Completed 0 57s 10.200.211.145 192.168.0.34 <none> <none>
  job-multi-demo2-8bh22 0/1 Completed 0 47s 10.200.211.148 192.168.0.34 <none> <none>
  job-multi-demo2-dbjqw 0/1 Completed 0 47s 10.200.211.141 192.168.0.34 <none> <none>
  job-multi-demo2-mxbtq 0/1 Completed 0 52s 10.200.211.152 192.168.0.34 <none> <none>
  job-multi-demo2-rhgh7 0/1 Completed 0 51s 10.200.211.143 192.168.0.34 <none> <none>
  job-multi-demo2-th6ff 0/1 Completed 0 58s 10.200.211.136 192.168.0.34 <none> <none>
  net-test1 1/1 Running 3 (84m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none>
  pod-demo 1/1 Running 1 (84m ago) 4h7m 10.200.155.138 192.168.0.36 <none> <none>
  test 1/1 Running 5 (84m ago) 14d 10.200.209.6 192.168.0.35 <none> <none>
  test1 1/1 Running 5 (84m ago) 14d 10.200.209.8 192.168.0.35 <none> <none>
  test2 1/1 Running 5 (84m ago) 14d 10.200.211.177 192.168.0.34 <none> <none>
  root@k8s-deploy:/yaml#

  Verify the job data

  Note: the timestamps appended later come almost entirely in pairs, which shows that two pods were executing the job's task at the same time; see the sketch below.
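
  The check itself is the same as before; assuming the job pods again landed on 192.168.0.34, something like the following shows the paired entries:

  ssh 192.168.0.34 "tail /tmp/jobdata/data.log"   # expect pairs of lines with (nearly) identical timestamps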

  The CronJob controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html

  Example: defining a CronJob

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: job-cronjob
    namespace: default
  spec:
    schedule: "*/1 * * * *"
    jobTemplate:
      spec:
        parallelism: 2
        template:
          spec:
            containers:
            - name: job-cronjob-container
              image: harbor.ik8s.cc/baseimages/centos7:2023
              command: ["/bin/sh"]
              args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/cronjob-data.log"]
              volumeMounts:
              - mountPath: /cache
                name: cache-volume
              - name: localtime
                mountPath: /etc/localtime
            volumes:
            - name: cache-volume
              hostPath:
                path: /tmp/jobdata
            - name: localtime
              hostPath:
                path: /usr/share/zoneinfo/Asia/Shanghai
            restartPolicy: OnFailure

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl apply -f cronjob-demo.yaml
  cronjob.batch/job-cronjob created
  root@k8s-deploy:/yaml# kubectl get cronjob
  NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
  job-cronjob */1 * * * * False 0 <none> 6s
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  job-cronjob-28056516-njddz 0/1 Completed 0 12s
  job-cronjob-28056516-wgbns 0/1 Completed 0 12s
  job-demo-z8gmb 0/1 Completed 0 64m
  job-multi-demo-5vp9w 0/1 Completed 0 40m
  job-multi-demo-frstg 0/1 Completed 0 40m
  job-multi-demo-gd44s 0/1 Completed 0 40m
  job-multi-demo-kfm79 0/1 Completed 0 40m
  job-multi-demo-nsmpg 0/1 Completed 0 40m
  job-multi-demo2-7ppxc 0/1 Completed 0 30m
  job-multi-demo2-8bh22 0/1 Completed 0 30m
  job-multi-demo2-dbjqw 0/1 Completed 0 30m
  job-multi-demo2-mxbtq 0/1 Completed 0 30m
  job-multi-demo2-rhgh7 0/1 Completed 0 30m
  job-multi-demo2-th6ff 0/1 Completed 0 30m
  net-test1 1/1 Running 3 (113m ago) 7d11h
  pod-demo 1/1 Running 1 (113m ago) 4h36m
  test 1/1 Running 5 (113m ago) 14d
  test1 1/1 Running 5 (113m ago) 14d
  test2 1/1 Running 5 (113m ago) 14d
  root@k8s-deploy:/yaml# kubectl get cronjob
  NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
  job-cronjob */1 * * * * False 0 12s 108s
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  job-cronjob-28056516-njddz 0/1 Completed 0 77s
  job-cronjob-28056516-wgbns 0/1 Completed 0 77s
  job-cronjob-28056517-d6n9h 0/1 Completed 0 17s
  job-cronjob-28056517-krsvb 0/1 Completed 0 17s
  job-demo-z8gmb 0/1 Completed 0 65m
  job-multi-demo-5vp9w 0/1 Completed 0 41m
  job-multi-demo-frstg 0/1 Completed 0 41m
  job-multi-demo-gd44s 0/1 Completed 0 41m
  job-multi-demo-kfm79 0/1 Completed 0 41m
  job-multi-demo-nsmpg 0/1 Completed 0 41m
  job-multi-demo2-7ppxc 0/1 Completed 0 31m
  job-multi-demo2-8bh22 0/1 Completed 0 31m
  job-multi-demo2-dbjqw 0/1 Completed 0 31m
  job-multi-demo2-mxbtq 0/1 Completed 0 31m
  job-multi-demo2-rhgh7 0/1 Completed 0 31m
  job-multi-demo2-th6ff 0/1 Completed 0 31m
  net-test1 1/1 Running 3 (114m ago) 7d11h
  pod-demo 1/1 Running 1 (114m ago) 4h38m
  test 1/1 Running 5 (114m ago) 14d
  test1 1/1 Running 5 (114m ago) 14d
  test2 1/1 Running 5 (114m ago) 14d
  root@k8s-deploy:/yaml#

  Note: by default a CronJob keeps only the most recent job history (3 successful and 1 failed job);
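
  These limits can be tuned in the CronJob spec if you want to keep more or fewer finished jobs; a sketch of the relevant fields:

  spec:
    schedule: "*/1 * * * *"
    successfulJobsHistoryLimit: 3   # how many successfully finished Jobs to keep (default 3)
    failedJobsHistoryLimit: 1       # how many failed Jobs to keep (default 1)
    concurrencyPolicy: Allow        # Allow / Forbid / Replace when the previous run is still active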

  Verify: check the data written by the scheduled runs

  Note: the timestamps show that every minute two pods execute the task once; the log can be inspected as sketched below.
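
  As with the jobs above, the log on the host path can be read directly; assuming the pods again ran on 192.168.0.34:

  ssh 192.168.0.34 "cat /tmp/jobdata/cronjob-data.log"   # expect two new entries per minute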

  RC/RS replica controllers

  RC (Replication Controller): a replica controller whose job is to keep the number of pod replicas equal to the number the user expects at all times. It is the first-generation pod replica controller and only supports equality-based selectors (= and !=);

  RC controller example

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: ng-rc
  spec:
    replicas: 2
    selector:
      app: ng-rc-80
    template:
      metadata:
        labels:
          app: ng-rc-80
      spec:
        containers:
        - name: pod-demo
          image: "harbor.ik8s.cc/baseimages/nginx:v1"
          ports:
          - containerPort: 80
            name: http
          volumeMounts:
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  test 1/1 Running 6 (11m ago) 16d
  test1 1/1 Running 6 (11m ago) 16d
  test2 1/1 Running 6 (11m ago) 16d
  root@k8s-deploy:/yaml# kubectl apply -f rc-demo.yaml
  replicationcontroller/ng-rc created
  root@k8s-deploy:/yaml# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  ng-rc-l7xmp 1/1 Running 0 10s 10.200.211.136 192.168.0.34 <none> <none>
  ng-rc-wl5d6 1/1 Running 0 9s 10.200.155.185 192.168.0.36 <none> <none>
  test 1/1 Running 6 (11m ago) 16d 10.200.209.24 192.168.0.35 <none> <none>
  test1 1/1 Running 6 (11m ago) 16d 10.200.209.31 192.168.0.35 <none> <none>
  test2 1/1 Running 6 (11m ago) 16d 10.200.211.186 192.168.0.34 <none> <none>
  root@k8s-deploy:/yaml# kubectl get rc
  NAME DESIRED CURRENT READY AGE
  ng-rc 2 2 2 25s
  root@k8s-deploy:/yaml#

  Verify: change a pod's label and see whether a replacement pod is created

  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 2m32s app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 2m31s app=ng-rc-80
  test 1/1 Running 6 (13m ago) 16d run=test
  test1 1/1 Running 6 (13m ago) 16d run=test1
  test2 1/1 Running 6 (13m ago) 16d run=test2
  root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=nginx-demo --overwrite
  pod/ng-rc-l7xmp labeled
  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 4m42s app=nginx-demo
  ng-rc-rxvd4 0/1 ContainerCreating 0 3s app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 4m41s app=ng-rc-80
  test 1/1 Running 6 (15m ago) 16d run=test
  test1 1/1 Running 6 (15m ago) 16d run=test1
  test2 1/1 Running 6 (15m ago) 16d run=test2
  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 4m52s app=nginx-demo
  ng-rc-rxvd4 1/1 Running 0 13s app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 4m51s app=ng-rc-80
  test 1/1 Running 6 (16m ago) 16d run=test
  test1 1/1 Running 6 (16m ago) 16d run=test1
  test2 1/1 Running 6 (16m ago) 16d run=test2
  root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=ng-rc-80 --overwrite
  pod/ng-rc-l7xmp labeled
  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 5m27s app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 5m26s app=ng-rc-80
  test 1/1 Running 6 (16m ago) 16d run=test
  test1 1/1 Running 6 (16m ago) 16d run=test1
  test2 1/1 Running 6 (16m ago) 16d run=test2
  root@k8s-deploy:/yaml#

  Note: the RC controller uses its label selector to decide which pods it manages; if a pod's label changes, the RC creates or deletes pods so that the number of matching pods always stays equal to the number the user defined;

  RS (ReplicaSet): a replica controller similar to RC; it also uses a label selector to match the pods it manages, and if the labels or the number of matching pods differ from what the user expects, it creates or deletes pods until the actual count matches the desired count. The only difference from RC is that, besides exact-match selectors (= and !=), RS also supports set-based matching (in, notin); it is the second-generation pod replica controller in k8s;

  RS controller example

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: rs-demo
    labels:
      app: rs-demo
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: rs-demo
    template:
      metadata:
        labels:
          app: rs-demo
      spec:
        containers:
        - name: rs-demo
          image: "harbor.ik8s.cc/baseimages/nginx:v1"
          ports:
          - name: web
            containerPort: 80
            protocol: TCP
          env:
          - name: NGX_VERSION
            value: 1.16.1
          volumeMounts:
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

  Apply the manifest
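
  A sketch of the apply step (the file name rs-demo.yaml is assumed):

  kubectl apply -f rs-demo.yaml   # creates replicaset.apps/rs-demo
  kubectl get rs rs-demo          # DESIRED/CURRENT/READY should all reach 3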

  Verify: change a pod's label and see how the set of pods changes

  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 18m app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 18m app=ng-rc-80
  rs-demo-nzmqs 1/1 Running 0 71s app=rs-demo
  rs-demo-v2vb6 1/1 Running 0 71s app=rs-demo
  rs-demo-x27fv 1/1 Running 0 71s app=rs-demo
  test 1/1 Running 6 (29m ago) 16d run=test
  test1 1/1 Running 6 (29m ago) 16d run=test1
  test2 1/1 Running 6 (29m ago) 16d run=test2
  root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=nginx --overwrite
  pod/rs-demo-nzmqs labeled
  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 19m app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 19m app=ng-rc-80
  rs-demo-bdfdd 1/1 Running 0 4s app=rs-demo
  rs-demo-nzmqs 1/1 Running 0 103s app=nginx
  rs-demo-v2vb6 1/1 Running 0 103s app=rs-demo
  rs-demo-x27fv 1/1 Running 0 103s app=rs-demo
  test 1/1 Running 6 (30m ago) 16d run=test
  test1 1/1 Running 6 (30m ago) 16d run=test1
  test2 1/1 Running 6 (30m ago) 16d run=test2
  root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=rs-demo --overwrite
  pod/rs-demo-nzmqs labeled
  root@k8s-deploy:/yaml# kubectl get pods --show-labels
  NAME READY STATUS RESTARTS AGE LABELS
  ng-rc-l7xmp 1/1 Running 0 19m app=ng-rc-80
  ng-rc-wl5d6 1/1 Running 0 19m app=ng-rc-80
  rs-demo-nzmqs 1/1 Running 0 119s app=rs-demo
  rs-demo-v2vb6 1/1 Running 0 119s app=rs-demo
  rs-demo-x27fv 1/1 Running 0 119s app=rs-demo
  test 1/1 Running 6 (30m ago) 16d run=test
  test1 1/1 Running 6 (30m ago) 16d run=test1
  test2 1/1 Running 6 (30m ago) 16d run=test2
  root@k8s-deploy:/yaml#

  Note: when we change a pod's label to something else, the RS controller creates a new pod labelled app=rs-demo, because after the change the selector matches fewer pods than the user defined; when we change the label back to app=rs-demo, the selector suddenly matches more pods than desired, so the RS deletes a pod to bring the number of app=rs-demo pods back in line with the desired count;

  The Deployment controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14149042.html

  Deployment is the third-generation pod replica controller in k8s. It is a higher-level controller than RS: in addition to everything RS does, it adds advanced features, most importantly rolling updates and rollbacks;

  Deployment controller example

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: deploy-demo
    namespace: default
    labels:
      app: deploy-demo
  spec:
    selector:
      matchLabels:
        app: deploy-demo
    replicas: 2
    template:
      metadata:
        labels:
          app: deploy-demo
      spec:
        containers:
        - name: deploy-demo
          image: "harbor.ik8s.cc/baseimages/nginx:v1"
          ports:
          - containerPort: 80
            name: http
          volumeMounts:
          - name: localtime
            mountPath: /etc/localtime
        volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

  Apply the manifest

  Note: the Deployment controller manages the pod count indirectly, by creating an RS controller; the object chain is sketched below.
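
  Roughly, the apply step and the resulting chain of objects look like this (the ReplicaSet and pod names carry a generated hash suffix, so they will differ):

  kubectl apply -f deploy-demo.yaml               # creates deployment.apps/deploy-demo
  kubectl get deploy,rs,pods -l app=deploy-demo   # deployment -> replicaset deploy-demo-<hash> -> 2 pods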

  Updating the pod version by changing the image tag

  Apply the manifest
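
  A sketch of the idea, assuming a hypothetical nginx:v2 tag exists in the registry: change the image field in deploy-demo.yaml and re-apply it.

  # in deploy-demo.yaml: image: "harbor.ik8s.cc/baseimages/nginx:v1"  ->  "harbor.ik8s.cc/baseimages/nginx:v2"
  kubectl apply -f deploy-demo.yaml
  kubectl rollout status deployment/deploy-demo   # watch the rolling update finish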

  Updating the pod version with a command
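
  This is usually done with kubectl set image; a sketch, again assuming a hypothetical v2 tag (deploy-demo is the container name from the manifest above):

  kubectl set image deployment/deploy-demo deploy-demo=harbor.ik8s.cc/baseimages/nginx:v2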

  Viewing the ReplicaSets left behind by updates

  Viewing the rollout history

  Note: the history shows no change-cause information here because it is not recorded by default; to record it, add the --record option by hand, as sketched below
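
  A sketch of these commands (--record is deprecated in newer kubectl releases in favour of the kubernetes.io/change-cause annotation, but it still works):

  kubectl get rs -l app=deploy-demo                # every image change leaves one more ReplicaSet behind
  kubectl rollout history deployment/deploy-demo   # CHANGE-CAUSE is <none> unless it was recorded
  kubectl apply -f deploy-demo.yaml --record       # record the command as the change cause
  kubectl rollout history deployment/deploy-demo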

  Viewing the details of a specific revision

  Note: to see the details of a specific revision, add --revision=<revision number>;
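
  For example (revision 2 is just an illustrative number):

  kubectl rollout history deployment/deploy-demo --revision=2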

  Rolling back to the previous revision

  Note: kubectl rollout undo rolls the deployment back to the previous revision;

  Rolling back to a specific revision

  Note: use the --to-revision option to specify the revision number to roll back to;
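
  A sketch of both rollback forms (the revision number is illustrative):

  kubectl rollout undo deployment/deploy-demo                   # back to the previous revision
  kubectl rollout undo deployment/deploy-demo --to-revision=1   # back to a specific revision
  kubectl rollout history deployment/deploy-demo                # confirm which revision is now current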

  Service resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14161950.html

  Traffic flow of a NodePort service

  A NodePort service mainly solves the problem of clients outside the k8s cluster reaching pods. The flow: an external client accesses the exposed port on any node of the cluster, and that node forwards the traffic to the corresponding pod through its local iptables or ipvs rules; this is how an external client ends up reaching a pod inside the cluster. In practice, to make external access convenient, a load balancer is usually deployed in front of the cluster: external clients access a port on the load balancer, and the load balancer forwards their traffic into the k8s cluster, completing the access to the pod;

  ClusterIP service example

  apiVersion: v1
  kind: Service
  metadata:
    name: ngx-svc
    namespace: default
  spec:
    selector:
      app: deploy-demo
    type: ClusterIP
    ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80

  Apply the manifest

  Note: once a ClusterIP service is created it gets a cluster IP, and the endpoints behind it are associated with pods through the label selector; accessing the service's cluster IP forwards the traffic to the backend endpoint pods. A ClusterIP service can only be reached by clients inside the k8s cluster; external clients cannot reach it, because the cluster IP belongs to the k8s-internal network;
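
  The apply and check steps look roughly like this (the file name is assumed; the cluster IP 10.100.100.23 used in the test below is the one allocated in this environment and will differ elsewhere):

  kubectl apply -f clusterip-svc-demo.yaml   # creates service/ngx-svc
  kubectl get svc ngx-svc                    # TYPE ClusterIP plus the allocated CLUSTER-IP
  kubectl get endpoints ngx-svc              # the pod IP:port pairs matched by the selector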

  Verify: access port 80 of 10.100.100.23 and see whether the backend nginx pod responds

  root@k8s-node01:~# curl 10.100.100.23
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
  html { color-scheme: light dark; }
  body { width: 35em; margin: 0 auto;
  font-family: Tahoma, Verdana, Arial, sans-serif; }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>

  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>

  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>
  root@k8s-node01:~#

  NodePort service example

  apiVersion: v1
  kind: Service
  metadata:
    name: ngx-nodeport-svc
    namespace: default
  spec:
    selector:
      app: deploy-demo
    type: NodePort
    ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30012

  Note: a NodePort service is just a ClusterIP service with type changed to NodePort and, under the ports field, a nodePort entry specifying the node port to expose;

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl apply -f nodeport-svc-demo.yaml
  service/ngx-nodeport-svc created
  root@k8s-deploy:/yaml# kubectl get svc
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 16d
  ngx-nodeport-svc NodePort 10.100.209.225 <none> 80:30012/TCP 11s
  root@k8s-deploy:/yaml# kubectl describe svc ngx-nodeport-svc
  Name: ngx-nodeport-svc
  Namespace: default
  Labels: <none>
  Annotations: <none>
  Selector: app=deploy-demo
  Type: NodePort
  IP Family Policy: SingleStack
  IP Families: IPv4
  IP: 10.100.209.225
  IPs: 10.100.209.225
  Port: http 80/TCP
  TargetPort: 80/TCP
  NodePort: http 30012/TCP
  Endpoints: 10.200.155.178:80,10.200.211.138:80
  Session Affinity: None
  External Traffic Policy: Cluster
  Events: <none>
  root@k8s-deploy:/yaml#

  Verify: access port 30012 on any node of the k8s cluster and see whether the nginx pod can be reached

  root@k8s-deploy:/yaml# curl 192.168.0.34:30012
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
  html { color-scheme: light dark; }
  body { width: 35em; margin: 0 auto;
  font-family: Tahoma, Verdana, Arial, sans-serif; }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>

  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>

  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>
  root@k8s-deploy:/yaml#

  Note: a client outside the cluster can reach the nginx pod through port 30012 of any k8s node; clients inside the cluster can of course still access it through the generated cluster IP;

  root@k8s-node01:~# curl 10.100.209.225:30012
  curl: (7) Failed to connect to 10.100.209.225 port 30012 after 0 ms: Connection refused
  root@k8s-node01:~# curl 127.0.0.1:30012
  curl: (7) Failed to connect to 127.0.0.1 port 30012 after 0 ms: Connection refused
  root@k8s-node01:~# curl 192.168.0.34:30012
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
  html { color-scheme: light dark; }
  body { width: 35em; margin: 0 auto;
  font-family: Tahoma, Verdana, Arial, sans-serif; }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>

  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>

  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>
  root@k8s-node01:~#

  Note: inside the cluster, a client can only access port 80 of the cluster IP, or port 30012 on a node's external IP; the cluster IP does not answer on the node port;

  Volume resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14180752.html

  Mounting NFS in a pod

  Prepare the data directory on the NFS server

  root@harbor:~# cat /etc/exports
  # /etc/exports: the access control list for filesystems which may be exported
  # to NFS clients. See exports(5).
  #
  # Example for NFSv2 and NFSv3:
  # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
  #
  # Example for NFSv4:
  # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
  # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
  #
  /data/k8sdata/kuboard *(rw,no_root_squash)
  /data/volumes *(rw,no_root_squash)
  /pod-vol *(rw,no_root_squash)
  root@harbor:~# mkdir -p /pod-vol
  root@harbor:~# ls /pod-vol -d
  /pod-vol
  root@harbor:~# exportfs -av
  exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exporting *:/pod-vol
  exporting *:/data/volumes
  exporting *:/data/k8sdata/kuboard
  root@harbor:~#

  Mount the NFS directory in a pod

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ngx-nfs-80
    namespace: default
    labels:
      app: ngx-nfs-80
  spec:
    selector:
      matchLabels:
        app: ngx-nfs-80
    replicas: 1
    template:
      metadata:
        labels:
          app: ngx-nfs-80
      spec:
        containers:
        - name: ngx-nfs-80
          image: "harbor.ik8s.cc/baseimages/nginx:v1"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          ports:
          - containerPort: 80
            name: ngx-nfs-80
          volumeMounts:
          - name: localtime
            mountPath: /etc/localtime
          - name: nfs-vol
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: nfs-vol
          nfs:
            server: 192.168.0.42
            path: /pod-vol
        restartPolicy: Always
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: ngx-nfs-svc
    namespace: default
  spec:
    selector:
      app: ngx-nfs-80
    type: NodePort
    ports:
    - name: ngx-nfs-svc
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30013

  Apply the manifest

  root@k8s-deploy:/yaml# kubectl apply -f nfs-vol.yaml
  deployment.apps/ngx-nfs-80 created
  service/ngx-nfs-svc created
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  deploy-demo-6849bdf444-pvsc9 1/1 Running 1 (57m ago) 46h
  deploy-demo-6849bdf444-sg8fz 1/1 Running 1 (57m ago) 46h
  ng-rc-l7xmp 1/1 Running 1 (57m ago) 47h
  ng-rc-wl5d6 1/1 Running 1 (57m ago) 47h
  ngx-nfs-80-66c9697cf4-8pm9k 1/1 Running 0 7s
  rs-demo-nzmqs 1/1 Running 1 (57m ago) 47h
  rs-demo-v2vb6 1/1 Running 1 (57m ago) 47h
  rs-demo-x27fv 1/1 Running 1 (57m ago) 47h
  test 1/1 Running 7 (57m ago) 17d
  test1 1/1 Running 7 (57m ago) 17d
  test2 1/1 Running 7 (57m ago) 17d
  root@k8s-deploy:/yaml# kubectl get svc
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 18d
  ngx-nfs-svc NodePort 10.100.16.14 <none> 80:30013/TCP 15s
  ngx-nodeport-svc NodePort 10.100.209.225 <none> 80:30012/TCP 45h
  root@k8s-deploy:/yaml#

  Provide an index.html file in /pod-vol on the NFS server

  root@harbor:~# echo "this page from nfs server.." >> /pod-vol/index.html
  root@harbor:~# cat /pod-vol/index.html
  this page from nfs server..
  root@harbor:~#

  Access the pod and see whether the index.html on the NFS server can be reached

  root@k8s-deploy:/yaml# curl 192.168.0.35:30013
  this page from nfs server..
  root@k8s-deploy:/yaml#

  Note: the page returned by the pod is exactly the page we just created on the NFS server, which shows the pod mounted the NFS-exported directory correctly;

  PV and PVC resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14188621.html

  Using a static PV/PVC backed by NFS

  Prepare the directory on the NFS server

  root@harbor:~# cat /etc/exports
  # /etc/exports: the access control list for filesystems which may be exported
  # to NFS clients. See exports(5).
  #
  # Example for NFSv2 and NFSv3:
  # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
  #
  # Example for NFSv4:
  # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
  # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
  #
  /data/k8sdata/kuboard *(rw,no_root_squash)
  /data/volumes *(rw,no_root_squash)
  /pod-vol *(rw,no_root_squash)
  /data/k8sdata/myserver/myappdata *(rw,no_root_squash)
  root@harbor:~# mkdir -p /data/k8sdata/myserver/myappdata
  root@harbor:~# exportfs -av
  exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver/myappdata".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

  exporting *:/data/k8sdata/myserver/myappdata
  exporting *:/pod-vol
  exporting *:/data/volumes
  exporting *:/data/k8sdata/kuboard
  root@harbor:~#

  Create the PV

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: myapp-static-pv
    namespace: default
  spec:
    capacity:
      storage: 2Gi
    accessModes:
    - ReadWriteOnce
    nfs:
      path: /data/k8sdata/myserver/myappdata
      server: 192.168.0.42

  Create a PVC bound to the PV

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myapp-static-pvc
    namespace: default
  spec:
    volumeName: myapp-static-pv
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi

  Create a pod that uses the PVC

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ngx-nfs-pvc-80
    namespace: default
    labels:
      app: ngx-pvc-80
  spec:
    selector:
      matchLabels:
        app: ngx-pvc-80
    replicas: 1
    template:
      metadata:
        labels:
          app: ngx-pvc-80
      spec:
        containers:
        - name: ngx-pvc-80
          image: "harbor.ik8s.cc/baseimages/nginx:v1"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          ports:
          - containerPort: 80
            name: ngx-pvc-80
          volumeMounts:
          - name: localtime
            mountPath: /etc/localtime
          - name: data-pvc
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: data-pvc
          persistentVolumeClaim:
            claimName: myapp-static-pvc

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: ngx-pvc-svc
    namespace: default
  spec:
    selector:
      app: ngx-pvc-80
    type: NodePort
    ports:
    - name: ngx-nfs-svc
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30014

  Apply the manifests above

  root@k8s-deploy:/yaml# kubectl apply -f nfs-static-pvc-demo.yaml
  persistentvolume/myapp-static-pv created
  persistentvolumeclaim/myapp-static-pvc created
  deployment.apps/ngx-nfs-pvc-80 created
  service/ngx-pvc-svc created
  root@k8s-deploy:/yaml# kubectl get pv
  NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  myapp-static-pv 2Gi RWO Retain Bound default/myapp-static-pvc 4s
  root@k8s-deploy:/yaml# kubectl get pvc
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  myapp-static-pvc Pending myapp-static-pv 0 7s
  root@k8s-deploy:/yaml# kubectl get pods
  NAME READY STATUS RESTARTS AGE
  deploy-demo-6849bdf444-pvsc9 1/1 Running 1 (151m ago) 47h
  deploy-demo-6849bdf444-sg8fz 1/1 Running 1 (151m ago) 47h
  ng-rc-l7xmp 1/1 Running 1 (151m ago) 2d1h
  ng-rc-wl5d6 1/1 Running 1 (151m ago) 2d1h
  ngx-nfs-pvc-80-f776bb6d-nwwwq 0/1 Pending 0 10s
  rs-demo-nzmqs 1/1 Running 1 (151m ago) 2d
  rs-demo-v2vb6 1/1 Running 1 (151m ago) 2d
  rs-demo-x27fv 1/1 Running 1 (151m ago) 2d
  test 1/1 Running 7 (151m ago) 18d
  test1 1/1 Running 7 (151m ago) 18d
  test2 1/1 Running 7 (151m ago) 18d
  root@k8s-deploy:/yaml#

  Create an index.html in /data/k8sdata/myserver/myappdata on the NFS server and see whether the page can be accessed

  root@harbor:~# echo "this page from nfs-server /data/k8sdata/myserver/myappdata/index.html" >> /data/k8sdata/myserver/myappdata/index.html
  root@harbor:~# cat /data/k8sdata/myserver/myappdata/index.html
  this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
  root@harbor:~#

  Access the pod

  root@harbor:~# curl 192.168.0.36:30014
  this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
  root@harbor:~#

  Using dynamic PVCs backed by NFS

  Create the namespace, service account, ClusterRole, ClusterRoleBinding, Role and RoleBinding

  apiVersion: v1
  kind: Namespace
  metadata:
    name: nfs
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "update", "patch"]
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: nfs
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
  rules:
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
  ---
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: nfs
  roleRef:
    kind: Role
    name: leader-locking-nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io

  Create the StorageClass

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: managed-nfs-storage
  provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
  reclaimPolicy: Retain # PV reclaim policy; the default Delete removes the data on the NFS server as soon as the PV is deleted
  mountOptions:
    #- vers=4.1     # some of these options misbehave with containerd
    #- noresvport   # tell the NFS client to use a new source port when it re-establishes the network connection
    - noatime       # do not update the inode access time when files are read; improves performance under high concurrency
  parameters:
    #mountOptions: "vers=4.1,noresvport,noatime"
    archiveOnDelete: "true" # archive (keep) the data when the claim is deleted; with the default false the data is not kept

  Create the provisioner

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nfs-client-provisioner
    labels:
      app: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
  spec:
    replicas: 1
    strategy: # deployment strategy
      type: Recreate
    selector:
      matchLabels:
        app: nfs-client-provisioner
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
          - name: nfs-client-root
            mountPath: /persistentvolumes
          env:
          - name: PROVISIONER_NAME
            value: k8s-sigs.io/nfs-subdir-external-provisioner
          - name: NFS_SERVER
            value: 192.168.0.42
          - name: NFS_PATH
            value: /data/volumes
        volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.42
            path: /data/volumes

  Create a PVC through the StorageClass

  apiVersion: v1
  kind: Namespace
  metadata:
    name: myserver
  ---
  # Test PVC
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: myserver-myapp-dynamic-pvc
    namespace: myserver
  spec:
    storageClassName: managed-nfs-storage # name of the storageclass to use
    accessModes:
      - ReadWriteMany # access mode
    resources:
      requests:
        storage: 500Mi # requested size

  Create an app that uses the PVC

  kind: Deployment
  #apiVersion: extensions/v1beta1
  apiVersion: apps/v1
  metadata:
    labels:
      app: myserver-myapp
    name: myserver-myapp-deployment-name
    namespace: myserver
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: myserver-myapp-frontend
    template:
      metadata:
        labels:
          app: myserver-myapp-frontend
      spec:
        containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
        volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc

  ---
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: myserver-myapp-service
    name: myserver-myapp-service-name
    namespace: myserver
  spec:
    type: NodePort
    ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30015
    selector:
      app: myserver-myapp-frontend

  Apply the manifests above

  root@k8s-deploy:/yaml/myapp# kubectl apply -f .
  namespace/nfs created
  serviceaccount/nfs-client-provisioner created
  clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
  clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
  role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
  rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
  storageclass.storage.k8s.io/managed-nfs-storage created
  deployment.apps/nfs-client-provisioner created
  namespace/myserver created
  persistentvolumeclaim/myserver-myapp-dynamic-pvc created
  deployment.apps/myserver-myapp-deployment-name created
  service/myserver-myapp-service-name created
  root@k8s-deploy:

  Verify: were the StorageClass, PV and PVC created, and is the pod running normally?

  root@k8s-deploy:/yaml/myapp# kubectl get sc
  NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
  managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Retain Immediate false 105s
  root@k8s-deploy:/yaml/myapp# kubectl get pv
  NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c 500Mi RWX Retain Bound myserver/myserver-myapp-dynamic-pvc managed-nfs-storage 107s
  root@k8s-deploy:/yaml/myapp# kubectl get pvc -n myserver
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  myserver-myapp-dynamic-pvc Bound pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c 500Mi RWX managed-nfs-storage 117s
  root@k8s-deploy:/yaml/myapp# kubectl get pods -n myserver
  NAME READY STATUS RESTARTS AGE
  myserver-myapp-deployment-name-65ff65446f-xpd5p 1/1 Running 0 2m8s
  root@k8s-deploy:/yaml/myapp#

  Note: the PV was created automatically by the StorageClass, and the PVC was bound to it automatically;

  Verify: create an index.html under /data/volumes/ on the NFS server, then access the pod's service and see whether the file can be reached

  root@harbor:/data/volumes# ls
  myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c
  root@harbor:/data/volumes# cd myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c/
  root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# ls
  root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# echo "this page from nfs-server /data/volumes" >> index.html
  root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# cat index.html
  this page from nfs-server /data/volumes
  root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c#

  Note: under /data/volumes on the NFS server a directory is generated automatically, named after the namespace of the pod using the PVC plus the PVC name plus the PV name; this directory is created by the provisioner;

  Access the pod

  root@harbor:~# curl 192.168.0.36:30015/statics/index.html
  this page from nfs-server /data/volumes
  root@harbor:~#

  Note: the file we just created is reachable, which shows the pod mounted the corresponding NFS directory correctly;

  PV/PVC summary

  A PV is an abstraction over underlying network storage: the network storage is defined as a storage resource, and one big storage resource can be split into several pieces that different applications then use.

  A PVC is a request for (a claim on) PV resources; a pod writes its data through the PVC to the PV, and the PV in turn persists it to the real backing storage.

  PersistentVolume parameters

  capacity: the size of the PV; see kubectl explain PersistentVolume.spec.capacity

  accessModes: access modes; see kubectl explain PersistentVolume.spec.accessModes

    ReadWriteOnce - the PV can be mounted read-write by a single node (RWO)

    ReadOnlyMany - the PV can be mounted by many nodes, but read-only (ROX)
    ReadWriteMany - the PV can be mounted read-write by many nodes (RWX)

  persistentVolumeReclaimPolicy: the reclaim policy, i.e. what happens to an already-provisioned volume when it is released:

    Retain - keep the PV and its data after release; an administrator has to clean it up manually in the end

    Recycle - reclaim the space, i.e. delete all data on the volume (including directories and hidden files); only supported for NFS and hostPath

    Delete - delete the backing volume automatically

  volumeMode: the volume mode (kubectl explain PersistentVolume.spec.volumeMode); defines whether the volume is consumed as a raw block device or as a filesystem; the default is Filesystem

  mountOptions: a list of additional mount options for finer-grained control; an example PV combining these parameters follows this list
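
  A hand-written PV that uses these parameters might look roughly like this (all values are illustrative):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-demo
  spec:
    capacity:
      storage: 5Gi
    accessModes:
    - ReadWriteMany                       # RWX
    persistentVolumeReclaimPolicy: Retain # keep the data after the claim is released
    volumeMode: Filesystem                # the default; Block exposes a raw block device
    mountOptions:
    - noatime
    nfs:
      server: 192.168.0.42
      path: /data/volumes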

  Official documentation: Persistent Volumes | Kubernetes

  PersistentVolumeClaim parameters

  accessModes: PVC access modes; see kubectl explain PersistentVolumeClaim.spec.accessModes

    ReadWriteOnce - the PVC can be mounted read-write by a single node (RWO)

    ReadOnlyMany - the PVC can be mounted by many nodes, but read-only (ROX)

    ReadWriteMany - the PVC can be mounted read-write by many nodes (RWX)

  resources: the amount of storage the PVC requests

  selector: a label selector for choosing the PV to bind to

    matchLabels: match PVs by label

    matchExpressions: match PVs with set-based expressions (In, NotIn, Exists, DoesNotExist)

  volumeName: the name of the PV to bind to

  volumeMode: defines whether the claim is for a raw block device or a filesystem; the default is Filesystem; a PVC sketch using these fields follows
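
  A PVC using the selector-based binding described above might look roughly like this (the label and sizes are illustrative):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-demo
    namespace: default
  spec:
    accessModes:
    - ReadWriteMany
    volumeMode: Filesystem
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        pv: pv-demo   # bind only to PVs carrying this label (illustrative)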

  Volume: types of storage volumes

  static: a static volume; the PV has to be created by hand before use, then a PVC is created, bound to the PV and mounted into the pod; suitable for scenarios where the PVs and PVCs are relatively fixed.

  dynamic: a dynamic volume; a StorageClass is created first, and pods that later use a PVC get their volumes created dynamically through the StorageClass; suitable for stateful clusters such as a MySQL primary with replicas, a zookeeper cluster, and so on.

  StorageClass official documentation: Storage Classes | Kubernetes
