Docker + K8s Basics (Part 4)


  • Pod controllers

    • A: Pod controller types
  • The ReplicaSet controller
    • A: Introduction to the ReplicaSet controller
    • B: Using the ReplicaSet controller
  • The Deployment controller
    • A: Introduction to and basic use of the Deployment controller
  • The DaemonSet controller
    • A: Introduction to the DaemonSet controller
    • B: Basic use of the DaemonSet controller
    • C: Shared pod fields

♣ I: Pod Controllers

A: Pod Controller Types

A pod created directly from a YAML manifest is not rebuilt after we manually delete it: it is a standalone pod, not one managed by a controller. The pods we started earlier with `kubectl run` were managed by a controller, so after a delete the controller rebuilds an identical new pod; a controller strictly keeps the number of pods it manages in line with the user's desired count.
Deleting controller-managed pods directly is not recommended; instead, adjust the replica count on the controller to reach the state we expect.
A pod controller is essentially a middle layer that manages pods for us and keeps every pod resource in the state we declared. For example, if a container inside a pod fails, the controller tries to restart it; if restarts keep failing, it re-orchestrates and re-deploys the pod according to its internal policy.
If the number of pods falls below the user's target, new pods are created; surplus pods are terminated.
"Controller" is a generic term; there are several concrete controller resource types:
1: ReplicaSet: creates the specified number of pod replicas for the user and keeps that count at the desired value. ReplicaSet also supports scaling up and down, and has replaced the older ReplicationController.
A ReplicaSet has three core components:
1: the user-defined replica count
2: the label selector
3: the pod template
Powerful as ReplicaSet is, we should not use it directly; even Kubernetes itself recommends that users not work with ReplicaSet directly but use Deployment instead.
Deployment (also a controller, but one that does not control pods directly in place of ReplicaSet: it controls ReplicaSets, which in turn control pods, so Deployment is built on top of ReplicaSet rather than on top of pods. Beyond the two capabilities ReplicaSet itself provides, Deployment adds powerful features such as rolling updates and rollback, plus declarative configuration. Declarative configuration lets us define resources according to declared intent and conveniently change the desired state recorded on the apiserver at any time.)
Deployment is currently one of the best controllers available.
Deployment is mainly used to manage stateless applications: when we only care about the group and not about individual pods, Deployment is the right fit.
How controllers place pods:
1: The number of pods can exceed the number of nodes; pods are not matched one-to-one to nodes. Pods beyond the node count are scheduled across nodes by policy, so one node may end up with 5 pods and another with 3. For some services, however, running multiple identical pods on one node is pointless, for example the log collectors of an ELK stack or monitoring agents: one pod per node is enough to collect the logs produced by all the pods on that node, and running more just wastes resources.
Deployment cannot handle this case well. If we need the log-collector pod count to be exactly one per node, and a guarantee that a dead pod is rebuilt precisely where it died, we need another controller: DaemonSet.
DaemonSet:
Ensures that every node in the cluster runs exactly one copy of a specific pod. This not only avoids the problem above, but also means that when a new node joins the cluster it automatically runs that pod. The number of pods managed by this controller is therefore determined directly by the size of your cluster. A pod template and a label selector are, of course, still required.
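A minimal sketch of the DaemonSet just described (the `log-collector` name and fluentd image are illustrative assumptions, not from the text). Note there is no `replicas` field: the node count decides how many pods run.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector        # illustrative name
  namespace: default
spec:
  selector:                  # the label selector is still mandatory
    matchLabels:
      app: log-collector
  template:                  # the pod template is still mandatory
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.4-1   # illustrative log-collector image
```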
Job:
A Job is for tasks that only need to run once at a planned point in time and then exit, with no need to keep running in the background, for example a database backup, which should terminate as soon as the backup completes. There are special cases, though: if, say, MySQL's connection limit is hit or MySQL goes down mid-task, the pod controlled by the Job must finish the assigned task before it can end; if it exits partway, it is rebuilt and retried until the task completes. Job is suited to one-off tasks.
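A minimal Job sketch for the backup scenario above (the image, credentials, and paths are illustrative assumptions). `backoffLimit` and `restartPolicy: OnFailure` give the retry-until-done behavior described:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-backup            # illustrative name
spec:
  backoffLimit: 6            # retry a failed pod up to 6 times
  template:
    spec:
      restartPolicy: OnFailure   # Job pods must use OnFailure or Never
      containers:
      - name: backup
        image: mysql:5.7         # illustrative image
        command: ["sh", "-c", "mysqldump -h mysql -uroot -p$MYSQL_PWD mydb > /backup/mydb.sql"]
```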
CronJob:
CronJob does much the same as Job, but is suited to periodic scheduled tasks. With periodic tasks we must also consider what to do when the next scheduled run arrives before the previous run has finished.
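A CronJob sketch (names and schedule are illustrative). `concurrencyPolicy` is exactly the knob for the question above, i.e. what happens when the next run is due while the previous one is still going:

```yaml
apiVersion: batch/v1beta1        # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: nightly-backup           # illustrative name
spec:
  schedule: "0 2 * * *"          # standard cron syntax: 02:00 every day
  concurrencyPolicy: Forbid      # Allow | Forbid (skip the new run) | Replace (kill the old run)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: mysql:5.7     # illustrative image
            command: ["sh", "-c", "mysqldump mydb > /backup/mydb.sql"]
```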
StatefulSet:
StatefulSet is for managing stateful applications, where the individual matters. For example, if one member of a Redis cluster we created dies, a newly started pod cannot simply replace it, because the data the old Redis instance held may have been lost with it.
StatefulSet manages each pod individually: every pod has its own unique identity and its own data set, and when a failure occurs, the new pod needs a lot of initialization before it can join. Rebuilding such stateful, data-bearing applications after a failure is genuinely hard, because rebuilding and re-establishing replication are completely different for Redis than for MySQL. That means writing all of it as scripts embedded in the StatefulSet's template, which demands extensive manual validation, because once the controller loads the template everything runs automatically, and one mistake can mean data loss.
Whether on Kubernetes or on directly deployed infrastructure, every stateful application faces this dilemma: after a failure, how do we guarantee the data is not lost and quickly bring up a replacement that continues from the previous data? Even if this has been solved for a direct deployment, porting it to Kubernetes is a different situation again.
Kubernetes also supported a special resource type, TPR (ThirdPartyResource), which was replaced by CRD (CustomResourceDefinition) as of version 1.8. Its purpose is custom resources: you can give a target resource its own distinctive management logic and pour that logic into an Operator. Doing this well is hard, so to date not many applications are managed this way.
To make things easier, Kubernetes later gained a tool called Helm, analogous to yum on CentOS: we only define where the storage volume lives, how much memory to use, and so on, and then install directly. Helm already supports many mainstream applications, but those charts often do not fit a given environment out of the box, which is one reason Helm adoption is still limited.
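An illustrative Helm (v3) workflow for the yum-like install described above; the bitnami repository and the chart value names are assumptions, not from the text:

```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# declare just the resources we care about (storage size, memory), then install
helm install my-redis bitnami/redis \
  --set master.persistence.size=8Gi \
  --set master.resources.requests.memory=256Mi
```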

♣ II: The ReplicaSet Controller

A: Introduction to the ReplicaSet controller:

We can inspect the fields with `kubectl explain rs` (rs is the abbreviation for ReplicaSet; `rc`, used below, is actually the abbreviation for the older ReplicationController, whose fields are essentially the same):

  1. [root@www kubeadm]# kubectl explain rc
  2. The top-level fields are shown below
  3. KIND: ReplicationController
  4. VERSION: v1
  5.  
  6. DESCRIPTION:
  7. ReplicationController represents the configuration of a replication
  8. controller.
  9.  
  10. FIELDS:
  11. apiVersion <string>
  12. APIVersion defines the versioned schema of this representation of an
  13. object. Servers should convert recognized schemas to the latest internal
  14. value, and may reject unrecognized values. More info:
  15. https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
  16.  
  17. kind <string>
  18. Kind is a string value representing the REST resource this object
  19. represents. Servers may infer this from the endpoint the client submits
  20. requests to. Cannot be updated. In CamelCase. More info:
  21. https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
  22.  
  23. metadata <Object>
  24. If the Labels of a ReplicationController are empty, they are defaulted to
  25. be the same as the Pod(s) that the replication controller manages. Standard
  26. object's metadata. More info:
  27. https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
  28.  
  29. spec <Object>
  30. Spec defines the specification of the desired behavior of the replication
  31. controller. More info:
  32. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
  33.  
  34. status <Object>
  35. Status is the most recently observed status of the replication controller.
  36. This data may be out of date by some window of time. Populated by the
  37. system. Read-only. More info:
  38. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
  39. spec:
  40. [root@www kubeadm]# kubectl explain rc.spec
  41. KIND: ReplicationController
  42. VERSION: v1
  43.  
  44. RESOURCE: spec <Object>
  45.  
  46. DESCRIPTION:
  47. Spec defines the specification of the desired behavior of the replication
  48. controller. More info:
  49. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
  50.  
  51. ReplicationControllerSpec is the specification of a replication controller.
  52.  
  53. FIELDS:
  54. minReadySeconds <integer>
  55. Minimum number of seconds for which a newly created pod should be ready
  56. without any of its container crashing, for it to be considered available.
  57. Defaults to 0 (pod will be considered available as soon as it is ready)
  58.  
  59. replicas <integer>
  60. Replicas is the number of desired replicas. This is a pointer to
  61. distinguish between explicit zero and unspecified. Defaults to 1. More
  62. info:
  63. https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller
  64.  
  65. selector <map[string]string>
  66. Selector is a label query over pods that should match the Replicas count.
  67. If Selector is empty, it is defaulted to the labels present on the Pod
  68. template. Label keys and values that must match in order to be controlled
  69. by this replication controller, if empty defaulted to labels on Pod
  70. template. More info:
  71. https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
  72.  
  73. template <Object>
  74. Template is the object that describes the pod that will be created if
  75. insufficient replicas are detected. This takes precedence over a
  76. TemplateRef. More info:
  77. https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
  78.  
  79. [root@www kubeadm]#

ReplicaSet field details

The main things to define in a ReplicaSet's spec are:
1: the replica count,
2: the label selector,
3: the pod template

Example (note: label values may not contain spaces, so the original "Public survey" is written here as public-survey):
apiVersion: apps/v1
kind: ReplicaSet              # the resource type is ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2                 # create two pod replicas
  selector:                   # which label selector to use
    matchLabels:              # matchLabels entries are ANDed together
      app: myapp              # multiple labels may be listed
      release: public-survey  # with two labels declared, a pod must carry both to be selected
  template:                   # the pod template
    metadata:                 # the template has two fields, metadata and spec, used exactly as in a kind: Pod manifest
      name: myapp-pod
      labels:                 # these labels must include every matchLabels entry above; extra labels are fine, missing
        app: myapp            # ones are not, otherwise the controller creates a pod, finds it does not satisfy the
        release: public-survey   # selector, creates another, and so on until the environment is flooded with pods
        time: current
    spec:
      containers:
      - name: myapp-test
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

B: Using the ReplicaSet controller:

  1. [root@www TestYaml]# cat pp.yaml
  2. apiVersion: apps/v1
  3. kind: ReplicaSet
  4. metadata:
  5. name: myapp
  6. namespace: default
  7. spec:
  8. replicas: 2
  9. selector:
  10. matchLabels:
  11. app: myapp
  12. template:
  13. metadata:
  14. name: myapp-pod
  15. labels:
  16. app: myapp
  17. spec:
  18. containers:
  19. - name: myapp-containers
  20. image: ikubernetes/myapp:v1
  21.  
  22. [root@www TestYaml]# kubectl get pods
  23. NAME READY STATUS RESTARTS AGE
  24. myapp-7ttch 1/1 Running 0 3m31s
  25. myapp-8w2f2 1/1 Running 0 3m31s
  26. The controller takes the name we defined in the YAML file and automatically appends a random suffix to it
  27. [root@www TestYaml]# kubectl get rs
  28. NAME DESIRED CURRENT READY AGE
  29. myapp 2 2 2 3m35s
  30. [root@www TestYaml]# kubectl describe pods myapp-7ttch
  31. Name: myapp-7ttch
  32. Namespace: default
  33. Priority: 0
  34. PriorityClassName: <none>
  35. Node: www.kubernetes.node1.com/192.168.181.140
  36. Start Time: Sun, 07 Jul 2019 16:07:42 +0800
  37. Labels: app=myapp
  38. Annotations: <none>
  39. Status: Running
  40. IP: 10.244.1.27
  41. Controlled By: ReplicaSet/myapp
  42. Containers:
  43. myapp-containers:
  44. Container ID: docker://17288f7aed7f62a983c35cabfd061a22f94c8e315da475fcfe4b276d49b22e33
  45. Image: ikubernetes/myapp:v1
  46. Image ID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
  47. Port: <none>
  48. Host Port: <none>
  49. State: Running
  50. Started: Sun, 07 Jul 2019 16:07:45 +0800
  51. Ready: True
  52. Restart Count: 0
  53. Environment: <none>
  54. Mounts:
  55. /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5ddf (ro)
  56. Conditions:
  57. Type Status
  58. Initialized True
  59. Ready True
  60. ContainersReady True
  61. PodScheduled True
  62. Volumes:
  63. default-token-h5ddf:
  64. Type: Secret (a volume populated by a Secret)
  65. SecretName: default-token-h5ddf
  66. Optional: false
  67. QoS Class: BestEffort
  68. Node-Selectors: <none>
  69. Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
  70. node.kubernetes.io/unreachable:NoExecute for 300s
  71. Events:
  72. Type Reason Age From Message
  73. ---- ------ ---- ---- -------
  74. Normal Scheduled 16m default-scheduler Successfully assigned default/myapp-7ttch to www.kubernetes.node1.com
  75. Normal Pulled 16m kubelet, www.kubernetes.node1.com Container image "ikubernetes/myapp:v1" already present on machine
  76. Normal Created 16m kubelet, www.kubernetes.node1.com Created container myapp-containers
  77. Normal Started 16m kubelet, www.kubernetes.node1.com Started container myapp-containers
  78. [root@www TestYaml]# kubectl delete pods myapp-7ttch   when we delete the pod 7ttch, the controller immediately creates a replacement pod with the suffix n8lt4
  79. pod "myapp-7ttch" deleted
  80. [root@www ~]# kubectl get pods -w
  81. NAME READY STATUS RESTARTS AGE
  82. myapp-7ttch 1/1 Running 0 18m
  83. myapp-8w2f2 1/1 Running 0 18m
  84. myapp-7ttch 1/1 Terminating 0 18m
  85. myapp-n8lt4 0/1 Pending 0 0s
  86. myapp-n8lt4 0/1 Pending 0 0s
  87. myapp-n8lt4 0/1 ContainerCreating 0 0s
  88. myapp-7ttch 0/1 Terminating 0 18m
  89. myapp-n8lt4 1/1 Running 0 2s
  90. myapp-7ttch 0/1 Terminating 0 18m
  91. myapp-7ttch 0/1 Terminating 0 18m
  92. What happens if we create a new pod and give it the same app=myapp label — how will the controller manage the replica count?
  93. [root@www ~]# kubectl get pods --show-labels
  94. NAME READY STATUS RESTARTS AGE LABELS
  95. myapp-8w2f2 1/1 Running 0 26m app=myapp
  96. myapp-n8lt4 1/1 Running 0 7m53s app=myapp
  97. [root@www ~]#
  98.  
  99. [root@www TestYaml]# kubectl create -f pod-test.yaml
  100. pod/myapp created
  101. [root@www TestYaml]# kubectl get pods --show-labels
  102. NAME READY STATUS RESTARTS AGE LABELS
  103. myapp 0/1 ContainerCreating 0 2s <none>
  104. myapp-8w2f2 1/1 Running 1 41m app=myapp
  105. myapp-n8lt4 1/1 Running 0 22m app=myapp,time=july
  106. mypod-g7rgq 1/1 Running 0 10m app=mypod,time=july
  107. mypod-z86bg 1/1 Running 0 10m app=mypod,time=july
  108. [root@www TestYaml]# kubectl label pods myapp app=myapp   give the newly created pod the app=myapp label
  109. pod/myapp labeled
  110. [root@www TestYaml]# kubectl get pods --show-labels
  111. NAME READY STATUS RESTARTS AGE LABELS
  112. myapp 0/1 Terminating 1 53s app=myapp
  113. myapp-8w2f2 1/1 Running 1 42m app=myapp
  114. myapp-n8lt4 1/1 Running 0 23m app=myapp,time=july
  115. mypod-g7rgq 1/1 Running 0 11m app=mypod,time=july
  116. mypod-z86bg 1/1 Running 0 11m app=mypod,time=july
  117. [root@www TestYaml]# kubectl get pods --show-labels
  118. NAME READY STATUS RESTARTS AGE LABELS
  119. myapp-8w2f2 1/1 Running 1 42m app=myapp   note that any pod whose labels match the ones defined by the controller may be killed off as surplus
  120. myapp-n8lt4 1/1 Running 0 23m app=myapp,time=july
  121. mypod-g7rgq 1/1 Running 0 11m app=mypod,time=july
  122. mypod-z86bg 1/1 Running 0 11m app=mypod,time=july

A ReplicaSet example

One characteristic of ReplicaSet is that it cares only about the group, not about individuals: it controls pod resources strictly by the replica count and labels defined inside it. So when defining a ReplicaSet, make the selector conditions specific enough to avoid the accidental-kill situation shown above.
When using ReplicaSet to create a group of pods, keep in mind that once a pod dies, the replacement the controller starts will certainly have a different address. We therefore add a Service layer in front, give the Service the same labels as the ReplicaSet, and let its label selector route to the backend pods, so that address changes never interrupt access.
Manually scaling a ReplicaSet up or down is also simple.
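The Service layer just described can be sketched like this (the Service name and port numbers are illustrative); its selector carries the same app=myapp label the ReplicaSet selects on, so it keeps routing to whatever pods the controller rebuilds:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc         # illustrative name
  namespace: default
spec:
  selector:
    app: myapp            # same label the ReplicaSet selects on
  ports:
  - name: http
    port: 80              # service port
    targetPort: 80        # container port
```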

  1. [root@www TestYaml]# kubectl edit rs myapp   use the edit subcommand to open myapp's template and simply change the replicas value
  2. .....
  3. spec:
  4. replicas: 5
  5. selector:
  6. matchLabels:
  7. app: myapp
  8. ........
  9. replicaset.extensions/myapp edited
  10. [root@www TestYaml]# kubectl get pods
  11. NAME READY STATUS RESTARTS AGE
  12. myapp-6d4nd 1/1 Running 0 10s
  13. myapp-8w2f2 1/1 Running 1 73m
  14. myapp-c85dt 1/1 Running 0 10s
  15. myapp-n8lt4 1/1 Running 0 54m
  16. myapp-prdmq 1/1 Running 0 10s
  17. mypod-g7rgq 1/1 Running 0 42m
  18. mypod-z86bg 1/1 Running 0 42m

Scaling ReplicaSet resources
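Besides editing the template as above, the same scaling can be done in a single command with kubectl scale (replica counts here are just examples):

```
kubectl scale rs myapp --replicas=5   # scale up to 5
kubectl scale rs myapp --replicas=2   # scale back down to 2
```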

  1. [root@www TestYaml]# curl 10.244.2.8
  2. Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
  3. [root@www TestYaml]# kubectl edit rs myapp
  4. .......
  5. spec:
  6. containers:
  7. - image: ikubernetes/myapp:v2 upgrade to the v2 version
  8. imagePullPolicy: IfNotPresent
  9. .......
  10. replicaset.extensions/myapp edited
  11. NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
  12. myapp 3 3 3 79m myapp-containers ikubernetes/myapp:v2 app=myapp
  13. The image is now the v2 version
  14. [root@www TestYaml]# curl 10.244.2.8
  15. Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
  16. But requests still return v1, because the pods are still running and were never rebuilt; only rebuilt pods will use the v2 image
  17. [root@www TestYaml]# kubectl get pods -o wide
  18. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  19. myapp-6d4nd 1/1 Running 0 10m 10.244.1.30 www.kubernetes.node1.com <none> <none>
  20. myapp-8w2f2 1/1 Running 1 83m 10.244.2.8 www.kubernetes.node2.com <none> <none>
  21. myapp-n8lt4 1/1 Running 0 64m 10.244.1.28 www.kubernetes.node1.com <none> <none>
  22. mypod-g7rgq 1/1 Running 0 52m 10.244.1.29 www.kubernetes.node1.com <none> <none>
  23. mypod-z86bg 1/1 Running 0 52m 10.244.2.9 www.kubernetes.node2.com <none> <none>
  24. [root@www TestYaml]# curl 10.244.1.30   pod myapp-6d4nd still serves v1
  25. Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
  26. [root@www TestYaml]# kubectl delete pods myapp-6d4nd   delete this pod so that it gets rebuilt
  27. pod "myapp-6d4nd" deleted
  28. [root@www TestYaml]# kubectl get pods -o wide   after the rebuild the new pod is myapp-bsdlk
  29. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  30. myapp-8w2f2 1/1 Running 1 83m 10.244.2.8 www.kubernetes.node2.com <none> <none>
  31. myapp-bsdlk 1/1 Running 0 17s 10.244.2.16 www.kubernetes.node2.com <none> <none>
  32. myapp-n8lt4 1/1 Running 0 65m 10.244.1.28 www.kubernetes.node1.com <none> <none>
  33. mypod-g7rgq 1/1 Running 0 52m 10.244.1.29 www.kubernetes.node1.com <none> <none>
  34. mypod-z86bg 1/1 Running 0 52m 10.244.2.9 www.kubernetes.node2.com <none> <none>
  35. [root@www TestYaml]# curl 10.244.2.16   hitting the new pod's address now returns v2
  36. Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
  37. [root@www TestYaml]# curl 10.244.2.8   pods that have not been rebuilt still serve v1
  38. Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Upgrading a ReplicaSet to a new version

  1. [root@www TestYaml]# kubectl delete rs myapp mypod
  2. replicaset.extensions "myapp" deleted
  3. replicaset.extensions "mypod" deleted

Deleting ReplicaSets

The benefit of this approach is a smooth transition during the upgrade, with a buffer period: once the users on the v2 pods report no problems, we quickly update the remaining v1 pods and finish releasing v2, for example via a script. This is a canary release.
As shown:

For critical pods a canary may not be a good update strategy. We can use a blue-green release instead: create a second group of pods from an identical template with a similar label selector. In that case we must consider the access address, so the Service needs to select both the old and the new pods at the same time.

We can also have a Deployment direct traffic to multiple Services, each Service in turn selecting its own pods. For example, with 3 pod replicas, we stop one pod while creating one v2 pod that belongs to a new Service; part of the user traffic is then steered by the
Deployment to the v2 version behind the new Service. We keep stopping one v1 pod and creating one v2 pod until all the pods have been updated.

By default a Deployment retains at most 10 historical ReplicaSets, though this number can be adjusted.
Deployment also provides declarative configuration: instead of creating pods with create, we use apply, and pods created this way do not need edit to change the template; we can patch them, modifying the resource directly from the command line.
Deployment also lets us control the pace and logic of an update.
Suppose a ReplicaSet currently runs 5 pods and those 5 are just enough for the user traffic. The delete-one-then-recreate-one approach above is then risky, because deletion and creation take time, and that gap can be long enough for the remaining pods to be overwhelmed by the load and crash.
Instead, we can allow a few extra pods temporarily during the rolling update: we control exactly how many pods may exceed the defined replica count, and how many may fall below it. If we allow at most 1 extra, the update starts a new pod first, then deletes an old one, starts another new one, deletes another old one, and so on.
If there are many pods and one-by-one is too slow, we can bring up several new pods at a time, for example create 5 new and delete 5 old; this is how we control the update granularity.
The allow-fewer mode works the other way around: delete an old pod first, then create a new one; subtract first, add later.
With at most 1 extra and at most 1 short, on a base of 5 the pod count may range from 4 to 6, so the update can, for example, add 1 and delete 2, then add 2 and delete 2, staying inside that window.
With a base of 5, none allowed short, and up to 5 extra, the update creates 5 new pods and then deletes the 5 old ones: effectively a blue-green deployment.
Among these, rolling update is the default.
All of these update modes must take readiness and liveness into account, so that an old pod is not deleted before the extra new pod is actually ready.
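The update-window rules above map directly onto a Deployment's strategy block. A sketch with at most one extra pod and none allowed short, plus a readinessProbe so a new pod only counts as available once it is actually ready (the probe path is an illustrative assumption):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired 5
      maxUnavailable: 0    # never fewer than 5 available
  template:
    spec:
      containers:
      - name: myapp-containers
        image: ikubernetes/myapp:v1
        readinessProbe:    # old pods are only removed once replacements pass this
          httpGet:
            path: /
            port: 80
```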

♣ III: The Deployment Controller

A: Introduction to and basic use of the Deployment controller:

We described many Deployment-based update approaches above; these are the main fields used in a Deployment:

  1. [root@www TestYaml]# kubectl explain deploy   (deploy is the abbreviation for Deployment)
  2. KIND: Deployment
  3. VERSION: extensions/v1beta1
  4.  
  5. DESCRIPTION:
  6. DEPRECATED - This group version of Deployment is deprecated by
  7. apps/v1beta2/Deployment. See the release notes for more information.
  8. Deployment enables declarative updates for Pods and ReplicaSets.
  9.  
  10. FIELDS:
  11. apiVersion <string>
  12. APIVersion defines the versioned schema of this representation of an
  13. object. Servers should convert recognized schemas to the latest internal
  14. value, and may reject unrecognized values. More info:
  15. https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
  16.  
  17. kind <string>
  18. Kind is a string value representing the REST resource this object
  19. represents. Servers may infer this from the endpoint the client submits
  20. requests to. Cannot be updated. In CamelCase. More info:
  21. https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
  22.  
  23. metadata <Object>
  24. Standard object metadata.
  25.  
  26. spec <Object>
  27. Specification of the desired behavior of the Deployment.
  28.  
  29. status <Object>
  30. Most recently observed status of the Deployment.
  31.  
  32. The top-level field names are the same as ReplicaSet's. Note the VERSION: extensions/v1beta1 group here: the built-in docs lag behind the actual version, and Deployment has since moved to another group,
  33. apps/v1beta2/Deployment, i.e. the apps group
  34.  
  35. [root@www TestYaml]# kubectl explain deploy.spec   the spec fields differ little from ReplicaSet's.
  36. KIND: Deployment
  37. VERSION: extensions/v1beta1
  38.  
  39. RESOURCE: spec <Object>
  40.  
  41. DESCRIPTION:
  42. Specification of the desired behavior of the Deployment.
  43.  
  44. DeploymentSpec is the specification of the desired behavior of the
  45. Deployment.
  46.  
  47. FIELDS:
  48. minReadySeconds <integer>
  49. Minimum number of seconds for which a newly created pod should be ready
  50. without any of its container crashing, for it to be considered available.
  51. Defaults to 0 (pod will be considered available as soon as it is ready)
  52.  
  53. paused <boolean>
  54. Indicates that the deployment is paused and will not be processed by the
  55. deployment controller.
  56.  
  57. progressDeadlineSeconds <integer>
  58. The maximum time in seconds for a deployment to make progress before it is
  59. considered to be failed. The deployment controller will continue to process
  60. failed deployments and a condition with a ProgressDeadlineExceeded reason
  61. will be surfaced in the deployment status. Note that progress will not be
  62. estimated during the time a deployment is paused. This is set to the max
  63. value of int32 (i.e. 2147483647) by default, which means "no deadline".
  64.  
  65. replicas <integer>
  66. Number of desired pods. This is a pointer to distinguish between explicit
  67. zero and not specified. Defaults to 1.
  68.  
  69. revisionHistoryLimit <integer>
  70. The number of old ReplicaSets to retain to allow rollback. This is a
  71. pointer to distinguish between explicit zero and not specified. This is set
  72. to the max value of int32 (i.e. 2147483647) by default, which means
  73. "retaining all old RelicaSets".
  74.  
  75. rollbackTo <Object>
  76. DEPRECATED. The config this deployment is rolling back to. Will be cleared
  77. after rollback is done.
  78.  
  79. selector <Object>
  80. Label selector for pods. Existing ReplicaSets whose pods are selected by
  81. this will be the ones affected by this deployment.
  82.  
  83. strategy <Object>
  84. The deployment strategy to use to replace existing pods with new ones.
  85.  
  86. template <Object> -required-
  87. Template describes the pods that will be created.
  88. Besides the fields shared with ReplicaSet, there are several important additions, such as strategy (defines the update strategy)
  89. Update strategies supported by strategy:
  90. [root@www TestYaml]# kubectl explain deploy.spec.strategy
  91. KIND: Deployment
  92. VERSION: extensions/v1beta1
  93.  
  94. RESOURCE: strategy <Object>
  95.  
  96. DESCRIPTION:
  97. The deployment strategy to use to replace existing pods with new ones.
  98.  
  99. DeploymentStrategy describes how to replace existing pods with new ones.
  100.  
  101. FIELDS:
  102. rollingUpdate <Object>
  103. Rolling update config params. Present only if DeploymentStrategyType =
  104. RollingUpdate.
  105.  
  106. type <string>
  107. Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
  108. RollingUpdate.
  109. 1: Recreate (recreate-style update: kill one, build one; with this type the rollingUpdate field has no effect)
  110. 2: RollingUpdate (rolling update; only when type is RollingUpdate can the rollingUpdate field below be used)
  111. rollingUpdate (its main job is to define the update granularity)
  112. [root@www TestYaml]# kubectl explain deploy.spec.strategy.rollingUpdate
  113. KIND: Deployment
  114. VERSION: extensions/v1beta1
  115.  
  116. RESOURCE: rollingUpdate <Object>
  117.  
  118. DESCRIPTION:
  119. Rolling update config params. Present only if DeploymentStrategyType =
  120. RollingUpdate.
  121.  
  122. Spec to control the desired behavior of rolling update.
  123.  
  124. FIELDS:
  125. maxSurge (during an update, how many pods may exceed the previously defined desired replica count) <string>
  126. The maximum number of pods that can be scheduled above the desired number
  127. of pods. Value can be an absolute number (ex: 5) or a percentage of desired
  128. pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
  129. is calculated from percentage by rounding up. By default, a value of 1 is
  130. used. Example: when this is set to 30%, the new RC can be scaled up
  131. immediately when the rolling update starts, such that the total number of
  132. old and new pods do not exceed 130% of desired pods. Once old pods have
  133. been killed, new RC can be scaled up further, ensuring that total number of
  134. pods running at any time during the update is at most 130% of desired pods.
  135. maxSurge takes two kinds of values: an absolute number (ex: 5), or a percentage of desired pods (ex: 10%)
  136. maxUnavailable (defines how many pods may be unavailable at most) <string>
  137. The maximum number of pods that can be unavailable during the update. Value
  138. can be an absolute number (ex: 5) or a percentage of desired pods (ex:
  139. 10%). Absolute number is calculated from percentage by rounding down. This
  140. can not be 0 if MaxSurge is 0. By default, a fixed value of 1 is used.
  141. Example: when this is set to 30%, the old RC can be scaled down to 70% of
  142. desired pods immediately when the rolling update starts. Once new pods are
  143. ready, old RC can be scaled down further, followed by scaling up the new
  144. RC, ensuring that the total number of pods available at all times during
  145. the update is at least 70% of desired pods.
  146. If both fields were set to 0, no update could ever proceed, so at most one of the two may be 0 while the other must be a positive value
  147.  
  148. revisionHistoryLimit (how many historical versions to keep after rolling updates, so that we can roll back)
  149. [root@www TestYaml]# kubectl explain deploy.spec.revisionHistoryLimit
  150. KIND: Deployment
  151. VERSION: extensions/v1beta1
  152.  
  153. FIELD: revisionHistoryLimit <integer>
  154.  
  155. DESCRIPTION:
  156. The number of old ReplicaSets to retain to allow rollback. This is a
  157. pointer to distinguish between explicit zero and not specified. This is set
  158. to the max value of int32 (i.e. 2147483647) by default, which means
  159. "retaining all old RelicaSets".
  160. The default is 10
  161.  
  162. paused (pause: if we do not want a rollout to proceed immediately after it is triggered, paused lets it wait a while; by default rollouts are not paused)
  163. [root@www TestYaml]# kubectl explain deploy.spec.paused
  164. KIND: Deployment
  165. VERSION: extensions/v1beta1
  166.  
  167. FIELD: paused <boolean>
  168.  
  169. DESCRIPTION:
  170. Indicates that the deployment is paused and will not be processed by the
  171. deployment controller.
  172.  
  173. template (the Deployment drives its ReplicaSet to create the pods automatically)
  174. [root@www TestYaml]# kubectl explain deploy.spec.template
  175. KIND: Deployment
  176. VERSION: extensions/v1beta1
  177.  
  178. RESOURCE: template <Object>
  179.  
  180. DESCRIPTION:
  181. Template describes the pods that will be created.
  182.  
  183. PodTemplateSpec describes the data a pod should have when created from a
  184. template
  185.  
  186. FIELDS:
  187. metadata <Object>
  188. Standard object's metadata. More info:
  189. https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
  190.  
  191. spec <Object>
  192. Specification of the desired behavior of the pod. More info:
  193. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

Deployment field reference

  1. [root@www TestYaml]# cat deploy.test.yaml
  2. apiVersion: apps/v1
  3. kind: Deployment
  4. metadata:
  5. name: mydeploy
  6. namespace: default
  7. spec:
  8. replicas: 2
  9. selector:
  10. matchLabels:
  11. app: mydeploy
  12. release: Internal-measurement
  13. template:
  14. metadata:
  15. labels:
  16. app: mydeploy
  17. release: Internal-measurement
  18. spec:
  19. containers:
  20. - name: myapp-containers
  21. image: ikubernetes/myapp:v1
  22.  
  23. [root@www TestYaml]# kubectl apply -f deploy.test.yaml   this time we create the pod resources declaratively with apply instead of create
  24. deployment.apps/mydeploy created
  25. [root@www TestYaml]# kubectl get deploy
  26. NAME READY UP-TO-DATE AVAILABLE AGE
  27. mydeploy 2/2 2 2 2m
  28. [root@www TestYaml]# kubectl get pods
  29. NAME READY STATUS RESTARTS AGE
  30. mydeploy-74b7786d9b-kq88g 1/1 Running 0 2m4s
  31. mydeploy-74b7786d9b-mp2mb 1/1 Running 0 2m4s
  32. [root@www TestYaml]# kubectl get rs   creating the Deployment automatically created an rs pod resource, and the naming alone reveals the relationship between the deployment, the rs, and the pods
  33. NAME DESIRED CURRENT READY AGE
  34. mydeploy-74b7786d9b 2 2 2 2m40s
  35. [root@www TestYaml]#
  36. The deployment is named mydeploy, the rs is mydeploy-74b7786d9b (note the random-looking string: it is the hash of the pod template), and the pod is mydeploy-74b7786d9b-kq88g
  37. So the rs and the pods are created automatically under the Deployment's control

A Deployment example

  1. Scaling a deployment differs from scaling an rs: we simply modify the YAML template and apply it declaratively to scale.
  2. [root@www TestYaml]# cat deploy.test.yaml
  3. apiVersion: apps/v1
  4. kind: Deployment
  5. metadata:
  6. name: mydeploy
  7. namespace: default
  8. spec:
  9. replicas: 3 increase directly to three
  10. selector:
  11. matchLabels:
  12. app: mydeploy
  13. release: Internal-measurement
  14. template:
  15. metadata:
  16. labels:
  17. app: mydeploy
  18. release: Internal-measurement
  19. spec:
  20. containers:
  21. - name: myapp-containers
  22. image: ikubernetes/myapp:v1
  23. [root@www TestYaml]# kubectl get pods
  24. NAME READY STATUS RESTARTS AGE
  25. mydeploy-74b7786d9b-4bcln 1/1 Running 0 7s   a new pod resource was added immediately
  26. mydeploy-74b7786d9b-kq88g 1/1 Running 0 13m
  27. mydeploy-74b7786d9b-mp2mb 1/1 Running 0 13m
  28. [root@www TestYaml]# kubectl get deploy
  29. NAME READY UP-TO-DATE AVAILABLE AGE
  30. mydeploy 3/3 3 3 14m
  31. [root@www TestYaml]# kubectl get rs
  32. NAME DESIRED CURRENT READY AGE
  33. mydeploy-74b7786d9b 3 3 3 14m
  34. The deployment's and the rs's counts update accordingly
  35. After changing the template, apply declares the change; it is persisted through the apiserver into etcd, and the downstream nodes are then notified to make the corresponding changes
  36. [root@www TestYaml]# kubectl describe deploy mydeploy
  37. Name: mydeploy
  38. Namespace: default
  39. CreationTimestamp: Sun, 07 Jul 2019 21:31:01 +0800
  40. Labels: <none>
  41. Annotations: deployment.kubernetes.io/revision: 1   every change is recorded in the annotations, maintained automatically
  42. kubectl.kubernetes.io/last-applied-configuration:
  43. {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"mydeploy","namespace":"default"},"spec":{"replicas":3,"se...
  44. Selector: app=mydeploy,release=Internal-measurement
  45. Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
  46. StrategyType: RollingUpdate   the default update strategy is rolling update
  47. MinReadySeconds: 0
  48. RollingUpdateStrategy: 25% max unavailable, 25% max surge   both bounds default to 25%
  49. Pod Template:
  50. Labels: app=mydeploy
  51. release=Internal-measurement
  52. Containers:
  53. myapp-containers:
  54. Image: ikubernetes/myapp:v1
  55. Port: <none>
  56. Host Port: <none>
  57. Environment: <none>
  58. Mounts: <none>
  59. Volumes: <none>
  60. Conditions:
  61. Type Status Reason
  62. ---- ------ ------
  63. Progressing True NewReplicaSetAvailable
  64. Available True MinimumReplicasAvailable
  65. OldReplicaSets: <none>
  66. NewReplicaSet: mydeploy-74b7786d9b (3/3 replicas created)
  67. Events:
  68. Type Reason Age From Message
  69. ---- ------ ---- ---- -------
  70. Normal ScalingReplicaSet 17m deployment-controller Scaled up replica set mydeploy-74b7786d9b to 2
  71. Normal ScalingReplicaSet 3m42s deployment-controller Scaled up replica set mydeploy-74b7786d9b to 3
  72. Updating a deployment is just as simple: a pure image change can be done directly with set image, or by modifying the manifest file
  73. [root@www TestYaml]# cat deploy.test.yaml
  74. .......
  75. spec:
  76. containers:
  77. - name: myapp-containers
  78. image: ikubernetes/myapp:v2 upgrade to the v2 version
  79.  
  80. [root@www TestYaml]# kubectl apply -f deploy.test.yaml
  81. deployment.apps/mydeploy configured
  82. [root@www ~]# kubectl get pods -w
  83. NAME READY STATUS RESTARTS AGE
  84. mydeploy-74b7786d9b-8jjvv 1/1 Running 0 82s
  85. mydeploy-74b7786d9b-mp84r 1/1 Running 0 84s
  86. mydeploy-74b7786d9b-qdzc5 1/1 Running 0 86s
  87. mydeploy-6fbdd45d4c-kbcmh 0/1 Pending 0 0s   the update logic is to add one pod first
  88. mydeploy-6fbdd45d4c-kbcmh 0/1 Pending 0 0s
  89. mydeploy-6fbdd45d4c-kbcmh 0/1 ContainerCreating 0 0s   then terminate one, looping until all are replaced
  90. mydeploy-6fbdd45d4c-kbcmh 1/1 Running 0 1s
  91. mydeploy-74b7786d9b-8jjvv 1/1 Terminating 0 99s
  92. mydeploy-6fbdd45d4c-qqgb8 0/1 Pending 0 0s
  93. mydeploy-6fbdd45d4c-qqgb8 0/1 Pending 0 0s
  94. mydeploy-6fbdd45d4c-qqgb8 0/1 ContainerCreating 0 0s
  95. mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 100s
  96. mydeploy-6fbdd45d4c-qqgb8 1/1 Running 0 1s
  97. mydeploy-74b7786d9b-mp84r 1/1 Terminating 0 102s
  98. mydeploy-6fbdd45d4c-ng99s 0/1 Pending 0 0s
  99. mydeploy-6fbdd45d4c-ng99s 0/1 Pending 0 0s
  100. mydeploy-6fbdd45d4c-ng99s 0/1 ContainerCreating 0 0s
  101. mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 103s
  102. mydeploy-6fbdd45d4c-ng99s 1/1 Running 0 2s
  103. mydeploy-74b7786d9b-qdzc5 1/1 Terminating 0 106s
  104. mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 107s
  105. mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 113s
  106. mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 113s
  107. mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 109s
  108. mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 109s
  109. mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 113s
  110. mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 113s
  111. The whole update runs automatically; we only had to specify the image version.
  112. [root@www TestYaml]# kubectl get rs -o wide
  113. NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
  114. mydeploy-6fbdd45d4c 3 3 3 25m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
  115. mydeploy-74b7786d9b 0 0 0 33m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
  116. We now have templates for two image versions: three pods run v2, none run v1, and the two templates' labels are almost identical; the old version is kept around, ready for rollback at any time.
  117. [root@www TestYaml]# kubectl rollout history deployment mydeploy   rollout history shows the number of rollouts and their trail
  118. deployment.extensions/mydeploy
  119. REVISION CHANGE-CAUSE
  120. 3 <none>
  121. 4 <none>
  122.  
  123. [root@www TestYaml]# kubectl rollout undo deployment mydeploy   roll back with rollout undo; it uses the retained old template, and the rollback follows the same add-one/stop-one logic as the upgrade.
  124. deployment.extensions/mydeploy rolled back
  125. [root@www TestYaml]# kubectl get rs -o wide
  126. NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
  127. mydeploy-6fbdd45d4c 0 0 0 34m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
  128. mydeploy-74b7786d9b 3 3 3 41m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
  129. [root@www TestYaml]#
  130. 可以看到v1的版本又回来了

Scaling the Deployment up and down

[root@www TestYaml]# kubectl patch --help
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
 
JSON and YAML formats are accepted.
 
Examples:
# Partially update a node using a strategic merge patch. Specify the patch as JSON.
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
 
# Partially update a node using a strategic merge patch. Specify the patch as YAML.
kubectl patch node k8s-node-1 -p $'spec:\n unschedulable: true'
 
# Partially update a node identified by the type and name specified in "node.json" using strategic merge patch.
kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'
 
# Update a container's image; spec.containers[*].name is required because it's a merge key.
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
 
# Update a container's image using a json patch with positional arrays.
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
 
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
-f, --filename=[]: Filename, directory, or URL to files identifying the resource to update
-k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
--local=false: If true, patch will operate on the content of the file, not the server-side resource.
-o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
-p, --patch='': The patch to be applied to the resource JSON file.
--record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
--template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--type='strategic': The type of patch being provided; one of [json merge strategic]
 
Usage:
kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options]
 
Use "kubectl options" for a list of global command-line options (applies to all commands).
patch can do more than scale a resource; it can modify almost any field.
[root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"replicas":5}}'
The -p option describes a change to a field nested one, two, or three levels deep. Note the quoting: the whole patch is wrapped in single quotes, while every field name inside it must be double-quoted.
deployment.extensions/mydeploy patched
[root@www ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
mydeploy-74b7786d9b-qnqg2 1/1 Running 0 8m41s
mydeploy-74b7786d9b-tz6xk 1/1 Running 0 8m43s
mydeploy-74b7786d9b-vt659 1/1 Running 0 8m45s
mydeploy-74b7786d9b-hlwbp 0/1 Pending 0 0s
mydeploy-74b7786d9b-hlwbp 0/1 Pending 0 0s
mydeploy-74b7786d9b-zpcxb 0/1 Pending 0 0s
mydeploy-74b7786d9b-zpcxb 0/1 Pending 0 0s
mydeploy-74b7786d9b-hlwbp 0/1 ContainerCreating 0 0s
mydeploy-74b7786d9b-zpcxb 0/1 ContainerCreating 0 0s
mydeploy-74b7786d9b-hlwbp 1/1 Running 0 2s
mydeploy-74b7786d9b-zpcxb 1/1 Running 0 2s
Here you can watch the scale-up. Since we rolled back earlier while the deployment's template was once defined as v2, you might expect three v1 pods and two v2 pods now:
[root@www TestYaml]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
mydeploy-6fbdd45d4c 0 0 0 45m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
mydeploy-74b7786d9b 5 5 5 52m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
In fact that is not what happens: patching only one field leaves every other field untouched, so all five pods stay on v1 unless the image is also changed to v2.
The strength of patch is changing a few field values without editing the yaml manifest; it is a poor fit for changing many fields at once, because the command line quickly becomes unwieldy.
[root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'    For example, setting "at most 1 extra, at most 0 unavailable" is already awkward; with more values the structure only gets worse, and applying an edited manifest is more convenient.
deployment.extensions/mydeploy patched (no change)
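Because the -p payload is nested JSON wrapped in single quotes, a mismatched brace is easy to type. A quick local validation before handing the string to kubectl catches that (a sketch using Python's stdlib json.tool; note that for a Deployment the rolling-update knobs live under spec.strategy):

```shell
# The patch we intend to send: maxSurge 1, maxUnavailable 0.
PATCH='{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'

# json.tool exits non-zero on malformed JSON, so this doubles as a lint step.
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch JSON ok"
```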
[root@www TestYaml]# kubectl set image deployment mydeploy myapp-containers=ikubernetes/myapp:v2 && kubectl rollout pause deployment mydeploy    Use set image to update the image directly, pausing the rollout as soon as the first batch is replaced
deployment.extensions/mydeploy image updated
deployment.extensions/mydeploy paused    The rollout is paused right after it starts
[root@www ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
mydeploy-74b7786d9b-hlwbp 1/1 Running 0 30m
mydeploy-74b7786d9b-qnqg2 1/1 Running 0 40m
mydeploy-74b7786d9b-tz6xk 1/1 Running 0 40m
mydeploy-74b7786d9b-vt659 1/1 Running 0 40m
mydeploy-74b7786d9b-zpcxb 1/1 Running 0 30m
mydeploy-6fbdd45d4c-phcp4 0/1 Pending 0 0s
mydeploy-6fbdd45d4c-phcp4 0/1 Pending 0 0s
mydeploy-74b7786d9b-hlwbp 1/1 Terminating 0 33m
mydeploy-6fbdd45d4c-wllm7 0/1 Pending 0 0s
mydeploy-6fbdd45d4c-wllm7 0/1 Pending 0 0s
mydeploy-6fbdd45d4c-wllm7 0/1 ContainerCreating 0 0s
mydeploy-6fbdd45d4c-dc84z 0/1 Pending 0 0s
mydeploy-6fbdd45d4c-dc84z 0/1 Pending 0 0s
mydeploy-6fbdd45d4c-phcp4 0/1 ContainerCreating 0 0s
mydeploy-6fbdd45d4c-dc84z 0/1 ContainerCreating 0 0s
mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m
mydeploy-6fbdd45d4c-wllm7 1/1 Running 0 2s
mydeploy-6fbdd45d4c-phcp4 1/1 Running 0 3s
mydeploy-6fbdd45d4c-dc84z 1/1 Running 0 3s
mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m
mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m
[root@www TestYaml]# kubectl rollout status deployment mydeploy    You can also monitor the update with rollout status
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Because of the pause, the rollout stops after updating a few pods. If those canary pods have been running for hours with no user complaints and you want the rest updated too, use resume:
[root@www ~]# kubectl rollout resume deployment mydeploy    Continue the paused rollout
deployment.extensions/mydeploy resumed
[root@www TestYaml]# kubectl rollout status deployment mydeploy
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 4 of 5 updated replicas are available...
deployment "mydeploy" successfully rolled out
Everything is updated: this is a canary release.
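The whole canary flow can be condensed into a few commands (a sketch; the deployment and container names follow the example above, and the pause window is where you watch the canary pods against a live cluster):

```shell
# Start the rollout and pause it immediately so only the first batch
# (bounded by maxSurge) moves to the new image.
kubectl set image deployment mydeploy myapp-containers=ikubernetes/myapp:v2 && \
  kubectl rollout pause deployment mydeploy

# ...observe the canary pods (logs, metrics, user feedback)...

# Satisfied? Resume and watch the rest roll out.
kubectl rollout resume deployment mydeploy
kubectl rollout status deployment mydeploy
```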

Updating via patch

[root@www TestYaml]# kubectl rollout undo --help
Rollback to a previous rollout.
 
Examples:
# Rollback to the previous deployment
kubectl rollout undo deployment/abc
 
# Rollback to daemonset revision 3
kubectl rollout undo daemonset/abc --to-revision=3    You can specify which revision to roll back to
 
# Rollback to the previous deployment with dry-run
kubectl rollout undo --dry-run=true deployment/abc    If no revision is given, the default is the previous one
 
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
-f, --filename=[]: Filename, directory, or URL to files identifying the resource to get from a server.
-k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
-o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
--template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--to-revision=0: The revision to rollback to. Default to 0 (last revision).
 
Usage:
kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags] [options]
 
Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@www TestYaml]# kubectl rollout undo deployment mydeploy --to-revision=1    With this one command we can roll straight back to a specific revision

Using kubectl rollout undo

♣ Four: The DaemonSet controller

A: Introduction to the DaemonSet controller:

A DaemonSet runs a specified pod on every node of the cluster, with exactly one replica of that pod per node; alternatively it runs the pod only on the nodes that match a selector (for example, when some machines are physical and some are virtual and therefore run different programs, a selector chooses where the pod runs).
It can also map certain host directories into the pod to implement particular functionality.

[root@www TestYaml]# kubectl explain ds.spec    (DaemonSet is abbreviated ds; like the other controllers it has the usual five top-level fields)
KIND: DaemonSet
VERSION: extensions/v1beta1
 
RESOURCE: spec <Object>
 
DESCRIPTION:
The desired behavior of this daemon set. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
 
DaemonSetSpec is the specification of a daemon set.
 
FIELDS:
minReadySeconds <integer>
The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready).
 
revisionHistoryLimit <integer>    (number of old revisions to retain)
The number of old history to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10.
 
selector <Object>
A label query over pods that are managed by the daemon set. Must match in order to be controlled. If empty, defaulted to labels on Pod template. More info:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
 
template <Object> -required-
An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). More info:
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
 
templateGeneration <integer>
DEPRECATED. A sequence number representing a specific generation of the template. Populated by the system. It can be set only during the creation.
 
updateStrategy <Object>    (the update strategy)
An update strategy to replace existing DaemonSet pods with new pods.

DaemonSet field reference

B: Basic usage of the DaemonSet controller:

[root@www TestYaml]# cat ds.test.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
  namespace: default
spec:
  selector:
    matchLabels:
      app: myds
      release: Only
  template:
    metadata:
      labels:
        app: myds
        release: Only
    spec:
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
        env:                                        # filebeat needs the target host and log level up front; they cannot be passed in after startup
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local    # the redis service name + the default namespace + the cluster domain
        - name: REDIS_LOG
          value: info                               # collect logs at the info level
[root@www TestYaml]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
myds 2 2 1 2 1 <none> 4m28s
[root@www TestYaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myds-9kt2j 0/1 ImagePullBackOff 0 2m18s
myds-jt8kd 1/1 Running 0 2m14s
[root@www TestYaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myds-9kt2j 0/1 ImagePullBackOff 0 2m24s 10.244.1.43 www.kubernetes.node1.com <none> <none>
myds-jt8kd 1/1 Running 0 2m20s 10.244.2.30 www.kubernetes.node2.com <none> <none>
Across the nodes only two pods are running, no more and no fewer: however we define it, each node runs exactly one pod controlled by the DaemonSet.

Collecting logs with filebeat

[root@www TestYaml]# cat ds.test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: loginfo
  template:
    metadata:
      labels:
        app: redis
        role: loginfo
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
# Two resource definitions can share one yaml file, separated by ---. This works best when the two objects are related; unrelated objects are better kept in separate files.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
  namespace: default
spec:
  selector:
    matchLabels:
      app: myds
      release: Only
  template:
    metadata:
      labels:
        app: myds
        release: Only
    spec:
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG
          value: info
With this manifest, filebeat collects the redis logs.

Defining multiple resources in one yaml file

[root@www TestYaml]# kubectl explain ds.spec.updateStrategy
KIND: DaemonSet
VERSION: extensions/v1beta1
 
RESOURCE: updateStrategy <Object>
 
DESCRIPTION:
An update strategy to replace existing DaemonSet pods with new pods.
 
FIELDS:
rollingUpdate <Object>
Rolling update config params. Present only if type = "RollingUpdate".
 
type <string>    There are two update types: rolling update, or update-on-delete
Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is OnDelete.
 
The rollingUpdate object:
[root@www TestYaml]# kubectl explain ds.spec.updateStrategy.rollingUpdate
KIND: DaemonSet
VERSION: extensions/v1beta1
 
RESOURCE: rollingUpdate <Object>
 
DESCRIPTION:
Rolling update config params. Present only if type = "RollingUpdate".
 
Spec to control the desired behavior of daemon set rolling update.
 
FIELDS:
maxUnavailable <string>    A ds can only update by deleting first and then recreating, because each node runs at most one pod of the set; this value is therefore tied to the node count -- it is how many nodes' pods are updated at a time
The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of original number of DaemonSet pods are available at all times during the update.
 
[root@www TestYaml]# kubectl set image --help
Update existing container image(s) of resources.
 
Possible resources include (case insensitive):
 
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)
These are the controller types that set image can currently update
[root@www TestYaml]# kubectl set image daemonsets myds mydaemonset=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/myds image updated
[root@www TestYaml]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
myds 2 2 1 0 1 <none> 19m
[root@www TestYaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myds-lmw5d 0/1 ContainerCreating 0 7s
myds-mhw89 1/1 Running 0 19m
redis-fdc8c666b-spqlc 1/1 Running 0 19m
[root@www TestYaml]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myds-lmw5d 0/1 ContainerCreating 0 15s    During the update one pod is stopped first, then the new image is pulled to replace it
myds-mhw89 1/1 Running 0 19m
redis-fdc8c666b-spqlc 1/1 Running 0 19m
.......
myds-546lq 1/1 Running 0 46s    The update has completed

Rolling updates for a DaemonSet
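Since the help text shows the default type for this API version is OnDelete, a manifest that wants automatic rolling updates should say so explicitly. A minimal sketch of the relevant spec fragment (values are illustrative):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # without this, pods may only be replaced when deleted
    rollingUpdate:
      maxUnavailable: 1      # update the pod on at most one node at a time; cannot be 0
```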

C: Pod fields for sharing host namespaces:

A pod's containers can share the host's network namespace; any port such a container listens on is then open directly on the host.
[root@www TestYaml]# kubectl explain pod.spec.hostNetwork
KIND: Pod
VERSION: v1
 
FIELD: hostNetwork <boolean>
 
DESCRIPTION:
Host networking requested for this pod. Use the host's network namespace.
If this option is set, the ports that will be used must be specified.
Default to false.
When a ds controller is created with its pods sharing the host's network namespace, the pods can be reached directly via each node's IP, with no need to expose a port through a service.
Other shareable fields include hostPID and hostIPC.
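As a sketch of how this combines with a DaemonSet (based on the filebeat example above; with hostNetwork: true each pod's ports are reachable on its node's IP, no service required):

```yaml
spec:
  template:
    spec:
      hostNetwork: true                        # share each node's network namespace
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
```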
