k8s Advanced Scheduling (Part 21)
Two categories:
- Node selectors: nodeSelector (label the nodes; the pod pre-selects nodes by matching labels) and nodeName
- Node affinity scheduling: nodeAffinity
1. Node selectors (nodeSelector, nodeName)
[root@master ~]# kubectl explain pods.spec.nodeSelector
[root@master schedule]# pwd
/root/manifests/schedule
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:        # node selector
    disktype: ssd      # run this pod only on a node labeled disktype=ssd
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          8m13s   10.244.1.6   node01   <none>           <none>
[root@master schedule]# kubectl get nodes --show-labels | grep node01
node01   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
# The new pod runs on node01, because node01 carries the disktype=ssd label.
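The nodeSelector predicate is simple set containment: every key/value pair in the pod's nodeSelector must also appear in the node's labels. A minimal Python sketch of that check, using hypothetical label data mirroring the demo cluster:

```python
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    """A pod fits a node iff every nodeSelector pair appears in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical labels matching the demo nodes
node01 = {"kubernetes.io/hostname": "node01", "disktype": "ssd"}
node02 = {"kubernetes.io/hostname": "node02", "disktype": "harddisk"}

selector = {"disktype": "ssd"}
print(matches_node_selector(node01, selector))  # True  -> pod lands on node01
print(matches_node_selector(node02, selector))  # False -> node02 is filtered out
```

An empty nodeSelector matches every node, which is why pods without one can land anywhere.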
Next, label node02, change the nodeSelector in the manifest to match node02's new label, and create the pod again:
[root@master schedule]# kubectl delete -f pod-demo.yaml
[root@master ~]# kubectl label nodes node02 disktype=harddisk
node/node02 labeled
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: harddisk
[root@master schedule]# kubectl get nodes --show-labels | grep node02
node02   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          104s   10.244.2.5   node02   <none>           <none>
The pod is now running on node02.
2. Node affinity scheduling
[root@master scheduler]# kubectl explain pods.spec.affinity
[root@master scheduler]# kubectl explain pods.spec.affinity.nodeAffinity
preferredDuringSchedulingIgnoredDuringExecution: soft affinity (a preference, not a requirement)
requiredDuringSchedulingIgnoredDuringExecution: hard affinity (the rule must be satisfied)
[root@master ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions   # hard affinity
[root@master schedule]# vim pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@master schedule]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1     Pending   0          76s
# The pod stays Pending because no node satisfies the required condition.
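Under requiredDuringSchedulingIgnoredDuringExecution, a node qualifies only when at least one nodeSelectorTerm is satisfied, and a term is satisfied only when all of its matchExpressions hold. A rough sketch of how a single matchExpression evaluates (only a few operators shown; the node labels are hypothetical):

```python
def match_expression(labels, key, operator, values=()):
    """Evaluate one matchExpression against a node's labels (In/NotIn/Exists only)."""
    if operator == "In":
        return labels.get(key) in values
    if operator == "NotIn":
        return labels.get(key) not in values
    if operator == "Exists":
        return key in labels
    raise ValueError(f"unsupported operator: {operator}")

# Neither demo node carries a zone label, so the required expression fails:
node01 = {"kubernetes.io/hostname": "node01", "disktype": "ssd"}
print(match_expression(node01, "zone", "In", ["foo", "bar"]))  # False -> Pending
```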
Next we create a pod with soft affinity. With soft affinity, the scheduler still places the pod on some node even when no node matches the preference.
[root@master schedule]# kubectl delete -f pod-nodeaffinity-demo.yaml
pod "pod-node-affinity-demo" deleted
[root@master schedule]# vim pod-nodeaffinity-demo2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60      # required field, any value in 1-100
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo2.yaml
pod/pod-node-affinity-demo2 created
[root@master schedule]# kubectl get pods   # the pod is running
NAME                      READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo2   1/1     Running   0          74s
pod-node-affinity-demo2 runs because it uses soft affinity: even with no node matching the condition, the scheduler still finds a node for the pod.
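Soft affinity becomes a scoring step rather than a filter: each preference whose matchExpressions a node satisfies adds its weight to that node's score, and the scheduler prefers higher-scoring nodes but never rejects one outright. A small sketch under that model (node and preference data are hypothetical):

```python
def score_node(node_labels, preferences):
    """Sum the weight of every satisfied preference; an unmatched node scores 0."""
    score = 0
    for pref in preferences:
        if node_labels.get(pref["key"]) in pref["values"]:
            score += pref["weight"]
    return score

prefs = [{"key": "zone", "values": ["foo", "bar"], "weight": 60}]
print(score_node({"zone": "foo"}, prefs))      # 60 -> preferred
print(score_node({"disktype": "ssd"}, prefs))  # 0  -> still schedulable
```

A node scoring 0 is merely less preferred, which is why the demo pod ran despite no zone label existing anywhere.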
3. Pod affinity scheduling
For example, in a machine room we can label every machine in one rack so that pods are scheduled with affinity to that rack, or label just a few machines in a rack so that pods have affinity to those specific machines.
# Inspect the manifest fields
[root@master ~]# kubectl explain pods.spec.affinity.podAffinity
FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution   <[]Object>   # soft affinity
   requiredDuringSchedulingIgnoredDuringExecution    <[]Object>   # hard affinity
[root@master ~]# kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
FIELDS:
   labelSelector   <Object>     # selects the set of pods this pod should be co-located with
   namespaces      <[]string>   # namespace of the pods to match; cross-namespace references are rare
   topologyKey     <string> -required-   # the node label key that defines the topology domain
Pod hard-affinity scheduling:
[root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE   VERSION   LABELS
master   Ready    master   77d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   77d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
node02   Ready    <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
# Resource manifest
[root@master schedule]# vim pod-requieed-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox      # the leading dash marks a list item; flow style with brackets also works
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard affinity
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}   # this pod must run next to a pod labeled app=myapp (pod-first's metadata label)
        topologyKey: kubernetes.io/hostname   # the topology domain is keyed by the kubernetes.io/hostname node label
# Create
[root@master schedule]# kubectl apply -f pod-requieed-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          3m25s   10.244.2.9    node02   <none>           <none>
pod-second   1/1     Running   0          3m25s   10.244.2.10   node02   <none>           <none>
# Both pods run on the same node: pod-second's affinity rule binds it to whichever node runs pod-first.
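topologyKey defines what "in the same place" means: two nodes belong to the same topology domain when they carry the same value for that label key. With kubernetes.io/hostname, every node is its own domain, which is why pod-second lands on exactly the node running pod-first. A minimal sketch, with hypothetical node labels:

```python
def same_topology_domain(node_a, node_b, topology_key):
    """Nodes are co-located iff both carry topology_key with the same value."""
    return (topology_key in node_a and topology_key in node_b
            and node_a[topology_key] == node_b[topology_key])

node01 = {"kubernetes.io/hostname": "node01", "zone": "foo"}
node02 = {"kubernetes.io/hostname": "node02", "zone": "foo"}
print(same_topology_domain(node01, node02, "kubernetes.io/hostname"))  # False
print(same_topology_domain(node01, node02, "zone"))                    # True
```

Widening the key (for example to a rack or zone label) widens the set of nodes that count as "together".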
4. Pod anti-affinity scheduling
[root@master ~]# kubectl explain pods.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector
FIELDS:
   matchExpressions   <[]Object>
   matchLabels        <map[string]string>
[root@master schedule]# kubectl delete -f pod-requieed-affinity-demo.yaml   # remove the previous pods
# Resource manifest
[root@master schedule]# vim pod-requieed-Anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
# Create
[root@master schedule]# kubectl apply -f pod-requieed-Anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          53s   10.244.1.7    node01   <none>           <none>
pod-second   1/1     Running   0          53s   10.244.2.11   node02   <none>           <none>
# As expected, pod-first and pod-second are never scheduled onto the same node.
Next, give both nodes the same zone label. Because the policy is podAntiAffinity with topologyKey zone, pod-first and pod-second cannot both run on zone-labeled nodes. The outcome: pod-first runs, while pod-second, being anti-affine with no eligible node left, stays Pending.
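Anti-affinity inverts the co-location rule: a candidate node is rejected when any existing pod matching the labelSelector already runs in the candidate's topology domain. Once both nodes share zone=foo, the whole cluster is one domain, leaving pod-second nowhere to go. A sketch of that feasibility check (pod placement data is hypothetical):

```python
def anti_affinity_allows(candidate_node, existing_pods, selector_app, topology_key):
    """Reject the node if a pod matching the selector sits in its topology domain."""
    domain = candidate_node[topology_key]
    for pod in existing_pods:
        if (pod["labels"].get("app") == selector_app
                and pod["node"][topology_key] == domain):
            return False
    return True

# Both nodes share zone=foo, so once pod-first runs anywhere,
# the entire zone is off limits for pod-second.
node01 = {"kubernetes.io/hostname": "node01", "zone": "foo"}
node02 = {"kubernetes.io/hostname": "node02", "zone": "foo"}
running = [{"labels": {"app": "myapp"}, "node": node02}]
print(anti_affinity_allows(node01, running, "myapp", "zone"))                    # False -> Pending
print(anti_affinity_allows(node01, running, "myapp", "kubernetes.io/hostname"))  # True
```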
# Apply the same zone label to both nodes
[root@master ~]# kubectl label nodes node01 zone=foo
node/node01 labeled
[root@master ~]# kubectl label nodes node02 zone=foo
[root@master schedule]# kubectl delete -f pod-requieed-Anti-affinity-demo.yaml   # remove the previous pods
# Resource manifest
[root@master schedule]# vim pod-requieed-Anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  namespace: default
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone   # the topology key is now the zone node label
# Create
[root@master schedule]# kubectl apply -f pod-requieed-Anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          4s    10.244.2.12   node02   <none>           <none>
pod-second   0/1     Pending   0          4s    <none>        <none>   <none>           <none>
# pod-first runs, while pod-second, anti-affine within the shared zone, has no eligible node and stays Pending.
5. Taint-based scheduling
With taint-based scheduling, the node chooses which pods may run on it: taints are set on nodes, tolerations on pods.
Taint definition:
[root@master ~]# kubectl explain nodes.spec.taints   # taints: define the node's taints
FIELDS:
   effect   <string> -required-   # what happens to pods that do not tolerate the taint; three effects:
      # NoSchedule: affects scheduling only, not pods already running. Pods that do not tolerate the taint can no longer be scheduled here; adding the taint leaves the node's existing pods untouched.
      # NoExecute: affects both scheduling and running pods. Intolerant pods cannot be scheduled here, and existing pods that do not tolerate a newly added taint are evicted.
      # PreferNoSchedule: intolerant pods should not be scheduled here, but may be as a last resort; adding the taint leaves existing pods untouched.
   key        <string> -required-
   timeAdded  <string>
   value      <string>
# Check the nodes' taints
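The three effects differ in whether they gate only new scheduling or also evict pods already running. A toy model of what happens to one pod when its node gains a taint (the scenario data is hypothetical):

```python
def apply_taint_effect(effect, pod_is_running, pod_tolerates):
    """What one taint effect does to one pod that may or may not tolerate it."""
    if pod_tolerates:
        return "unaffected"
    if effect == "NoSchedule":
        return "unaffected" if pod_is_running else "not scheduled"
    if effect == "NoExecute":
        return "evicted" if pod_is_running else "not scheduled"
    if effect == "PreferNoSchedule":
        return "unaffected" if pod_is_running else "avoided if possible"
    raise ValueError(f"unknown effect: {effect}")

print(apply_taint_effect("NoSchedule", pod_is_running=True, pod_tolerates=False))  # unaffected
print(apply_taint_effect("NoExecute", pod_is_running=True, pod_tolerates=False))   # evicted
```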
[root@master ~]# kubectl describe node node01 | grep Taints
Taints:             <none>
[root@master ~]# kubectl describe node node02 | grep Taints
Taints:             <none>
# Check a pod's tolerations
[root@master ~]# kubectl describe pods kube-apiserver-master -n kube-system | grep Tolerations
Tolerations:       :NoExecute
# How to taint a node
[root@master ~]# kubectl taint -h | grep -A 1 Usage
Usage:
  kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]
Taints and tolerations are both user-defined key/value pairs.
Now taint node01 with node-type=production:NoSchedule:
[root@master ~]# kubectl taint node node01 node-type=production:NoSchedule
node/node01 tainted
# This manifest defines no tolerations; since node01 is tainted, every pod should land on node02:
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
# Create
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
# All pods run on node02
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-6b56d98b6b-52hth   1/1     Running   0          9s    10.244.2.15   node02   <none>           <none>
myapp-deploy-6b56d98b6b-dr224   1/1     Running   0          9s    10.244.2.14   node02   <none>           <none>
myapp-deploy-6b56d98b6b-z278x   1/1     Running   0          9s    10.244.2.13   node02   <none>           <none>
Toleration definition:
[root@master ~]# kubectl explain pods.spec.tolerations
FIELDS:
   effect              <string>
   key                 <string>
   operator            <string>    # two values: Exists means the pod tolerates any taint with this key, whatever its value; Equal means the taint's key and value must both match exactly
   tolerationSeconds   <integer>   # grace period before the pod is evicted
   value               <string>
[root@master ~]# kubectl taint node node02 node-type=dev:NoExecute   # give node02 a taint of its own
node/node02 tainted
[root@master schedule]# kubectl delete -f deploy-demo.yaml
# Resource manifest
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"        # the taint's key and value must match exactly
        value: "production"
        effect: "NoSchedule"
# Create the pods
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-779c578779-5vkbw   1/1     Running   0          12s   10.244.1.12   node01   <none>           <none>
myapp-deploy-779c578779-bh9td   1/1     Running   0          12s   10.244.1.11   node01   <none>           <none>
myapp-deploy-779c578779-dn52p   1/1     Running   0          12s   10.244.1.13   node01   <none>           <none>
# All pods run on node01 because we made them tolerate node01's taint (node02's NoExecute taint is not tolerated).
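Whether a pod tolerates a taint comes down to the operator: Equal demands that both key and value match, Exists needs only the key, and an empty effect in the toleration matches any effect. A sketch of that matching rule for one taint against one toleration:

```python
def tolerates(taint, toleration):
    """True if one toleration covers one taint (Equal/Exists operators)."""
    # A non-empty toleration effect must equal the taint's effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration["operator"] == "Exists":
        return toleration["key"] == taint["key"]
    # Equal: key and value must both match exactly.
    return (toleration["key"] == taint["key"]
            and toleration.get("value") == taint["value"])

taint = {"key": "node-type", "value": "production", "effect": "NoSchedule"}
print(tolerates(taint, {"key": "node-type", "operator": "Equal",
                        "value": "production", "effect": "NoSchedule"}))        # True
print(tolerates(taint, {"key": "node-type", "operator": "Exists", "effect": ""}))  # True
print(tolerates(taint, {"key": "node-type", "operator": "Equal",
                        "value": "dev", "effect": "NoSchedule"}))               # False
```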
Next, change operator: "Equal" to operator: "Exists". Exists means the pod tolerates any taint with this key, regardless of value.
[root@master schedule]# kubectl delete -f deploy-demo.yaml
[root@master schedule]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""      # an empty effect tolerates every effect
# Create
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
myapp-deploy-69b95476c8-bfpgj   1/1     Running   0          13s   10.244.2.20   node02   <none>           <none>
myapp-deploy-69b95476c8-fhwbd   1/1     Running   0          13s   10.244.1.17   node01   <none>           <none>
myapp-deploy-69b95476c8-tzzlx   1/1     Running   0          13s   10.244.2.19   node02   <none>           <none>
# Pods now run on both node01 and node02.
Leaving effect unset means every effect is tolerated.
Finally, the taints can be removed:
# Removing a taint: a trailing minus deletes all effects under the given key
[root@master ~]# kubectl taint node node02 node-type-
node/node02 untainted
[root@master ~]# kubectl taint node node01 node-type-
node/node01 untainted