I. Directed Scheduling
1. NodeName strategy
nodeName forcibly binds a Pod to the node with the specified name. This approach skips the scheduler's logic entirely and places the Pod directly on the named node.
# View node information
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.0.200 Ready master 8d v1.22.2
192.168.0.201 Ready master 8d v1.22.2
192.168.0.202 Ready node 8d v1.22.2
192.168.0.203 Ready node 6m49s v1.22.2
[root@k8s-master1 ~]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-view
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx-view
  ports:
  - port: 80
    targetPort: 80
    nodePort: 10002
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-view
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-view
  template:
    metadata:
      labels:
        app: nginx-view
    spec:
      nodeName: 192.168.0.200
      containers:
      - name: nginx-view
        image: nginx:1.20
        imagePullPolicy: IfNotPresent
[root@k8s-master1 ~]# kubectl apply -f nginx.yaml
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-6d974c499b-pgfdd 1/1 Running 0 2m59s 172.10.159.151 192.168.0.200 <none> <none>
nginx-view-6d974c499b-wkrcg 1/1 Running 0 2m59s 172.10.159.152 192.168.0.200 <none> <none>
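Because nodeName bypasses the scheduler entirely, it is worth confirming the binding straight from the Pod spec; note that if the named node does not exist, the Pod is never scheduled and stays Pending. The jsonpath expression below is just one way to print the field:
# Print the node each nginx-view Pod was bound to
[root@k8s-master1 ~]# kubectl get pod -l app=nginx-view -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.nodeName}{"\n"}{end}'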
2. NodeSelector strategy
nodeSelector schedules a Pod onto nodes that carry a given label. It is built on Kubernetes's label-selector mechanism: before the Pod is created, the scheduler uses the MatchNodeSelector policy to match labels, finds the target node, and binds the Pod to it; the match is a hard constraint. In short, you label the cluster's nodes and the scheduler places the Pod on a node with the specified label.
# Label the node
kubectl label nodes 192.168.0.200 label_tools=tools-1
[root@k8s-master1 ~]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-view
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx-view
  ports:
  - port: 80
    targetPort: 80
    nodePort: 10002
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-view
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-view
  template:
    metadata:
      labels:
        app: nginx-view
    spec:
      nodeSelector:
        label_tools: tools-1
      containers:
      - name: nginx-view
        image: nginx:1.20
        imagePullPolicy: IfNotPresent
[root@k8s-master1 ~]# kubectl apply -f nginx.yaml
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-7cc97fcc65-8zv9w 1/1 Running 0 73s 172.10.159.153 192.168.0.200 <none> <none>
nginx-view-7cc97fcc65-smgfw 1/1 Running 0 73s 172.10.159.154 192.168.0.200 <none> <none>
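If no node carries the label requested by nodeSelector, the Pod stays Pending until a matching node appears. Should the label need to change later, it can be overwritten or removed; a small sketch reusing the key from this example:
# Re-point the label to another value, or remove it entirely (trailing "-")
kubectl label nodes 192.168.0.200 label_tools=tools-2 --overwrite
kubectl label nodes 192.168.0.200 label_tools-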
II. Node Affinity
1. Overview
Affinity means exactly that: like nodeSelector, it decides which nodes a Pod may be created on based on node labels.
nodeAffinity is divided into two kinds of policy, hard and soft:
Hard policy (requiredDuringSchedulingIgnoredDuringExecution)
The Pod must be placed on a node that satisfies the rules; if no such node exists, scheduling keeps retrying and the Pod stays Pending.
Soft policy (preferredDuringSchedulingIgnoredDuringExecution)
The Pod is preferably placed on a node that satisfies the rules; if none exists, the rules are ignored and the Pod is scheduled normally.
2. Affinity operators
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Gt: the label's value is greater than the given value
Lt: the label's value is less than the given value
Exists: the label exists
DoesNotExist: the label does not exist
3. The operator field
How the operators are used:
- matchExpressions:
  - key: nodeenv              # match nodes that carry a label with key "nodeenv"; only the key is checked
    operator: Exists
  - key: nodeenv              # match nodes whose "nodeenv" label value is "xxx" or "yyy"; key and value must both match
    operator: In
    values: ["xxx","yyy"]
  - key: nodeenv              # match nodes whose "nodeenv" label value is greater than "xxx" (Gt/Lt compare the value as an integer)
    operator: Gt
    values: ["xxx"]
4. Hard affinity
4.1 Node hard affinity
affinity:
  nodeAffinity:
    # The node must satisfy every rule below; this is a hard constraint (scheduling fails if no node matches)
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:            # list of node selector terms
      - matchExpressions:           # requirements expressed against node labels (recommended)
        - key: cpu                  # key
          operator: Gt              # operator
          values:                   # values
          - "6"
        # matchFields:              # requirements expressed against node fields (alternative to matchExpressions)
4.2 Label the nodes
kubectl label nodes 192.168.0.200 label_tools=tools-1
kubectl label nodes 192.168.0.200 cpu=6
kubectl label nodes 192.168.0.201 label_tools=tools-2
kubectl label nodes 192.168.0.201 cpu=12
# View the labels
kubectl get node --show-labels=true
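Before writing affinity rules it can help to see exactly which nodes carry the labels; for example, the labels can be used as a filter or shown as extra columns:
# List only the nodes that have a cpu label
kubectl get node -l cpu
# Show the cpu and label_tools labels as columns
kubectl get node -L cpu,label_tools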
4.3 Configure hard affinity
# The rule must be satisfied, otherwise the Pod is not scheduled
# Here a cpu label must exist with a value greater than 6, so the Pod is scheduled onto host 192.168.0.201
spec:
  containers:
  - name: nginx-view
    image: nginx:1.20
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu
            operator: Gt
            values:
            - "6"
5. Node soft affinity
5.1 Soft affinity
affinity:
  nodeAffinity:
    # Prefer nodes that satisfy the rules below; this is a soft constraint (the Pod is scheduled even if none match)
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50                    # scheduling weight of this preference
      preference:                   # a node selector term, associated with the weight above
        matchExpressions:           # requirements expressed against node labels (recommended)
        - key: cpu                  # key
          operator: Gt              # operator
          values:                   # values
          - "15"
        # matchFields:              # requirements expressed against node fields (alternative)
5.2 Configure soft affinity
# Preferably schedule onto nodes that satisfy the rule; if none do, the Pod is scheduled onto some other node
# Here the rule asks for a cpu label with a value greater than 12; no node matches, but the Pod is scheduled anyway
spec:
  containers:
  - name: nginx-view
    image: nginx:1.20
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: cpu
            operator: Gt
            values:
            - "12"
III. Pod Affinity
1. Topology domains
A topology domain is simply a range: it can be a single node, a rack, a machine room, or a whole region. In practice a domain is delimited by node labels. For example, if three nodes all carry the label prod=dev, those three nodes can be treated as one topology domain; or if every node in the first rack carries pc=pc1 and every node in the second rack carries pc=pc2, then the first rack is one domain and the second rack is another. In short, topology domains are defined by node labels: Node1, Node2 and Node3 might form one domain while Node4, Node5 and Node6 form another.
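As a hedged sketch of how labels carve out domains (the key rack below is made up for illustration), two racks could be modeled like this, and any rule with topologyKey: rack would then see two domains instead of four individual hosts:
# Hypothetical rack labels: .200/.201 form rack-1, .202/.203 form rack-2
kubectl label nodes 192.168.0.200 192.168.0.201 rack=rack-1
kubectl label nodes 192.168.0.202 192.168.0.203 rack=rack-2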
2. Create a tomcat-view Pod
[root@k8s-master1 ~]# cat tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-view
  namespace: default
spec:
  type: NodePort
  selector:
    app: tomcat-view
  ports:
  - port: 80
    targetPort: 80
    nodePort: 10004
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-view
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-view
  template:
    metadata:
      labels:
        app: tomcat-view
    spec:
      containers:
      - name: tomcat-view
        image: nginx:1.20
        imagePullPolicy: IfNotPresent
# Two Pods labeled app=tomcat-view are created, one on each of two nodes
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-view-6c977f87cf-5cvsp 1/1 Running 0 36s 172.10.159.174 192.168.0.200 <none> <none>
tomcat-view-6c977f87cf-b7dsv 1/1 Running 0 30s 172.10.169.142 192.168.0.203 <none> <none>
3. Pod affinity
The topology key is kubernetes.io/hostname, so every node is its own topology domain. The tomcat-view Pods are running on 192.168.0.200 and 192.168.0.203, and because hard Pod affinity is used, the nginx-view Pods must be scheduled onto hosts that already run a Pod labeled app=tomcat-view.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - tomcat-view
      topologyKey: kubernetes.io/hostname
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-9b5786987-pdtxt 1/1 Running 0 37s 172.10.159.177 192.168.0.200 <none> <none>
nginx-view-9b5786987-rd5kh 1/1 Running 0 29s 172.10.169.143 192.168.0.203 <none> <none>
tomcat-view-6c977f87cf-5cvsp 1/1 Running 0 10m 172.10.159.174 192.168.0.200 <none> <none>
tomcat-view-6c977f87cf-b7dsv 1/1 Running 0 10m 172.10.169.142 192.168.0.203 <none> <none>
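To confirm that each nginx-view replica really landed next to a tomcat-view Pod, both apps can be listed together with a set-based label selector:
# Pods of both apps with their node column
kubectl get pod -l 'app in (nginx-view,tomcat-view)' -o wide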
4. Pod anti-affinity
The topology key is again kubernetes.io/hostname, matching every node. The tomcat-view Pods are running on 192.168.0.200 and 192.168.0.203, and because hard Pod anti-affinity is used, the nginx-view Pods must be scheduled onto hosts that do not run a Pod labeled app=tomcat-view.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - tomcat-view
      topologyKey: kubernetes.io/hostname
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-5d974454d5-ljq8c 1/1 Running 0 79s 172.10.224.35 192.168.0.201 <none> <none>
nginx-view-5d974454d5-pj8wk 1/1 Running 0 79s 172.10.224.36 192.168.0.201 <none> <none>
nginx-view-5d974454d5-qc2rf 1/1 Running 0 79s 172.10.36.97 192.168.0.202 <none> <none>
nginx-view-5d974454d5-xkzt9 1/1 Running 0 79s 172.10.36.96 192.168.0.202 <none> <none>
tomcat-view-6c977f87cf-5cvsp 1/1 Running 0 10m 172.10.159.174 192.168.0.200 <none> <none>
tomcat-view-6c977f87cf-b7dsv 1/1 Running 0 10m 172.10.169.142 192.168.0.203 <none> <none>
IV. Affinity in Practice
1. Label the nodes
kubectl label nodes 192.168.0.200 label_tools=tools-1
kubectl label nodes 192.168.0.200 node_number=6
kubectl label nodes 192.168.0.201 label_tools=tools-2
kubectl label nodes 192.168.0.201 node_number=12
# View the labels
kubectl get node --show-labels=true
2. Node affinity hard limit plus Pod anti-affinity soft limit (recommended for production)
# The node rule matches nodes whose cpu label is less than 30
# The anti-affinity on app=nginx-view limits each node to one such Pod; if there are more Pods than nodes, they are still scheduled
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50
      preference:
        matchExpressions:
        - key: cpu
          operator: Lt
          values: ["30"]
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - "nginx-view"
      topologyKey: kubernetes.io/hostname
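The manifest above uses the required form of the anti-affinity; the "soft limit" named in the heading corresponds to the preferred form, which spreads replicas across nodes when possible but still schedules extra replicas once every node already holds one. A minimal sketch of that form:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - "nginx-view"
        topologyKey: kubernetes.io/hostname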
3.1 Two replicas
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-5775468684-jfjfq 1/1 Running 0 66s 172.10.224.49 192.168.0.201 <none> <none>
nginx-view-5775468684-zlpsq 1/1 Running 0 66s 172.10.159.185 192.168.0.200 <none> <none>
3.2 Three replicas
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-view-5775468684-jfjfq 1/1 Running 0 66s 172.10.224.49 192.168.0.201 <none> <none>
nginx-view-5775468684-zlpsq 1/1 Running 0 66s 172.10.159.185 192.168.0.200 <none> <none>
nginx-view-5775468684-dsdgq 1/1 Running 0 68s 172.10.159.141 192.168.0.200 <none> <none>