K8S From Getting Started to Giving Up (7): Deploying kube-scheduler for the Kubernetes Cluster
Quick summary: kube-scheduler assigns Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been bound to a node and, according to its scheduling policy, selects a node for each of them.
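As a hedged illustration (not one of the deployment steps), once the cluster is running you can observe this behaviour with kubectl: the first command lists Pods together with the node the scheduler bound them to, and the second lists Pods still in the Pending phase, which includes Pods waiting to be scheduled.
## Show which node each Pod was scheduled onto
[root@k8s-master01 ~]# kubectl get pods --all-namespaces -o wide
## List Pods still pending (including Pods not yet assigned a node)
[root@k8s-master01 ~]# kubectl get pods --all-namespaces --field-selector=status.phase=Pending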
First create the certificate and private key: kube-scheduler uses this certificate to connect to the apiserver, and the scheduler's own secure port 10259 serves it as well.
[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"localhost",
"10.10.0.18",
"10.10.0.19",
"10.10.0.20"
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-scheduler",
"OU": "System"
}
]
}
[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/k8s/certs/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number
[WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 certs]# ll kube-scheduler*
-rw-r--r-- root root Apr : kube-scheduler.csr
-rw-r--r-- root root Apr : kube-scheduler-csr.json
-rw------- root root Apr : kube-scheduler-key.pem
-rw-r--r-- root root Apr : kube-scheduler.pem
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler.pem dest=/etc/kubernetes/ssl/'
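A quick sanity check (optional, assuming openssl is installed on the master) confirms that the distributed certificate carries the identity RBAC expects, i.e. CN=system:kube-scheduler and O=system:kube-scheduler, which the apiserver maps to the built-in system:kube-scheduler role binding:
## Verify the subject of the distributed certificate
[root@k8s-master01 ~]# openssl x509 -noout -subject -in /etc/kubernetes/ssl/kube-scheduler.pem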
Next, create the kube-scheduler kubeconfig. Two points about this step: 1) this configuration is required for the kube-scheduler component to enable its secure port and RBAC authentication; 2) the kube-scheduler kubeconfig file contains the master (apiserver) address together with the certificate and private key created in the previous step.
## Set cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.
## Set client authentication parameters
[root@k8s-master01 ~]# kubectl config set-credentials "system:kube-scheduler" \
--client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
--client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.
## Set context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler@kubernetes" created.
## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler@kubernetes".
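Before distributing the file, it can be checked locally (optional); kubectl prints the cluster, user, and current-context entries and redacts the embedded certificate data:
## Inspect the generated kubeconfig
[root@k8s-master01 ~]# kubectl config view --kubeconfig=kube-scheduler.kubeconfig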
## Distribute the kubeconfig file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/root/kube-scheduler.kubeconfig dest=/etc/kubernetes/config/'
Like kube-controller-manager, kube-scheduler binds its insecure port to localhost only and exposes the secure port externally.
[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-scheduler.conf
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
--authentication-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
--bind-address=0.0.0.0 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--secure-port=10259 \
--leader-elect=true \
--port=10251 \
--tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
--v="
## Distribute the configuration file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/cfg/kube-scheduler.conf dest=/etc/kubernetes/config'
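Optionally, before wiring up the systemd unit, confirm on every master that the files referenced by the arguments above are in place (the paths come from the configuration; k8s-master is the ansible group used throughout this series):
## Check that config, kubeconfig and certificates exist on all masters
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'ls -l /etc/kubernetes/config/kube-scheduler.conf /etc/kubernetes/config/kube-scheduler.kubeconfig /etc/kubernetes/ssl/kube-scheduler.pem /etc/kubernetes/ssl/kube-scheduler-key.pem'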
Create the systemd unit file, which must point to the configuration file created above.
[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-scheduler.conf
User=kube
ExecStart=/usr/local/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
## Distribute the unit file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/unit/kube-scheduler.service dest=/usr/lib/systemd/system/'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl enable kube-scheduler.service'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl start kube-scheduler.service'
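A quick way to confirm the service came up on every master is to query systemd and the scheduler's health endpoint; the port below is the insecure port configured above (10251 in this sketch), which is bound to 127.0.0.1 and answers /healthz with "ok" when the process is running:
## Confirm the service is active on all masters
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl is-active kube-scheduler.service'
## Probe the local health endpoint on each master
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'curl -s http://127.0.0.1:10251/healthz'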
[root@k8s-master01 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master03_e0e29681-666b-11e9-b086-000c2920229d","leaseDurationSeconds":15,"acquireTime":"2019-04-24T08:35:14Z","renewTime":"2019-04-24T08:36:08Z","leaderTransitions":0}'
creationTimestamp: "2019-04-24T08:35:14Z"
name: kube-scheduler
namespace: kube-system
resourceVersion: ""
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: e17d5eee-666b-11e9-bdea-000c2920229d
## master03 currently holds the leader lease
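To pull out only the current leader instead of reading the full object, grep for the holderIdentity field inside the leader-election annotation (a small convenience, not required for the deployment):
## Show only the leader-election annotation
[root@k8s-master01 ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml | grep holderIdentity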
Running the following command on any of the three master nodes should return the cluster component status.
[root@k8s-master02 config]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}