Summary:
1. The Kubernetes controller manager is a daemon that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state toward the desired state.
2. kube-controller-manager is a stateful service: it modifies cluster state. If the corresponding service on several master nodes were active at the same time, there would be synchronization and consistency problems, so the kube-controller-manager instances on multiple masters must run as active/standby. Kubernetes elects the leader with a lease lock; for kube-controller-manager this is enabled with the startup parameter "--leader-elect=true". The lease-related flags are listed in the sketch below.
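Besides --leader-elect itself, election behaviour is tuned by a few related flags. A quick way to list them on a master (a minimal sketch, assuming the kube-controller-manager binary is already installed in PATH; the defaults noted in the comments are the documented upstream values and may vary between versions):

[root@k8s-master01 ~]# kube-controller-manager --help 2>&1 | grep leader-elect
## Relevant flags and their upstream defaults:
##   --leader-elect=true                  run leader election before doing any work
##   --leader-elect-lease-duration=15s    how long a lease is valid before non-leaders may try to take it
##   --leader-elect-renew-deadline=10s    how long the acting leader has to renew the lease
##   --leader-elect-retry-period=2s       how often candidates retry acquiring or renewing the lease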

1) Create the kube-controller-manager certificate signing request

1. This certificate is used by kube-controller-manager to connect to the apiserver, and the component's own port 10257 also serves with it.
2. Communication between kube-controller-manager and kube-apiserver uses mutual TLS authentication.
[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"hosts": [
"127.0.0.1",
"10.10.0.18",
"10.10.0.19",
"10.10.0.20",
"localhost"
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-controller-manager",
"OU": "System"
}
]
}
1. The hosts list contains the IPs of all kube-controller-manager nodes;
2. CN is system:kube-controller-manager and O is system:kube-controller-manager; the ClusterRoleBinding system:kube-controller-manager, predefined by kube-apiserver for RBAC, binds the user system:kube-controller-manager to the ClusterRole system:kube-controller-manager. This binding can be inspected as shown below.
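To look at that predefined binding directly, assuming kubectl on this master is already configured against the apiserver (a quick check, not part of the original procedure):

[root@k8s-master01 ~]# kubectl describe clusterrolebinding system:kube-controller-manager
## Expect Role ClusterRole/system:kube-controller-manager with a single subject of
## Kind=User, Name=system:kube-controller-manager.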
2) Generate the kube-controller-manager certificate and private key

[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/k8s/certs/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number
[WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
3) View the certificates

[root@k8s-master01 certs]# ll kube-controller-manager*
-rw-r--r-- root root Apr : kube-controller-manager.csr
-rw-r--r-- root root Apr : kube-controller-manager-csr.json
-rw------- root root Apr : kube-controller-manager-key.pem
-rw-r--r-- root root Apr : kube-controller-manager.pem
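Optionally, confirm that the signed certificate carries the expected subject and SANs with openssl (a quick check, not part of the original steps):

[root@k8s-master01 certs]# openssl x509 -in kube-controller-manager.pem -noout -subject -dates
[root@k8s-master01 certs]# openssl x509 -in kube-controller-manager.pem -noout -text | grep -A1 'Subject Alternative Name'
## The subject should show CN=system:kube-controller-manager and O=system:kube-controller-manager,
## and the SANs should list 127.0.0.1, localhost and the three master IPs from the CSR.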
4) Distribute the certificates

[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-controller-manager-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-controller-manager.pem dest=/etc/kubernetes/ssl/'
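A quick follow-up check that both files landed on every master, using the same ansible inventory:

[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'ls -l /etc/kubernetes/ssl/kube-controller-manager*'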
5) Generate the kube-controller-manager.kubeconfig configuration file

This is the configuration kube-controller-manager needs to enable its secure port and RBAC authentication.

## Set the cluster parameters
### --kubeconfig: path and file name of the kubeconfig file; if unset, the configuration is written to ~/.kube/config by default.
### This file is needed later, so we write the configuration to a dedicated file.
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
## Set the client authentication parameters
### --server: specifies the api-server; if it is not set here, the master can be specified in a later script.
### The authenticated user is the "system:kube-controller-manager" signed for above;
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set
## 配置上下文参数
[root@k8s-master01 ~]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager@kubernetes" created.
## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager@kubernetes".
## Distribute the generated kubeconfig file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/root/kube-controller-manager.kubeconfig dest=/etc/kubernetes/config/'
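The generated kubeconfig can be sanity-checked locally before or after distribution; kubectl config view redacts the embedded certificate data unless --raw is passed:

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/root/kube-controller-manager.kubeconfig
## Expect one cluster (kubernetes), one user (system:kube-controller-manager), and
## current-context set to system:kube-controller-manager@kubernetes.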
6) Edit the kube-controller-manager core configuration file

The controller manager binds the insecure port 10252 to 127.0.0.1 so that kubectl get cs returns correct results, and binds the secure port 10257 to 0.0.0.0 to expose it for service calls. Because the controller manager now connects to the apiserver's authenticated port 6443, the --use-service-account-credentials option is needed so that the controller manager runs each controller with its own service account (the default system:kube-controller-manager user does not carry that much privilege).
[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-controller-manager.conf
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 \
                             --authentication-kubeconfig=/etc/kubernetes/config/kube-controller-manager.kubeconfig \
                             --authorization-kubeconfig=/etc/kubernetes/config/kube-controller-manager.kubeconfig \
                             --bind-address=0.0.0.0 \
                             --cluster-name=kubernetes \
                             --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
                             --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
                             --client-ca-file=/etc/kubernetes/ssl/ca.pem \
                             --controllers=*,bootstrapsigner,tokencleaner \
                             --deployment-controller-sync-period=10s \
                             --experimental-cluster-signing-duration=87600h0m0s \
                             --enable-garbage-collector=true \
                             --kubeconfig=/etc/kubernetes/config/kube-controller-manager.kubeconfig \
                             --leader-elect=true \
                             --node-monitor-grace-period=20s \
                             --node-monitor-period=5s \
                             --port=10252 \
                             --pod-eviction-timeout=2m0s \
                             --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
                             --terminated-pod-gc-threshold=50 \
                             --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
                             --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
                             --root-ca-file=/etc/kubernetes/ssl/ca.pem \
                             --secure-port=10257 \
                             --service-cluster-ip-range=10.254.0.0/16 \
                             --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
                             --use-service-account-credentials=true \
                             --v=2"
## Distribute the kube-controller-manager configuration file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/cfg/kube-controller-manager.conf dest=/etc/kubernetes/config'
7) Parameter notes:
  • address / bind-address: address binds the insecure port (set to 127.0.0.1 above); bind-address, default 0.0.0.0, is the IP address on which to listen on --secure-port. The associated interface must be reachable by the rest of the cluster and by CLI/web clients.
  • cluster-name: name of the cluster
  • cluster-signing-cert-file / cluster-signing-key-file: used for cluster-wide certificate signing
  • controllers: list of controllers to start; defaults to "*", which enables every controller except "bootstrapsigner" and "tokencleaner";
  • kubeconfig: path to a kubeconfig file containing authorization information and the master's location
  • leader-elect: run leader election and acquire leadership before executing the main loop
  • service-cluster-ip-range: CIDR range of the cluster's service IP addresses

8) Startup script

[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-controller-manager.conf
User=kube
ExecStart=/usr/local/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
## Distribute the unit file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/unit/kube-controller-manager.service dest=/usr/lib/systemd/system/'
9) Start the service

[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl enable kube-controller-manager'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl start kube-controller-manager'
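A few optional checks after startup (the curl target assumes the insecure port 10252 bound to 127.0.0.1 as configured above; kubectl get cs relies on the componentstatuses API, which this Kubernetes version still serves):

[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl is-active kube-controller-manager'
[root@k8s-master01 ~]# curl http://127.0.0.1:10252/healthz   ## should print "ok"
[root@k8s-master01 ~]# kubectl get cs                        ## controller-manager should report Healthy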
10) Check which host is the leader

[root@k8s-master01 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_aef1b777-6658-11e9-beb0-000c295aa452","leaseDurationSeconds":15,"acquireTime":"2019-04-24T06:18:04Z","renewTime":"2019-04-24T06:20:43Z","leaderTransitions":2}'
creationTimestamp: "2019-04-24T05:55:13Z"
name: kube-controller-manager
namespace: kube-system
resourceVersion: ""
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: 870148c4--11e9-bb69-000c29180723
## As shown above, k8s-master01 is currently the leader node
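Optionally, verify that leader election fails over: stop the service on the current leader, re-check the annotation, and confirm that holderIdentity has moved to another master (node names here follow this document's naming), then restart the service:

[root@k8s-master01 ~]# systemctl stop kube-controller-manager
[root@k8s-master01 ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml | grep holderIdentity
[root@k8s-master01 ~]# systemctl start kube-controller-manager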
