kubeadm Cluster Installation Series - 2. Master High Availability
Installing highly available masters
The VIP load balancing can be implemented with haproxy + keepalived; users on a public cloud can use the provider's load balancer product (e.g. ULB) instead.
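As a rough illustration of the haproxy + keepalived option (an added sketch, not part of the original walkthrough): haproxy forwards the VIP's port 6443 to the masters in TCP mode, while keepalived floats the VIP 10.8.28.200 between the load-balancer nodes. The snippet assumes haproxy runs on dedicated LB nodes (or binds only to the VIP) so port 6443 does not clash with a local apiserver; the master-3 address is a placeholder.
# Minimal haproxy sketch (assumptions: dedicated LB nodes; <MASTER3_IP> is a placeholder)
cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode            tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:6443
    default_backend kube-masters

backend kube-masters
    balance roundrobin
    option  tcp-check
    server  k8s-test-master-1 10.8.31.84:6443    check
    server  k8s-test-master-2 10.8.152.149:6443  check
    server  k8s-test-master-3 <MASTER3_IP>:6443  check
EOF
systemctl restart haproxy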
Prepare the kubeadm-init.yaml file
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: k60p22.go0fadibgqm2xcx8
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.8.31.84
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-test-master-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes-test
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
controlPlaneEndpoint: 10.8.28.200:6443
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
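Optionally (an added suggestion, not part of the original steps), the same config file can be used to pre-pull the control-plane images so that kubeadm init does not stall on downloads:
# Pre-pull all images referenced by the config (imageRepository: k8s.gcr.io)
kubeadm config images pull --config kubeadm-init.yaml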
Install master-1 with kubeadm
Run kubeadm init to initialize the first control-plane node
# Run on master-1
kubeadm init --config kubeadm-init.yaml
Output:
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.8.31.84 10.8.28.200]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-test-master-1 localhost] and IPs [10.8.31.84 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-test-master-1 localhost] and IPs [10.8.31.84 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.002458 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-test-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-test-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[kubelet-check] Initial timeout of 40s passed.
[bootstrap-token] Using token: k60p22.go0fadibgqm2xcx8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.8.28.200:6443 --token k60p22.go0fadibgqm2xcx8 \
    --discovery-token-ca-cert-hash sha256:ffd76aed49c8f52dea15d16132897376176aea4c3ee50370e9369ca6c6c5a6b0 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.8.28.200:6443 --token k60p22.go0fadibgqm2xcx8 \
    --discovery-token-ca-cert-hash sha256:ffd76aed49c8f52dea15d16132897376176aea4c3ee50370e9369ca6c6c5a6b0
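Keep a copy of the join command above. The bootstrap token was created with a 24h TTL, so if it expires before the remaining nodes join, a new worker join command can be printed with the command below (an added tip; for additional control-plane nodes append --control-plane and distribute the certificates as described further down):
kubeadm token create --print-join-command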
Configure kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
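A quick optional check (added here) that kubectl reaches the apiserver through the configured controlPlaneEndpoint:
kubectl cluster-info
# the control plane is expected to be reported at https://10.8.28.200:6443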
Check the current cluster state with kubectl
[root@k8s-test-master-1 k8s-init]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-test-master-1 k8s-init]# kubectl get nodes
NAME                STATUS     ROLES    AGE   VERSION
k8s-test-master-1   NotReady   master   20m   v1.15.1
The node stays NotReady until a pod network add-on is installed; that step is covered below.
Configure the remaining master nodes
Distribute the certificates
Copy the cluster CA certificates, service-account keys and front-proxy CA from master-1 to the other masters:
USER=root
CONTROL_PLANE_IPS="k8s-test-master-2 k8s-test-master-3"
for host in ${CONTROL_PLANE_IPS}; do
    # add "-p <port>" / "-P <port>" if sshd listens on a non-default port
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
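As an alternative to copying the certificates manually, kubeadm 1.15 can distribute them itself; the following is a hedged sketch of that upstream workflow, not the procedure used in this article. On the already initialized master-1, the control-plane certificates can be uploaded as an encrypted Secret, and the printed certificate key is then passed to the join command:
# Upload control-plane certificates and print the decryption key (assumes kubeadm >= 1.15)
kubeadm init phase upload-certs --upload-certs
# On the other masters, join with the printed key instead of copying files:
#   kubeadm join 10.8.28.200:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key-from-the-command-above>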
Join the cluster
Run on master-2; the endpoint is the VIP address.
kubeadm join 10.8.28.200:6443 --token k60p22.go0fadibgqm2xcx8 \
    --discovery-token-ca-cert-hash sha256:ffd76aed49c8f52dea15d16132897376176aea4c3ee50370e9369ca6c6c5a6b0 \
    --control-plane \
    --node-name k8s-test-master-2
Output:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test-master-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.8.152.149 10.8.28.200]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-test-master-2 localhost] and IPs [10.8.152.149 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-test-master-2 localhost] and IPs [10.8.152.149 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-test-master-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-test-master-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
Do the same on master-3:
kubeadm join 10.8.28.200:6443 --token k60p22.go0fadibgqm2xcx8 \
    --discovery-token-ca-cert-hash sha256:ffd76aed49c8f52dea15d16132897376176aea4c3ee50370e9369ca6c6c5a6b0 \
    --control-plane \
    --node-name k8s-test-master-3
Check the master nodes
[root@k8s-test-master-1 ~]# kubectl get nodes
NAME                STATUS     ROLES    AGE    VERSION
k8s-test-master-1   NotReady   master   32m    v1.15.1
k8s-test-master-2   NotReady   master   7m9s   v1.15.1
k8s-test-master-3   NotReady   master   2m4s   v1.15.1
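To confirm the stacked etcd cluster now has three members, list the etcd static pods; the etcdctl call is an added sketch that assumes the kubeadm default certificate paths and that the etcd 3.3 image ships a shell:
# One etcd static pod per master is expected
kubectl -n kube-system get pods -l component=etcd -o wide
# Optional: query the member list from inside one of the etcd pods
kubectl -n kube-system exec etcd-k8s-test-master-1 -- sh -c \
  'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     member list'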
Configure the pod network
This example uses Calico; installation steps for other network plugins can be found in the pod-network chapter of this series.
The default IP pool in the Calico v3.8 manifest (192.168.0.0/16) matches the podSubnet configured above; if you chose a different podSubnet, adjust CALICO_IPV4POOL_CIDR in calico.yaml accordingly.
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
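Once the Calico pods are running, the masters should move from NotReady to Ready (added here as a quick check):
kubectl -n kube-system get pods -o wide | grep calico
kubectl get nodes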
Enable IPVS for kube-proxy
Edit the kube-proxy ConfigMap and set mode: "ipvs" in its config.conf (the ip_vs kernel modules must already be loaded on the nodes).
kubectl edit cm kube-proxy -n kube-system
## configmap/kube-proxy edited
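If you prefer a non-interactive change, the same edit can be scripted; this one-liner is a sketch that assumes the ConfigMap still contains the default empty mode field (mode: ""):
kubectl -n kube-system get cm kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -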
Restart kube-proxy so it picks up the new mode
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
## pod "kube-proxy-52clz" deleted
## pod "kube-proxy-6d7vr" deleted
## pod "kube-proxy-vm6zf" deleted
## pod "kube-proxy-zgtgk" deleted
kubectl logs kube-proxy-6q54d -n kube-system
## I0729 ::09.286574 server_others.go:] Using ipvs Proxier.
## W0729 ::09.286940 proxier.go:] IPVS scheduler not specified, use rr by default
## I0729 ::09.288637 server.go:] Version: v1.15.1
## I0729 ::09.295887 conntrack.go:] Setting nf_conntrack_max to
## I0729 ::09.296683 config.go:] Starting service config controller
## I0729 ::09.296711 controller_utils.go:] Waiting for caches to sync for service config controller
## I0729 ::09.296729 config.go:] Starting endpoints config controller
## I0729 ::09.296737 controller_utils.go:] Waiting for caches to sync for endpoints config controller
## I0729 ::09.396922 controller_utils.go:] Caches are synced for service config controller
## I0729 ::09.396937 controller_utils.go:] Caches are synced for endpoints config controller
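IPVS rules can also be inspected directly on a node (assumes the ipvsadm tool is installed); the kubernetes service address 10.96.0.1:443 should appear as a virtual server:
ipvsadm -Ln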