Kubernetes in Practice (25): Installing Highly Available k8s v1.13.x with kubeadm
1. System environment
Installing highly available k8s v1.13.x with kubeadm is fairly straightforward; compared with earlier releases it eliminates many manual steps.
For kubeadm HA installs of k8s v1.11 and v1.12, see the earlier posts in this series.
Host information
Hostname | IP address | Description | Components |
---|---|---|---|
k8s-master01 ~ 03 | 192.168.20.20 ~ 22 | 3 master nodes | keepalived, nginx, etcd, kubelet, kube-apiserver |
k8s-master-lb | 192.168.20.10 | keepalived virtual IP | none |
k8s-node01 ~ 08 | 192.168.20.30 ~ 37 | 8 worker nodes | kubelet |
Host configuration
- [root@k8s-master01 ~]# hostname
- k8s-master01
- [root@k8s-master01 ~]# free -g
- total used free shared buff/cache available
- Mem:
- Swap:
- [root@k8s-master01 ~]# cat /proc/cpuinfo | grep process
- processor :
- processor :
- processor :
- processor :
- [root@k8s-master01 ~]# cat /etc/redhat-release
- CentOS Linux release 7.5. (Core)
Docker and k8s versions
- [root@k8s-master01 ~]# docker version
- Client:
- Version: 17.09.-ce
- API version: 1.32
- Go version: go1.8.3
- Git commit: 19e2cf6
- Built: Thu Dec ::
- OS/Arch: linux/amd64
- Server:
- Version: 17.09.-ce
- API version: 1.32 (minimum version 1.12)
- Go version: go1.8.3
- Git commit: 19e2cf6
- Built: Thu Dec ::
- OS/Arch: linux/amd64
- Experimental: false
- [root@k8s-master01 ~]# kubectl version
- Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
- Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
2. Configure SSH mutual trust
Configure /etc/hosts on all nodes:
- [root@k8s-master01 ~]# cat /etc/hosts
- 192.168.20.20 k8s-master01
- 192.168.20.21 k8s-master02
- 192.168.20.22 k8s-master03
- 192.168.20.10 k8s-master-lb
- 192.168.20.30 k8s-node01
- 192.168.20.31 k8s-node02
Run the following on k8s-master01:
- [root@k8s-master01 ~]# ssh-keygen -t rsa
- Generating public/private rsa key pair.
- Enter file in which to save the key (/root/.ssh/id_rsa):
- Created directory '/root/.ssh'.
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /root/.ssh/id_rsa.
- Your public key has been saved in /root/.ssh/id_rsa.pub.
- The key fingerprint is:
- SHA256:TE0eRfhGNRXL3btmmMRq+awUTkR4RnWrMf6Q5oJaTn0 root@k8s-master01
- The key's randomart image is:
- +---[RSA 2048]----+
- | =*+oo+o|
- | =o+. o.=|
- | . =+ o +o|
- | o . = = .|
- | S + O . |
- | = B = .|
- | + O E = |
- | = o = o |
- | . . ..o |
- +----[SHA256]-----+
- for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Disable the firewall and SELinux on all nodes
- [root@k8s-master01 ~]# systemctl disable --now firewalld NetworkManager
- Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
- Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
- Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
- Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
- Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
- [root@k8s-master01 ~]# setenforce 0
- [root@k8s-master01 ~]# sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
Disable dnsmasq on all nodes (if it is enabled)
- systemctl disable --now dnsmasq
Disable swap on all nodes
- [root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
- vm.swappiness = 0
- [root@k8s-master01 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
Synchronize time on all nodes
- ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
- echo 'Asia/Shanghai' >/etc/timezone
- ntpdate time2.aliyun.com
# add to crontab so it runs periodically
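The time sync can be made periodic with a cron entry; the five-minute schedule below is an assumption (any reasonable interval works), added via `crontab -e`:

```
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1
```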
Configure ulimit on all nodes
- ulimit -SHn 65535
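`ulimit -SHn` only affects the current shell session. To persist the limit across logins, the same value is typically also written to /etc/security/limits.conf — a minimal sketch mirroring the command above:

```
* soft nofile 65535
* hard nofile 65535
```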
Download the installation files on master01
- [root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git -b v1.13.x
Create the yum repos on all nodes
- cd /etc/yum.repos.d
- mkdir bak
- mv *.repo bak/
- cp /root/k8s-ha-install/repo/* .
Upgrade the system on all nodes and reboot
- yum install wget git jq psmisc vim net-tools -y
- yum update -y && reboot
Configure k8s-related kernel parameters on all nodes
- cat <<EOF > /etc/sysctl.d/k8s.conf
- net.ipv4.ip_forward =
- net.bridge.bridge-nf-call-ip6tables =
- net.bridge.bridge-nf-call-iptables =
- fs.may_detach_mounts =
- vm.overcommit_memory=
- vm.panic_on_oom=
- fs.inotify.max_user_watches=
- fs.file-max=
- fs.nr_open=
- net.netfilter.nf_conntrack_max=
- EOF
- sysctl --system
3. Install k8s services
Install docker-ce on all nodes
- yum -y install docker-ce-17.09..ce-.el7.centos
Install the cluster components on all nodes
- yum install -y kubelet-1.13.-.x86_64 kubeadm-1.13.-.x86_64
Enable and start docker and kubelet on all nodes
- systemctl enable docker && systemctl start docker
- [root@k8s-master01 ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
- [root@k8s-master01 ~]# echo $DOCKER_CGROUPS
- cgroupfs
- [root@k8s-master01 ~]# cat >/etc/sysconfig/kubelet<<EOF
- KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
- EOF
- [root@k8s-master01 ~]#
- [root@k8s-master01 ~]# systemctl daemon-reload
- [root@k8s-master01 ~]# systemctl enable kubelet && systemctl start kubelet
- Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
- [root@k8s-master01 ~]#
Note: if kubelet fails to start at this point, it can be ignored; it will start properly once kubeadm generates its configuration.
Install and start keepalived and docker-compose on all master nodes
- yum install -y keepalived
- systemctl enable keepalived && systemctl restart keepalived
- # install docker-compose
- yum install -y docker-compose
4. Install on the master01 node
Perform the following on master01.
Create the configuration files
Modify the configuration as needed; note that nm-bond must be changed to the server's actual NIC name.
- [root@k8s-master01 k8s-ha-install]# ./create-config.sh
- create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml
- create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml
- create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml
- create keepalived files success. config/k8s-master01/keepalived/
- create keepalived files success. config/k8s-master02/keepalived/
- create keepalived files success. config/k8s-master03/keepalived/
- create nginx-lb files success. config/k8s-master01/nginx-lb/
- create nginx-lb files success. config/k8s-master02/nginx-lb/
- create nginx-lb files success. config/k8s-master03/nginx-lb/
- create calico.yaml file success. calico/calico.yaml
- [root@k8s-master01 k8s-ha-install]# pwd
- /root/k8s-ha-install
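The generated kubeadm-config.yaml itself is not shown above. A minimal sketch of what such a file contains for this topology — field values other than the VIP, port, and version are assumptions, the real values come from create-config.sh:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # assumed mirror
controlPlaneEndpoint: "k8s-master-lb:16443"   # VIP + nginx load-balancer port
apiServer:
  certSANs:
  - "k8s-master-lb"
networking:
  podSubnet: "172.168.0.0/16"   # assumed; must match the CIDR in calico.yaml
```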
Distribute the files
- [root@k8s-master01 k8s-ha-install]# export HOST1=k8s-master01
- [root@k8s-master01 k8s-ha-install]# export HOST2=k8s-master02
- [root@k8s-master01 k8s-ha-install]# export HOST3=k8s-master03
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/
- kubeadm-config.yaml % .9MB/s :
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/
- kubeadm-config.yaml % .8KB/s :
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/
- kubeadm-config.yaml % .6KB/s :
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/
- check_apiserver.sh 100% 471 36.4KB/s 00:00
- keepalived.conf 100% 558 69.9KB/s 00:00
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/
- check_apiserver.sh 100% 471 10.8KB/s 00:00
- keepalived.conf 100% 558 275.5KB/s 00:00
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/
- check_apiserver.sh 100% 471 12.7KB/s 00:00
- keepalived.conf 100% 558 1.1MB/s 00:00
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST1/nginx-lb $HOST1:/root/
- docker-compose.yaml 100% 213 478.6KB/s 00:00
- nginx-lb.conf 100% 1036 2.6MB/s 00:00
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST2/nginx-lb $HOST2:/root/
- docker-compose.yaml 100% 213 12.5KB/s 00:00
- nginx-lb.conf 100% 1036 35.5KB/s 00:00
- [root@k8s-master01 k8s-ha-install]# scp -r config/$HOST3/nginx-lb $HOST3:/root/
- docker-compose.yaml 100% 213 20.5KB/s 00:00
- nginx-lb.conf 100% 1036 94.3KB/s 00:00
Start nginx on all master nodes
- cd
- docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d
- docker-compose --file=/root/nginx-lb/docker-compose.yaml ps
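The nginx-lb container does TCP (stream) load balancing from port 16443 to each apiserver's 6443. A sketch of the stream block in nginx-lb.conf — the timeout/fail parameters are assumptions, the actual file is generated by create-config.sh:

```nginx
stream {
    upstream apiserver {
        server 192.168.20.20:6443 max_fails=3 fail_timeout=10s;
        server 192.168.20.21:6443 max_fails=3 fail_timeout=10s;
        server 192.168.20.22:6443 max_fails=3 fail_timeout=10s;
    }
    server {
        listen 16443;
        proxy_pass apiserver;
    }
}
```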
Restart keepalived
- systemctl restart keepalived
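For reference, the generated keepalived.conf on each master follows the usual VRRP pattern. The sketch below is an assumption of its shape (router id, priority, and auth values are placeholders; `interface` must match the real NIC, as noted earlier):

```
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on master02/03
    interface nm-bond            # change to the actual NIC name
    virtual_router_id 51
    priority 100                 # lower on master02/03
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha         # placeholder
    }
    virtual_ipaddress {
        192.168.20.10
    }
    track_script {
        check_apiserver
    }
}
```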
Pull the images in advance
- kubeadm config images pull --config /root/kubeadm-config.yaml
Initialize the cluster
- kubeadm init --config /root/kubeadm-config.yaml
- ....
- kubeadm join k8s-master-lb:16443 --token cxwr3f.2knnb1gj83ztdg9l --discovery-token-ca-cert-hash sha256:41718412b5d2ccdc8b7326fd440360bf186a21dac4a0769f460ca4bdaf5d2825
- ....
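If this token or hash is lost later, both can be recovered: `kubeadm token create --print-join-command` issues a fresh token, and the CA hash can be recomputed with the standard kubeadm openssl recipe (wrapped in a small helper function here so the cert path can be passed in):

```shell
# Recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate:
# SHA-256 over the DER-encoded public key of the cert.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On a master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```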
- [root@k8s-master01 ~]# cat <<EOF >> ~/.bashrc
- export KUBECONFIG=/etc/kubernetes/admin.conf
- EOF
- [root@k8s-master01 ~]# source ~/.bashrc
- [root@k8s-master01 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- k8s-master01 NotReady master 2m11s v1.13.2
Check the pod status
- [root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- coredns-89cc84847-2h7r6 / ContainerCreating 3m12s <none> k8s-master01 <none> <none>
- coredns-89cc84847-fhwbr / ContainerCreating 3m12s <none> k8s-master01 <none> <none>
- etcd-k8s-master01 / Running 2m31s 192.168.20.20 k8s-master01 <none> <none>
- kube-apiserver-k8s-master01 / Running 2m36s 192.168.20.20 k8s-master01 <none> <none>
- kube-controller-manager-k8s-master01 / Running 2m39s 192.168.20.20 k8s-master01 <none> <none>
- kube-proxy-kb95s / Running 3m12s 192.168.20.20 k8s-master01 <none> <none>
- kube-scheduler-k8s-master01 / Running 2m46s 192.168.20.20 k8s-master01 <none> <none>
At this point CoreDNS is stuck in ContainerCreating, with events like:
- Normal Scheduled 2m51s default-scheduler Successfully assigned kube-system/coredns-89cc84847-2h7r6 to k8s-master01
- Warning NetworkNotReady 2m3s (x25 over 2m51s) kubelet, k8s-master01 network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
This is because no network plugin has been installed yet; it can be ignored for now.
Install Calico
- [root@k8s-master01 k8s-ha-install]# kubectl create -f calico/
- configmap/calico-config created
- service/calico-typha created
- deployment.apps/calico-typha created
- poddisruptionbudget.policy/calico-typha created
- daemonset.extensions/calico-node created
- serviceaccount/calico-node created
- customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
- customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
- clusterrole.rbac.authorization.k8s.io/calico-node created
- clusterrolebinding.rbac.authorization.k8s.io/calico-node created
Check again
- [root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-node-tp2dz / Running 42s
- coredns-89cc84847-2djpl / Running 66s
- coredns-89cc84847-vt6zq / Running 66s
- etcd-k8s-master01 / Running 27s
- kube-apiserver-k8s-master01 / Running 16s
- kube-controller-manager-k8s-master01 / Running 34s
- kube-proxy-x497d / Running 66s
- kube-scheduler-k8s-master01 / Running 17s
5. High-availability configuration
Copy the certificates to the other masters
- USER=root
- CONTROL_PLANE_IPS="k8s-master02 k8s-master03"
- for host in $CONTROL_PLANE_IPS; do
- ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
- scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/ca.crt
- scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/ca.key
- scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/sa.key
- scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/sa.pub
- scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.crt
- scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.key
- scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
- scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
- scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/admin.conf
- done
Perform the following on master02.
Pull the images in advance
- kubeadm config images pull --config /root/kubeadm-config.yaml
Join master02 to the cluster; the only parameter that differs from joining a worker node is --experimental-control-plane
- kubeadm join k8s-master-lb:16443 --token cxwr3f.2knnb1gj83ztdg9l --discovery-token-ca-cert-hash sha256:41718412b5d2ccdc8b7326fd440360bf186a21dac4a0769f460ca4bdaf5d2825 --experimental-control-plane
- ......
- This node has joined the cluster and a new control plane instance was created:
- * Certificate signing request was sent to apiserver and approval was received.
- * The Kubelet was informed of the new secure connection details.
- * Master label and taint were applied to the new node.
- * The Kubernetes control plane instances scaled up.
- * A new etcd member was added to the local/stacked etcd cluster.
- To start administering your cluster from this node, you need to run the following as a regular user:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Run 'kubectl get nodes' to see this node join the cluster.
Check the status from master01
- [root@k8s-master01 k8s-ha-install]# kubectl get no
- NAME STATUS ROLES AGE VERSION
- k8s-master01 Ready master 15m v1.13.2
- k8s-master02 Ready master 9m55s v1.13.2
The other master nodes join in the same way.
Check the final master status
- [root@k8s-master01 ~]# kubectl get po -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-node-49dwr / Running 26m
- calico-node-kz2d4 / Running 22m
- calico-node-zwnmq / Running 4m6s
- coredns-89cc84847-dgxlw / Running 27m
- coredns-89cc84847-n77x6 / Running 27m
- etcd-k8s-master01 / Running 27m
- etcd-k8s-master02 / Running 22m
- etcd-k8s-master03 / Running 4m5s
- kube-apiserver-k8s-master01 / Running 27m
- kube-apiserver-k8s-master02 / Running 22m
- kube-apiserver-k8s-master03 / Running 4m6s
- kube-controller-manager-k8s-master01 / Running 27m
- kube-controller-manager-k8s-master02 / Running 22m
- kube-controller-manager-k8s-master03 / Running 4m6s
- kube-proxy-f9qc5 / Running 27m
- kube-proxy-k55bg / Running 22m
- kube-proxy-kbg9c / Running 4m6s
- kube-scheduler-k8s-master01 / Running 27m
- kube-scheduler-k8s-master02 / Running 22m
- kube-scheduler-k8s-master03 / Running 4m6s
- [root@k8s-master01 ~]# kubectl get no
- NAME STATUS ROLES AGE VERSION
- k8s-master01 Ready master 28m v1.13.2
- k8s-master02 Ready master 22m v1.13.2
- k8s-master03 Ready master 4m16s v1.13.2
- [root@k8s-master01 ~]# kubectl get csr
- NAME AGE REQUESTOR CONDITION
- csr-6mqbv 28m system:node:k8s-master01 Approved,Issued
- node-csr-GPLcR1G4Nchf-zuB5DaTWncoluMuENUfKvWKs0j2GdQ 23m system:bootstrap:9zp70m Approved,Issued
- node-csr-cxAxrkllyidkBuZ8fck6fwq-ht1_u6s0snbDErM8bIs 4m51s system:bootstrap:9zp70m Approved,Issued
On all master nodes, configure how HPA collects metrics by editing the kube-controller-manager manifest
- vi /etc/kubernetes/manifests/kube-controller-manager.yaml
- - --horizontal-pod-autoscaler-use-rest-clients=false
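Since kube-controller-manager runs as a static pod, the kubelet re-creates it automatically once the manifest file is saved. The flag belongs in the container's command list, in context roughly like this (surrounding flags abbreviated):

```yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    # ... existing flags ...
    - --horizontal-pod-autoscaler-use-rest-clients=false
```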
6. Join worker nodes to the cluster
- kubeadm join 192.168.20.10: --token ll4usb.qmplnofiv7z1j0an --discovery-token-ca-cert-hash sha256:e88a29f62ab77a59bf88578abadbcd37e89455515f6ecf3ca371656dc65b1d6e
- ......
- [kubelet-start] Activating the kubelet service
- [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
- [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node02" as an annotation
- This node has joined the cluster:
- * Certificate signing request was sent to apiserver and a response was received.
- * The Kubelet was informed of the new secure connection details.
- Run 'kubectl get nodes' on the master to see this node join the cluster.
Check from a master node
- [root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-node-49dwr / Running 13h
- calico-node-9nmhb / Running 11m
- calico-node-k5nmt / Running 11m
- calico-node-kz2d4 / Running 13h
- calico-node-zwnmq / Running 13h
- coredns-89cc84847-dgxlw / Running 13h
- coredns-89cc84847-n77x6 / Running 13h
- etcd-k8s-master01 / Running 13h
- etcd-k8s-master02 / Running 13h
- etcd-k8s-master03 / Running 13h
- kube-apiserver-k8s-master01 / Running 18m
- kube-apiserver-k8s-master02 / Running 17m
- kube-apiserver-k8s-master03 / Running 16m
- kube-controller-manager-k8s-master01 / Running 19m
- kube-controller-manager-k8s-master02 / Running 19m
- kube-controller-manager-k8s-master03 / Running 19m
- kube-proxy-cl2zv / Running 11m
- kube-proxy-f9qc5 / Running 13h
- kube-proxy-hkcq5 / Running 11m
- kube-proxy-k55bg / Running 13h
- kube-proxy-kbg9c / Running 13h
- kube-scheduler-k8s-master01 / Running 13h
- kube-scheduler-k8s-master02 / Running 13h
- kube-scheduler-k8s-master03 / Running 13h
- [root@k8s-master01 k8s-ha-install]# kubectl get no
- NAME STATUS ROLES AGE VERSION
- k8s-master01 Ready master 13h v1.13.2
- k8s-master02 Ready master 13h v1.13.2
- k8s-master03 Ready master 13h v1.13.2
- k8s-node01 Ready <none> 11m v1.13.2
- k8s-node02 Ready <none> 11m v1.13.2
7. Install other components
Deploy metrics-server 0.3.1 (for k8s 1.8+)
- [root@k8s-master01 k8s-ha-install]# kubectl create -f metrics-server/
- clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
- clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
- rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
- apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
- serviceaccount/metrics-server created
- deployment.extensions/metrics-server created
- service/metrics-server created
- clusterrole.rbac.authorization.k8s.io/system:metrics-server created
- clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
- [root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-node-49dwr / Running 14h
- calico-node-9nmhb / Running 69m
- calico-node-k5nmt / Running 69m
- calico-node-kz2d4 / Running 14h
- calico-node-zwnmq / Running 14h
- coredns-89cc84847-dgxlw / Running 14h
- coredns-89cc84847-n77x6 / Running 14h
- etcd-k8s-master01 / Running 14h
- etcd-k8s-master02 / Running 14h
- etcd-k8s-master03 / Running 14h
- kube-apiserver-k8s-master01 / Running 6m23s
- kube-apiserver-k8s-master02 / Running 4m41s
- kube-apiserver-k8s-master03 / Running 4m34s
- kube-controller-manager-k8s-master01 / Running 78m
- kube-controller-manager-k8s-master02 / Running 78m
- kube-controller-manager-k8s-master03 / Running 77m
- kube-proxy-cl2zv / Running 69m
- kube-proxy-f9qc5 / Running 14h
- kube-proxy-hkcq5 / Running 69m
- kube-proxy-k55bg / Running 14h
- kube-proxy-kbg9c / Running 14h
- kube-scheduler-k8s-master01 / Running 14h
- kube-scheduler-k8s-master02 / Running 14h
- kube-scheduler-k8s-master03 / Running 14h
- metrics-server-7c5546c5c5-ms4nz / Running 25s
Check again after about 5 minutes
- [root@k8s-master01 k8s-ha-install]# kubectl top nodes
- NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
- k8s-master01 155m % 1716Mi %
- k8s-master02 337m % 1385Mi %
- k8s-master03 450m % 1180Mi %
- k8s-node01 153m % 582Mi %
- k8s-node02 142m % 601Mi %
- [root@k8s-master01 k8s-ha-install]# kubectl top pod -n kube-system
- NAME CPU(cores) MEMORY(bytes)
- calico-node-49dwr 15m 71Mi
- calico-node-9nmhb 47m 60Mi
- calico-node-k5nmt 46m 61Mi
- calico-node-kz2d4 18m 47Mi
- calico-node-zwnmq 16m 46Mi
- coredns-89cc84847-dgxlw 2m 13Mi
- coredns-89cc84847-n77x6 2m 13Mi
- etcd-k8s-master01 27m 126Mi
- etcd-k8s-master02 23m 117Mi
- etcd-k8s-master03 19m 112Mi
- kube-apiserver-k8s-master01 29m 410Mi
- kube-apiserver-k8s-master02 19m 343Mi
- kube-apiserver-k8s-master03 13m 343Mi
- kube-controller-manager-k8s-master01 23m 97Mi
- kube-controller-manager-k8s-master02 1m 16Mi
- kube-controller-manager-k8s-master03 1m 16Mi
- kube-proxy-cl2zv 18m 18Mi
- kube-proxy-f9qc5 8m 20Mi
- kube-proxy-hkcq5 30m 19Mi
- kube-proxy-k55bg 8m 20Mi
- kube-proxy-kbg9c 6m 20Mi
- kube-scheduler-k8s-master01 7m 20Mi
- kube-scheduler-k8s-master02 9m 19Mi
- kube-scheduler-k8s-master03 7m 19Mi
- metrics-server-7c5546c5c5-ms4nz 3m 14Mi
Deploy dashboard v1.10.0
- [root@k8s-master01 k8s-ha-install]# kubectl create -f dashboard/
- secret/kubernetes-dashboard-certs created
- serviceaccount/kubernetes-dashboard created
- role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
- rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
- deployment.apps/kubernetes-dashboard created
- service/kubernetes-dashboard created
Check the pod and svc
- [root@k8s-master01 k8s-ha-install]# kubectl get svc -n kube-system
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- calico-typha ClusterIP 10.102.221.48 <none> /TCP 15h
- kube-dns ClusterIP 10.96.0.10 <none> /UDP,/TCP 15h
- kubernetes-dashboard NodePort 10.105.18.61 <none> :/TCP 7s
- metrics-server ClusterIP 10.101.178.115 <none> /TCP 23m
- [root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system -l k8s-app=kubernetes-dashboard
- NAME READY STATUS RESTARTS AGE
- kubernetes-dashboard-845b47dbfc-j4r48 / Running 7m14s
Access: https://192.168.20.10:30000/#!/login
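The token retrieved below belongs to an `admin-user` ServiceAccount, which is assumed to be created by the repo's dashboard/ manifests. If it is missing, the conventional definition (bound to cluster-admin) is:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```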
Get the login token
- [root@k8s-master01 k8s-ha-install]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
- Name: admin-user-token-455bd
- Namespace: kube-system
- Labels: <none>
- Annotations: kubernetes.io/service-account.name: admin-user
- kubernetes.io/service-account.uid: e6effde6-1a0a-11e9-ae1a-000c298bf023
- Type: kubernetes.io/service-account-token
- Data
- ====
- namespace: bytes
- token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTQ1NWJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlNmVmZmRlNi0xYTBhLTExZTktYWUxYS0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Lw8hErqRoEC3e4VrEsAkFraytQI13NWj2osm-3lhaFDfgLtj4DIadq3ef8VgxpmyViPRzPh5fhq7EejuGH6V9cPsqEVlNBjWG0Wzfn0QuPP0xkxoW2V7Lne14Pu0-bTDE4P4UcW4MGPJAHSvckO9DTfYSzYghE2YeNKzDfhhA4DuWXaWGdNqzth_QjG_zbHsAB9kT3yVNM6bMVj945wZYSzXdJixSPBB46y92PAnfO0kAWsQc_zUtG8U1bTo7FdJ8BXgvNhytUvP7-nYanSIcpUoVXZRinQDGB-_aVRuoHHpiBOKmZlEqWOOaUrDf0DQJvDzt9TL-YHjimIstzv18A
- ca.crt: bytes
Prometheus deployment: https://www.cnblogs.com/dukuan/p/10177757.html