Environment Setup

IP            hostname     OS
10.11.66.44   k8s-master   CentOS 7.6
10.11.66.27   k8s-node1    CentOS 7.7
10.11.66.28   k8s-node2    CentOS 7.7
# Official guidance: at least 2 CPU cores and 2 GB of RAM per machine; also make sure the MAC address and product_uuid are unique on every node
[root@localhost ~]# hostnamectl --static set-hostname k8s-master # on 10.11.66.44
[root@localhost ~]# hostnamectl --static set-hostname k8s-node1 # on 10.11.66.27
[root@localhost ~]# hostnamectl --static set-hostname k8s-node2 # on 10.11.66.28
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-master ~]# sestatus # SELinux should be disabled on all three nodes
SELinux status: disabled
[root@k8s-master ~]# systemctl status firewalld.service # firewalld should be stopped and disabled on all three nodes
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@k8s-master ~]# cat >> /etc/hosts << EOF # run on all three nodes
> 10.11.66.44 k8s-master
> 10.11.66.27 k8s-node1
> 10.11.66.28 k8s-node2
> EOF
# ping each host to verify that /etc/hosts is configured correctly
[root@k8s-master ~]# ping k8s-master
PING k8s-master (10.11.66.44) 56(84) bytes of data.
64 bytes from k8s-master (10.11.66.44): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from k8s-master (10.11.66.44): icmp_seq=2 ttl=64 time=0.016 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.012/0.014/0.016/0.002 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (10.11.66.27) 56(84) bytes of data.
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=2 ttl=64 time=1.36 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.924/1.146/1.369/0.225 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (10.11.66.28) 56(84) bytes of data.
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=1 ttl=64 time=1.18 ms
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=2 ttl=64 time=1.30 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 1.180/1.240/1.300/0.060 ms
[root@k8s-master ~]# ip link # the MAC addresses of the three machines must not be identical
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:26:38:13 brd ff:ff:ff:ff:ff:ff
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid # the UUIDs of the three machines must not be identical
07B64D56-0D8B-6047-8E55-9ADE9F263813
# Switch to the Aliyun yum mirror (run on all three nodes)
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove
# Note: if the network is slow, the update step can be skipped
# Install dependency packages (run on all three nodes)
[root@k8s-master ~]# yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
# Flush iptables and set the default FORWARD policy to ACCEPT (run on all three nodes)
[root@k8s-master ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable the swap partition (run on all three nodes)
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
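# A quick check that swap really is off: swapon prints nothing and free shows 0 total swap
[root@k8s-master ~]# swapon -s
[root@k8s-master ~]# free -m | grep -i swap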
# Load kernel modules (run on all three nodes)
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs # LVS: layer-4 load balancing
modprobe -- ip_vs_rr # round-robin
modprobe -- ip_vs_wrr # weighted round-robin
modprobe -- ip_vs_sh # source-hashing scheduler
modprobe -- nf_conntrack_ipv4 # connection-tracking module
modprobe -- br_netfilter # let packets crossing the bridge be processed by iptables
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
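# Verify the modules actually loaded (a minimal check; on newer kernels nf_conntrack_ipv4 may show up as nf_conntrack):
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack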
# Set kernel parameters (run on all three nodes)
[root@k8s-master ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1 # pass bridged IPv4 traffic to iptables chains
net.bridge.bridge-nf-call-ip6tables = 1 # pass bridged IPv6 traffic to ip6tables chains
net.ipv4.ip_forward = 1 # enable kernel IP forwarding
net.ipv4.tcp_tw_recycle = 0 # disable fast recycling of TIME_WAIT sockets
vm.swappiness = 0 # avoid swapping as much as possible
vm.overcommit_memory = 1 # kernel memory-overcommit policy (see below)
vm.panic_on_oom = 0 # do not panic on out-of-memory
fs.inotify.max_user_watches = 89100 # max inotify watches per user
fs.file-max = 52706963 # system-wide max open files
fs.nr_open = 52706963 # per-process max open files
net.ipv6.conf.all.disable_ipv6 = 1 # disable IPv6
net.netfilter.nf_conntrack_max = 2310720 # max tracked connections

# overcommit_memory is a kernel memory-allocation policy; it takes one of three values: 0, 1, 2
- overcommit_memory=0: the kernel checks whether enough free memory is available; if so, the allocation succeeds, otherwise it fails and an error is returned to the process.
- overcommit_memory=1: the kernel allows allocating all physical memory, regardless of the current memory state.
- overcommit_memory=2: the kernel never overcommits; total allocations are capped at swap plus a configurable fraction of physical memory.

Deploying Docker

# Remove old Docker versions (run on all three nodes)
[root@k8s-master ~]# yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-selinux
No Match for argument: docker-engine-selinux
No Match for argument: docker-engine
No Packages marked for removal
# Install Docker dependency packages (run on all three nodes)
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Docker repo (Aliyun mirror) (run on all three nodes)
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
# Enable the edge/test repositories (optional)
[root@k8s-master ~]# yum-config-manager --enable docker-ce-edge
[root@k8s-master ~]# yum-config-manager --enable docker-ce-test
# Install Docker (run on all three nodes)
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
# Start Docker and enable it at boot (run on all three nodes)
[root@k8s-master ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Configure Docker (run on all three nodes)
[root@k8s-master ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service # without this, Docker sets the default policy of the iptables FORWARD chain to DROP on startup
# registry-mirrors points at the Aliyun registry mirror; exec-opts sets systemd as the cgroup
# driver (JSON allows no comments, so the explanations must stay out of the file itself)
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
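# Confirm that Docker picked up the config; the command below should print "systemd"
[root@k8s-master ~]# docker info --format '{{.CgroupDriver}}'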

Deploying kubeadm and kubelet

# Configure the Kubernetes yum repo (run on all three nodes)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install and start (run on all three nodes)
[root@k8s-master ~]# yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
[root@k8s-master ~]# systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Configure shell completion (run on all three nodes)
[root@k8s-master ~]# yum -y install bash-completion
# Set up kubectl and kubeadm completion; takes effect at the next login
[root@k8s-master ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s-master ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
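# To enable completion in the current shell without re-logging in:
[root@k8s-master ~]# source /etc/bash_completion.d/kubectl
[root@k8s-master ~]# source /etc/bash_completion.d/kubeadm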
# List the images this Kubernetes version requires (run on all three nodes)
[root@k8s-master ~]# kubeadm config images list --kubernetes-version v1.18.6
W0803 15:10:18.910528 25638 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
# Pull the required images (run on all three nodes)
[root@k8s-master ~]# vim get-k8s-images.sh
#!/bin/bash
# Script for quickly pulling the K8S Docker images
KUBE_VERSION=v1.18.6
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the images themselves are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
[root@k8s-master ~]# sh get-k8s-images.sh
# Alternatively:
[root@k8s-master ~]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-images.tar # export on the master node
[root@k8s-node1 ~]# docker image load -i k8s-images.tar # import on the worker nodes
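# A minimal sketch for shipping the tarball to the workers, assuming root SSH access from the master:
[root@k8s-master ~]# for n in k8s-node1 k8s-node2; do scp k8s-images.tar root@$n:/root/; done
# then run the docker image load command above on each node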

Initializing the Cluster

# Initialize the cluster with kubeadm init; the IP is this machine's own IP (run on k8s-master)
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.18.6 --apiserver-advertise-address=10.11.66.44 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
# --kubernetes-version=v1.18.6 : start the components from the matching images (the ones just downloaded)
# --pod-network-cidr=10.244.0.0/16 : Pod network segment; we will use flannel for Pod-to-Pod traffic, and flannel expects 10.244.0.0/16
# --service-cidr=10.1.0.0/16 : Service network segment
# On success, the output ends like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# Set up kubectl for the user who will use it (run on k8s-master)
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
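# Alternatively, for the root user only, point KUBECONFIG at the admin config (persist it in ~/.bash_profile):
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf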
# Use the command below to confirm that all Pods reach Running; this can take quite a while. The coredns Pods stay Pending until a network plugin is deployed.
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-cxtrj 0/1 Pending 0 8m14s <none> <none> <none> <none>
kube-system coredns-66bff467f8-znlm2 0/1 Pending 0 8m14s <none> <none> <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 8m23s 10.11.66.44 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 0 8m23s 10.11.66.44 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 0 8m23s 10.11.66.44 k8s-master <none> <none>
kube-system kube-proxy-vh964 1/1 Running 0 8m14s 10.11.66.44 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 0 8m23s 10.11.66.44 k8s-master <none> <none>
[root@k8s-master ~]# kubectl get pods -n kube-system # this view works too
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-cxtrj 0/1 Pending 0 3m52s
coredns-66bff467f8-znlm2 0/1 Pending 0 3m52s
etcd-k8s-master 1/1 Running 0 4m1s
kube-apiserver-k8s-master 1/1 Running 0 4m1s
kube-controller-manager-k8s-master 1/1 Running 0 4m1s
kube-proxy-vh964 1/1 Running 0 3m52s
kube-scheduler-k8s-master 1/1 Running 0 4m1s

Cluster Network Configuration (pick one)

flannel network

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: adjust for your cluster's init address, and check whether the quay.io image can be pulled; if not, use the mirror below (run on k8s-master)

Pod Network (using the Qiniu mirror)

# (run on k8s-master)
[root@k8s-master ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# rm -f kube-flannel.yml
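# Watch the flannel DaemonSet come up; once the pod network is ready, coredns moves from Pending to Running:
[root@k8s-master ~]# kubectl get pods -n kube-system | grep flannel
[root@k8s-master ~]# kubectl get nodes # nodes turn Ready once the CNI is up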

calico network

# (run on k8s-master)
[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
[root@k8s-master ~]# vim calico.yaml # set CALICO_IPV4POOL_CIDR to match --pod-network-cidr
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-578894d4cd-rchx6 1/1 Running 0 2m31s
calico-node-slgg9 1/1 Running 0 2m32s
coredns-66bff467f8-cxtrj 1/1 Running 0 55m
coredns-66bff467f8-znlm2 1/1 Running 0 55m
etcd-k8s-master 1/1 Running 0 55m
kube-apiserver-k8s-master 1/1 Running 0 55m
kube-controller-manager-k8s-master 1/1 Running 0 55m
kube-proxy-vh964 1/1 Running 0 55m
kube-scheduler-k8s-master 1/1 Running 0 55m

Adding Worker Nodes to the Kubernetes Cluster

# On k8s-node1 and k8s-node2, run the join command printed earlier by k8s-master
[root@k8s-node1 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
--discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
[root@k8s-node2 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
--discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# If the join command was not saved, regenerate it like this (run on k8s-master)
[root@k8s-master ~]# kubeadm token create --print-join-command --ttl=0
[root@k8s-master ~]# kubectl get nodes # check node status; it can take a while before they turn Ready
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 64m v1.18.6
k8s-node1 Ready <none> 3m37s v1.18.6
k8s-node2 Ready <none> 3m36s v1.18.6

Enabling IPVS Mode in kube-proxy

# (run on k8s-master)
[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
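# The new mode only takes effect after the kube-proxy Pods are recreated; delete them so the
# DaemonSet rebuilds them (which is why the kube-proxy Pods below are only seconds old), then
# verify that IPVS virtual servers exist:
[root@k8s-master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy
[root@k8s-master ~]# ipvsadm -Ln # should list virtual servers for the service network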
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-578894d4cd-rchx6 1/1 Running 0 14m
calico-node-kfc5p 1/1 Running 0 7m17s
calico-node-slgg9 1/1 Running 0 14m
calico-node-xcc92 1/1 Running 0 7m16s
coredns-66bff467f8-cxtrj 1/1 Running 0 67m
coredns-66bff467f8-znlm2 1/1 Running 0 67m
etcd-k8s-master 1/1 Running 0 67m
kube-apiserver-k8s-master 1/1 Running 0 67m
kube-controller-manager-k8s-master 1/1 Running 0 67m
kube-proxy-6fnpb 1/1 Running 0 16s
kube-proxy-tflld 1/1 Running 0 20s
kube-proxy-x47c8 1/1 Running 0 26s
kube-scheduler-k8s-master 1/1 Running 0 67m

Deploying kubernetes-dashboard

# Dashboard install manifest (run on k8s-master)
cat > recommended.yaml <<-EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

# The kubernetes-dashboard-certs Secret is created manually in the next section,
# so it stays commented out here.
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF

Creating Certificates

[root@k8s-master ~]# cd /etc/kubernetes/
[root@k8s-master kubernetes]# mkdir dashboard-certs
[root@k8s-master kubernetes]# cd dashboard-certs/
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard # create the namespace
namespace/kubernetes-dashboard created
[root@k8s-master dashboard-certs]# kubectl get namespace # list namespaces
NAME STATUS AGE
default Active 75m
kube-node-lease Active 75m
kube-public Active 75m
kube-system Active 75m
kubernetes-dashboard Active 9s
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048 # generate the private key
Generating RSA private key, 2048 bit long modulus
........................................+++
..........+++
e is 65537 (0x10001)
[root@k8s-master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert' # certificate signing request
[root@k8s-master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt # self-sign the certificate
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard # create the kubernetes-dashboard-certs secret
secret/kubernetes-dashboard-certs created
[root@k8s-master dashboard-certs]# kubectl get secret -A
NAMESPACE NAME TYPE DATA AGE
default default-token-j6m5t kubernetes.io/service-account-token 3 77m
kube-node-lease default-token-n5lxf kubernetes.io/service-account-token 3 77m
.........
.........
kubernetes-dashboard default-token-bjp2p kubernetes.io/service-account-token 3 2m33s
kubernetes-dashboard kubernetes-dashboard-certs Opaque 2 90s

Creating a Dashboard Admin

[root@k8s-master dashboard-certs]# cat > dashboard-admin.yaml <<-EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin.yaml
serviceaccount/dashboard-admin created

Granting the User Permissions

[root@k8s-master dashboard-certs]# cat > dashboard-admin-bind-cluster-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

Installing the Dashboard

[root@k8s-master dashboard-certs]# kubectl apply -f /root/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
[root@k8s-master dashboard-certs]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-578894d4cd-rchx6 1/1 Running 0 29m
kube-system calico-node-kfc5p 1/1 Running 0 22m
kube-system calico-node-slgg9 1/1 Running 0 29m
kube-system calico-node-xcc92 1/1 Running 0 22m
kube-system coredns-66bff467f8-cxtrj 1/1 Running 0 82m
kube-system coredns-66bff467f8-znlm2 1/1 Running 0 82m
kube-system etcd-k8s-master 1/1 Running 0 82m
kube-system kube-apiserver-k8s-master 1/1 Running 0 82m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 82m
kube-system kube-proxy-6fnpb 1/1 Running 0 15m
kube-system kube-proxy-tflld 1/1 Running 0 15m
kube-system kube-proxy-x47c8 1/1 Running 0 15m
kube-system kube-scheduler-k8s-master 1/1 Running 0 82m
kubernetes-dashboard kubernetes-dashboard-84b6b4578b-8t9bp 1/1 Running 0 75s
kubernetes-dashboard kubernetes-metrics-scraper-86f6785867-bqvpg 1/1 Running 0 75s
[root@k8s-master dashboard-certs]# kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.1.16.181 <none> 8000/TCP 2m6s k8s-app=kubernetes-metrics-scraper
kubernetes-dashboard NodePort 10.1.99.111 <none> 443:30000/TCP 2m6s k8s-app=kubernetes-dashboard

Viewing and Copying the User Token

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
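# A one-liner that prints only the token (assuming dashboard-admin has a single secret):
[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d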

Access Test

1. Point a browser at https://10.11.66.44:30000/
2. Choose "Token" and paste the token printed above

Logging in with a Kubeconfig File

Exporting the Credentials

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
ca.crt: 1025 bytes
[root@k8s-master ~]# vim .kube/config # append the dashboard-admin token under the kubernetes-admin user entry
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZmk1aXZZNkxXb0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1ETXdOelF5TlRoYUZ3MHlNVEE0TURNd056UXpNREJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpLazFvSnVPenQ3R3kzWnIKYjY5UkFqOXpzZ0hsNDdBOVVGOGIvQm1oYjVZalAwNTZuSG5FUVg4Qi85eDRaQmI0U2VLOTZkVVhIaTlFcEZuUQpDUlNKTFUwNnFRcW1GeUdXc1JJcEJPVDlUQmtrSW1XM25aRFZvKzI2dWFnVEp0V1BsOWtaWHZ5Z1hGUkJxeDNYCkxvTHIwZ2FrWE56dWd6TzBhMnFwQ1hQK0xmTE1Pa2gzUlJRZmQ4NUtaWWFXcWhNSStjNkZEVGtnTi84Z3BNKzYKWkE0a0UzT0x3OWFORkpvakl2amNIY1h5N0RNdGxCaFVRZVU4bEk2NHVRVk9zcDllTDR2WjBFRmo1djZFejNnbwp4ZFYrbzd6NWd3N3pzUENrdlJjc3RRcVhSRnV6emlpTVVQQTRDbzFhZkt3R1VZcmtBbmNzZnQxbVhGb2V3WDFPCjkwQ2xod0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDd0lKc0JreEV4UXBpeW8zTkNmQmkrL3hOQ0U3YnpNLzhmRAp4Q0VwQlZ0MWR1NkU1ZFdJQy82a3B0OVZzNHhHc1gvVVA4aUNaejVHZmtxT1JmTklDM0dZUFZJWlhNTUN2RHp0CnFubkk0Z1p2YXhyMnNoSDNpVkw2Rzd0Y2hCZmNJV0J4K1lnTEt3ZW9iTDUvaUorbXJmT2xsNXV4eit6cGUveHIKTjArWWVsTXJBaS9PeWpJR1N0WjVOblRzcnVILzZVRXRFZUwwRE9WQ0FrR3JQYnlkQVdNQUxaeWlQMTU4bCticQpNRkFkMHc2ZG82R3R2NlRCMGVaaXdzT1RHVzN6Ti85YlZWS2NFcGIzaE1MVVk0YVhvNC9laXl6TnF6MzlDdEpBCklPb3djOEFuakdGRDYraUdKbWU0VVdXcUxzMDI5US82eXF6WWFsUmFqWkwyL2FkNHRuaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBektrMW9KdU96dDdHeTNacmI2OVJBajl6c2dIbDQ3QTlVRjhiL0JtaGI1WWpQMDU2Cm5IbkVRWDhCLzl4NFpCYjRTZUs5NmRVWEhpOUVwRm5RQ1JTSkxVMDZxUXFtRnlHV3NSSXBCT1Q5VEJra0ltVzMKblpEVm8rMjZ1YWdUSnRXUGw5a1pYdnlnWEZSQnF4M1hMb0xyMGdha1hOenVnek8wYTJxcENYUCtMZkxNT2toMwpSUlFmZDg1S1pZYVdxaE1JK2M2RkRUa2dOLzhncE0rNlpBNGtFM09MdzlhTkZKb2pJdmpjSGNYeTdETXRsQmhVClFlVThsSTY0dVFWT3NwOWVMNHZaMEVGajV2NkV6M2dveGRWK283ejVndzd6c1BDa3ZSY3N0UXFYUkZ1enppaU0KVVBBNENvMWFmS3dHVVlya0FuY3NmdDFtWEZvZXdYMU85MENsaHdJREFRQUJBb0lCQVFDMHlLZXhkcGZnanhObAp1UFpRVXJvcFZTbDZ6WWhuNTA5U0JxR3V3R2xGSzRkNUxYYkxjQmgzanB5U2lncml4eE9PR0xlUHJZYmRSLzNICmUvcHpldXR0MC9HRVR2N0dJZ3A5NGIvUUxnSzl6TnVKY3ZhT1Bka3FGQjVFVDM2VGFFU09hdHlwZGxpbEZseG4KcmxWZEpaTHdGS1B0ejg3MG9LQzMzaUR4VTcvc2p4MWUwc3FFQ1NMdW5aY2FiaWJtYUpjT2RXYk0yM3JBdEdYQQp0YlFIYVZneHJldEZFREx0Ym9IMFB3Qit3eFNHdFh4WUFwSXR0RkowNWM3QWc1OVhWSFc2akdiYWd2VVlPcDFQCmdGVndSbjdwT1daNlNHTDBqdXgvbTl2UzZoakZ1aVVhVXhkM2ZOSVNKbUljRjZ2MTlmVTQwV3kyYXBCK1B0bHIKOU5zM2RpSGhBb0dCQU01ZW9QcFNGNmp0U1V0NTlERktZdUJUUG9wQWxiZFlxM0QvWnVBQlpkaFdJWXNoS1JvRwpUSGhjaTFlKzBPbmZlZ2pvMzhGM0syaHVJRVdrNEFhQ25QaWVyRWc3Yk1mVjNkMjYyNHBFeGRBN3J5Y1JvaWJuClJlTVA5K1BvVy9IaXJVQW4wUFdyRFUydEpLekxwNlhCcnozeE02VmFiWGxFcnNnZ0pybHN1cEwzQW9HQkFQM2gKWW5QLzVWWHBWeUtvMkhuZEEwWkwwK0pscFhNeFY4NDA4ZE1QMXE1WkVQbkZ2aVNXVjlLdFJVa3lCR2ZDUW1WeApEWkp3KzBRcmZUbXV5elZ6aUFZTFJJbHJKZ285QmN0NmRGUmpFaUo4NkVIeGdlV1J5UkhmaUZqalhqSXlCVGYyCmFxOGM2UlBTZmEyTEh1SVBlZEZVY2lrN0Z5WDg4dzJabkpBcjJFM3hBb0dBWnBOVWtuZkJlTjdRNHFvd2ZWdUwKQUJPQWIzbWdzU3hxc3RUUURxSERQSis3Tm90NkFZeUY4QUdYNVRwY1h4TU1kbWRCNk1qU0U2dEJjVHg5ZWQ3cwpKUXZCZUhuSkhSOHBrMit3ZGU2dklFeTZSOElWQmg5SWRvOVdXTHNERUp6cUhveHI2ZUJtMFdneFpZNG91MVFsClJiV2hSUnhJYzlGMnl0Um9TeHhITklzQ2dZQmRxSFQ2bUMrUmx3aG5KK1RjYUJWYUxJVVpJeWg3SzN2Wi9ad3MKb2M0ditYbVN1MGxmRS91SUpCWElYK1JTSnM3NXYxQWpjdnl1OUdBNUZHdXc1MU1KNzhRejhjeFJ3SnRQcW5nWgozWWFHSkpCR0s0TWhIcndQbE9nbTZwSUljSDJPWEtDVXcxU1UxSFU2dlhVQ0xuVmhMUWNFZ09FVVNaR2N0Y3VWClFDZUc4UUtCZ0UrMkFrZTR3QlRnZDhuZFhlTHRPcHBRZ21IUVViZUN1elZyRzFEVEJxam0rcVpnSzhKR2RUdXIKUDhybjY3TGNFSFpyRlJVODEwQXJUNU92QXRGOTlnU0dnKzd1Q2x5bzJtVGtxZWRIUTZ6RVZld0JUQlFQUEx1VAp6UGRYbjl5cTZSaVZPajU1QUROdmFuNXdQNUE3clRSTGZjNXZqQWRmV3hmYUZqYVIxNE85Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
[root@k8s-master ~]# cp .kube/config /usr/local/k8s-dashboard.kubeconfig
[root@k8s-master ~]# cd /usr/local/
[root@k8s-master local]# ll
total 8
drwxr-xr-x. 2 root root 6 Apr 11 2018 bin
drwxr-xr-x. 2 root root 6 Apr 11 2018 etc
drwxr-xr-x. 2 root root 6 Apr 11 2018 games
drwxr-xr-x. 2 root root 6 Apr 11 2018 include
-rw------- 1 root root 6425 Aug 3 17:48 k8s-dashboard.kubeconfig
drwxr-xr-x. 2 root root 6 Apr 11 2018 lib
drwxr-xr-x. 2 root root 6 Apr 11 2018 lib64
drwxr-xr-x. 2 root root 6 Apr 11 2018 libexec
drwxr-xr-x. 2 root root 6 Apr 11 2018 sbin
drwxr-xr-x. 5 root root 49 Mar 30 2019 share
drwxr-xr-x. 2 root root 6 Apr 11 2018 src
[root@k8s-master local]# sz k8s-dashboard.kubeconfig # download the file to the workstation with lrzsz
# When logging in, choose the kubeconfig-file authentication method and select this file

Installing the metrics-server Add-on

Link: https://pan.baidu.com/s/1QRndSG88L5w-_DHfMxrd_g
Extraction code: 62dj
[root@k8s-master ~]# unzip metrics-server-master.zip
[root@k8s-master ~]# cd metrics-server-master/deploy/1.8+/
[root@k8s-master 1.8+]# ll
total 28
-rw-r--r-- 1 root root 397 Nov 12 2019 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root 303 Nov 12 2019 auth-delegator.yaml
-rw-r--r-- 1 root root 324 Nov 12 2019 auth-reader.yaml
-rw-r--r-- 1 root root 298 Nov 12 2019 metrics-apiservice.yaml
-rw-r--r-- 1 root root 1091 Nov 12 2019 metrics-server-deployment.yaml
-rw-r--r-- 1 root root 297 Nov 12 2019 metrics-server-service.yaml
-rw-r--r-- 1 root root 517 Nov 12 2019 resource-reader.yaml

Modifying the Deployment Manifest

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6 # changed image source (the default k8s.gcr.io is unreachable)
        args: # added the following arguments
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
# Apply the manifests
[root@k8s-master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@k8s-master 1.8+]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 887m 22% 1701Mi 59%
k8s-node1 158m 7% 954Mi 35%
k8s-node2 137m 6% 894Mi 32%
# Output like the following means metrics-server is not ready yet; wait 1-3 minutes and retry
[root@k8s-master 1.8+]# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]# kubectl top nodes
error: metrics not available yet
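# Once node metrics appear, per-Pod metrics should work as well:
[root@k8s-master 1.8+]# kubectl top pods -n kube-system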
