Building a single-master, multi-node Kubernetes cluster with kubeadm (3)
1. Preparing the lab environment
K8s cluster role | IP | Hostname | Installed components | Spec
Control node | 192.168.1.10 | master | apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico | 6 cores / 4 GB
Worker node | 192.168.1.11 | pod1 | kubelet-1.20.7, kube-proxy, docker, calico-3.18.0, coredns | 2 cores / 2 GB
Worker node | 192.168.1.12 | pod2 | kubelet-1.20.7, kube-proxy, docker, calico-3.18.0, coredns | 2 cores / 2 GB
Base environment setup (up to and including the Docker installation): https://www.cnblogs.com/yangmeichong/p/16452316.html
1.1 Install the packages needed to initialize Kubernetes
yum install -y kubelet-1.20.7 kubeadm-1.20.7 kubectl-1.20.7
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
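If yum cannot find these packages, the Kubernetes yum repository is probably not configured yet. A sketch of /etc/yum.repos.d/kubernetes.repo using the Aliyun mirror (an assumption; the base-environment post linked above may already set up a different mirror):

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0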
1.2 Initialize the cluster with kubeadm
On the master node:
kubeadm init --kubernetes-version=1.20.7 --apiserver-advertise-address=192.168.1.10 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification
Note: --image-repository registry.aliyuncs.com/google_containers points kubeadm at a mirror registry; by default images are pulled from k8s.gcr.io, which is not reachable from here.
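The same init settings can also be expressed as a kubeadm config file. A sketch (not taken from this post), using the kubeadm.k8s.io/v1beta2 API that ships with 1.20:

# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.7
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
# apply with: kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification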
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm token create --print-join-command # print the command used to join worker nodes
On the worker nodes: join pod1 and pod2 to the cluster
kubeadm join 192.168.1.10:6443 --token mzqg4u.s149ey40efoszof2 --discovery-token-ca-cert-hash sha256:3852939697d68b0cfddb28f6d64638ba8d92a0ee8c83fa88ecff0a4036d68679
Check the cluster nodes; Calico is not installed yet, so every node is in NotReady state.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 10h v1.20.7
pod1 NotReady <none> 10h v1.20.7
pod2 NotReady <none> 10h v1.20.7
1.3 Install the Calico network plugin
Note: the manifest can be downloaded online from: https://docs.projectcalico.org/manifests/calico.yaml
https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml
The Calico release archive also contains a calico.yaml file: https://github.com/projectcalico/calico/releases
Kubernetes versions supported by each Calico release: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
1.3.1 Download the Calico images
calico/pod2daemon-flexvol v3.18.0 2a22066e9588 16 months ago 21.7MB
calico/node v3.18.0 5a7c4970fbc2 16 months ago 172MB
calico/cni v3.18.0 727de170e4ce 16 months ago 131MB
calico/kube-controllers
1.3.2 Install: kubectl apply -f calico.yaml
If you downloaded calico.yaml yourself, make sure its version matches your Kubernetes version, otherwise the apply fails with errors such as:
error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
In that case download a matching version, e.g. https://docs.projectcalico.org/v3.18/manifests/calico.yaml, and first remove the version that was already applied:
kubectl delete -f calico.yaml
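After re-applying, a quick sanity check (a sketch; the DaemonSet name calico-node and the label k8s-app=calico-node are the defaults used by the upstream manifest):

kubectl -n kube-system get daemonset calico-node -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'   # image actually running
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide                                                   # expect one calico-node pod per node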
1.3.3 Check cluster status
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-28k6k 1/1 Running 0 10h
calico-node-84rc2 1/1 Running 0 10h
calico-node-cn45z 1/1 Running 0 10h
calico-node-ng692 1/1 Running 0 10h
coredns-7f89b7bc75-ff9l4 1/1 Running 0 10h
coredns-7f89b7bc75-pg6wj 1/1 Running 0 10h
etcd-master 1/1 Running 0 10h
kube-apiserver-master 1/1 Running 0 10h
kube-controller-manager-master 1/1 Running 0 10h
kube-proxy-drsd8 1/1 Running 0 10h
kube-proxy-fcflq 1/1 Running 0 10h
kube-proxy-mftgk 1/1 Running 0 10h
kube-scheduler-master 1/1 Running 0 10h
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 10h v1.20.7
pod1 Ready worker 10h v1.20.7
pod2 Ready worker 10h v1.20.7
1.3.4 Verify that pods can reach the network
docker pull busybox
[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (220.181.38.251): 56 data bytes
64 bytes from 220.181.38.251: seq=0 ttl=127 time=30.079 ms
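While still inside the busybox shell, in-cluster DNS is also worth a check (a sketch; kubernetes.default.svc.cluster.local exists in any cluster using the default cluster.local domain):

/ # nslookup kubernetes.default.svc.cluster.local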
1.4 Deploy a test Tomcat on the cluster nodes
Pull the Tomcat image on pod1 and pod2: docker pull tomcat
# tomcat.yaml
apiVersion: v1                      # the Pod resource lives in the core v1 API group
kind: Pod                           # the resource being created is a Pod
metadata:                           # metadata
  name: demo-pod                    # Pod name
  namespace: default                # namespace the Pod belongs to
  labels:
    app: myapp                      # labels carried by the Pod
    env: dev
spec:
  containers:                       # containers is a list of objects; multiple name entries are allowed
  - name: tomcat-pod-java           # container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine   # image used by the container
    imagePullPolicy: IfNotPresent

# tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
[root@master ~]# kubectl apply -f tomcat.yaml
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod 1/1 Running 0 10s
[root@master ~]# kubectl apply -f tomcat-service.yaml
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 50s
tomcat NodePort 10.111.116.112 <none> 8080:30080/TCP 50s
Open http://192.168.1.11:30080 in a browser and the Tomcat welcome page is served.
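The NodePort can also be checked without a browser. A sketch; kube-proxy opens port 30080 on every node, so both workers should answer:

curl -I http://192.168.1.11:30080   # node running the pod
curl -I http://192.168.1.12:30080   # other worker, traffic forwarded by kube-proxy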
2. Install the Kubernetes dashboard UI
2.1 Install the dashboard
Dashboard releases: https://github.com/kubernetes/dashboard/releases
Find the images with docker search metrics-scraper and docker search dashboard,
which return kubernetesui/metrics-scraper and kubernetesui/dashboard.
2.1.1 Load the images on pod1 and pod2:
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
2.2.2 Install the dashboard components
The usual manifest location (pick the one matching your version): https://github.com/kubernetes/dashboard/blob/v2.0.0/aio/deploy/recommended.yaml
Run on the master node: kubectl apply -f kubernetes-dashboard.yaml
Check the dashboard status:
[root@master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7445d59dfd-5fl8k 1/1 Running 0 10m
kubernetes-dashboard-54f5b6dc4b-k9crw 1/1 Running 0 10m
# Running means the installation succeeded
# check the Service that fronts the dashboard
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.104.242.228 <none> 8000/TCP 10m
kubernetes-dashboard ClusterIP 10.109.75.153 <none> 443:31480/TCP 10m
# change the Service type to NodePort
[root@master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and exit.
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.104.242.228 <none> 8000/TCP 12m
kubernetes-dashboard NodePort 10.109.75.153 <none> 443:31480/TCP 12m
# the dashboard can now be reached on any worker node: https://192.168.1.11:31480
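Instead of kubectl edit, the type can also be changed non-interactively (a sketch):

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'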
2.3 Log in to the dashboard UI
https://192.168.1.11:31480
2.3.1 Access the dashboard with a token
Create an administrator binding; the resulting token can view every namespace and manage all resource objects.
[root@master ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
# list the secrets in the kubernetes-dashboard namespace
[root@master ~]# kubectl get secret -n kubernetes-dashboard
NAME TYPE DATA AGE
default-token-5fr9l kubernetes.io/service-account-token 3 26m
kubernetes-dashboard-certs Opaque 0 26m
kubernetes-dashboard-csrf Opaque 1 26m
kubernetes-dashboard-key-holder Opaque 2 26m
kubernetes-dashboard-token-kczz2 kubernetes.io/service-account-token 3 26m
# the token we need is in the secret kubernetes-dashboard-token-kczz2
[root@master ~]# kubectl describe secret kubernetes-dashboard-token-kczz2 -n kubernetes-dashboard
Name: kubernetes-dashboard-token-kczz2
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: fee4abed-549e-4a10-9d58-f52b02165d2a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
# copy the token value below and paste it into the browser login form
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlRBWC1CVVlkM2tqTG9TU3NvdjJMV3NVaVNUSjctRC15NXdjX3ZNVlJVY3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1rY3p6MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZlZTRhYmVkLTU0OWUtNGExMC05ZDU4LWY1MmIwMjE2NWQyYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.U6te78cEYuCN_2LvMjM_GBeWzCV3qZpYn2rfPZa9fOZz-dSO_0ZrWuc49mpRWQSU-2a2_zOJyl97-YxdEETivdqZ427dDKdbclOjl5j0zWvBXClVJ6N_nbMghLTjjMH08GhVCxo1iXKcLraCtq2-3kVuKRjzSQ_OWjLgZcBZuaZQPyDuHosSfmtr53oIBxs8uiZZtAmayuvHk7ZfevWN2R8enC8CSv1EgbbiZ17Mmu2dPpZDCcMmk3zw-OAgZCDX0wDD-wwwQOUShNeWOdUBmwgS91-VbjCYsVaaBCv6iXmd5VMoDWsI90qqa69jnOCB9QZUcsZBa-Kml4UIuYiMZw
Log in with the token and the dashboard UI opens.
2.3.2 Access the dashboard with a kubeconfig file
[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.1.10:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf
Cluster "kubernetes" set.
[root@master pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EY3hNekUxTVRReU5sb1hEVE15TURjeE1ERTFNVFF5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUEFUCjQ4cnY4QzViOU1Za1dXc1FUelg3NTY5SUE1UnhvdVlsOU5ETUZWRUZ3MTVpNXhmMXhzK3lqN3FCVHBnbmcxK3oKUFBoWGhoOGx5M08xRU55VDlLM04xeCtlU2RZZ2dEUms0RDZFSktxMVUxWUkzcTRyYkxQZ25GRk94aloweFdLcwovejZKdnBzcVRrUWpTczMzUzNMVm1tOEJLRVd1cjFwamUra2RIV2IrQ3BTOGhvTkhSU05TaXJRYTFjQVhiQThVCnE4OXdCenFCNjd6Q3VpUmNQMTR3VnVLc2dWTE5BQitXT2llS2tVUjhoMVJjc2xlY1RKMEo2eDJhY1RBV0pyRzAKNFF4VGRFRDNwUUY1ekRaaEN3aEdSOVU1a2JySTlzR1FKc1V4T2dlVjE0QkFTN2s2VlU2QVlaTHpvcE9KdHFtRQpRTGNrR0ErNStNK3Z0UllyQ3BVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGVFFqcmlScFZNaFRzd2YyNkVnM1Fib1F5NU5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDMW85L0VnNFltUW5hSTUzL1UxbDdTL2VnNzBQVDlNTEtpd3BpK1hhVlNuM0JNRFJPcgpITURzNHhBWElGazRidzJiUmJoUXlsT0NQNzZPdDBPeEhmYTlTWTgvdWhUQ0x4UTkyYnMvcXNYTm96NGJxdCtBCkV6bm5SZGtKSUFpb3dUdFJoYUtTYW1qUW9JWndoNVA3RjdSTXJYcVNhNEp1SzA0bG1KM1c5UFBzNmpPM0x3OUUKZ05qRkpZWitmSWFBUXllVWVTTnBVU0pwNnNOem5aTlJtTG1UNUVlcWVDSWlHRit6eFlrazdtMnpKbVZ5Y2MrUgpiL25LWEFjU3FTa29NbE1uaE5OdzBiU3hPb1luUHRUNXBJNlRaaXlZbHRoRTZocE44TGZjL016aVczckxGWEhhCmpBbDZJa0x3ZllMUm45ekU4Nk5EaVdBcEx2TGJyQXpJd3l0VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.10:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Create the credentials
# creating the credentials requires the token stored in the kubernetes-dashboard-token-kczz2 secret from above
[root@master pki]# DEF_NS_ADMIN_TOKEN=$(kubectl get secret kubernetes-dashboard-token-kczz2 -n kubernetes-dashboard -o jsonpath={.data.token}|base64 -d)
[root@master pki]# kubectl config set-credentials dashboard-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
User "dashboard-admin" set.
[root@master pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EY3hNekUxTVRReU5sb1hEVE15TURjeE1ERTFNVFF5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUEFUCjQ4cnY4QzViOU1Za1dXc1FUelg3NTY5SUE1UnhvdVlsOU5ETUZWRUZ3MTVpNXhmMXhzK3lqN3FCVHBnbmcxK3oKUFBoWGhoOGx5M08xRU55VDlLM04xeCtlU2RZZ2dEUms0RDZFSktxMVUxWUkzcTRyYkxQZ25GRk94aloweFdLcwovejZKdnBzcVRrUWpTczMzUzNMVm1tOEJLRVd1cjFwamUra2RIV2IrQ3BTOGhvTkhSU05TaXJRYTFjQVhiQThVCnE4OXdCenFCNjd6Q3VpUmNQMTR3VnVLc2dWTE5BQitXT2llS2tVUjhoMVJjc2xlY1RKMEo2eDJhY1RBV0pyRzAKNFF4VGRFRDNwUUY1ekRaaEN3aEdSOVU1a2JySTlzR1FKc1V4T2dlVjE0QkFTN2s2VlU2QVlaTHpvcE9KdHFtRQpRTGNrR0ErNStNK3Z0UllyQ3BVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGVFFqcmlScFZNaFRzd2YyNkVnM1Fib1F5NU5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDMW85L0VnNFltUW5hSTUzL1UxbDdTL2VnNzBQVDlNTEtpd3BpK1hhVlNuM0JNRFJPcgpITURzNHhBWElGazRidzJiUmJoUXlsT0NQNzZPdDBPeEhmYTlTWTgvdWhUQ0x4UTkyYnMvcXNYTm96NGJxdCtBCkV6bm5SZGtKSUFpb3dUdFJoYUtTYW1qUW9JWndoNVA3RjdSTXJYcVNhNEp1SzA0bG1KM1c5UFBzNmpPM0x3OUUKZ05qRkpZWitmSWFBUXllVWVTTnBVU0pwNnNOem5aTlJtTG1UNUVlcWVDSWlHRit6eFlrazdtMnpKbVZ5Y2MrUgpiL25LWEFjU3FTa29NbE1uaE5OdzBiU3hPb1luUHRUNXBJNlRaaXlZbHRoRTZocE44TGZjL016aVczckxGWEhhCmpBbDZJa0x3ZllMUm45ekU4Nk5EaVdBcEx2TGJyQXpJd3l0VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.10:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: dashboard-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlRBWC1CVVlkM2tqTG9TU3NvdjJMV3NVaVNUSjctRC15NXdjX3ZNVlJVY3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1rY3p6MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZlZTRhYmVkLTU0OWUtNGExMC05ZDU4LWY1MmIwMjE2NWQyYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.U6te78cEYuCN_2LvMjM_GBeWzCV3qZpYn2rfPZa9fOZz-dSO_0ZrWuc49mpRWQSU-2a2_zOJyl97-YxdEETivdqZ427dDKdbclOjl5j0zWvBXClVJ6N_nbMghLTjjMH08GhVCxo1iXKcLraCtq2-3kVuKRjzSQ_OWjLgZcBZuaZQPyDuHosSfmtr53oIBxs8uiZZtAmayuvHk7ZfevWN2R8enC8CSv1EgbbiZ17Mmu2dPpZDCcMmk3zw-OAgZCDX0wDD-wwwQOUShNeWOdUBmwgS91-VbjCYsVaaBCv6iXmd5VMoDWsI90qqa69jnOCB9QZUcsZBa-Kml4UIuYiMZw
Create the context
[root@master pki]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
Context "dashboard-admin@kubernetes" created.
[root@master pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EY3hNekUxTVRReU5sb1hEVE15TURjeE1ERTFNVFF5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUEFUCjQ4cnY4QzViOU1Za1dXc1FUelg3NTY5SUE1UnhvdVlsOU5ETUZWRUZ3MTVpNXhmMXhzK3lqN3FCVHBnbmcxK3oKUFBoWGhoOGx5M08xRU55VDlLM04xeCtlU2RZZ2dEUms0RDZFSktxMVUxWUkzcTRyYkxQZ25GRk94aloweFdLcwovejZKdnBzcVRrUWpTczMzUzNMVm1tOEJLRVd1cjFwamUra2RIV2IrQ3BTOGhvTkhSU05TaXJRYTFjQVhiQThVCnE4OXdCenFCNjd6Q3VpUmNQMTR3VnVLc2dWTE5BQitXT2llS2tVUjhoMVJjc2xlY1RKMEo2eDJhY1RBV0pyRzAKNFF4VGRFRDNwUUY1ekRaaEN3aEdSOVU1a2JySTlzR1FKc1V4T2dlVjE0QkFTN2s2VlU2QVlaTHpvcE9KdHFtRQpRTGNrR0ErNStNK3Z0UllyQ3BVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGVFFqcmlScFZNaFRzd2YyNkVnM1Fib1F5NU5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDMW85L0VnNFltUW5hSTUzL1UxbDdTL2VnNzBQVDlNTEtpd3BpK1hhVlNuM0JNRFJPcgpITURzNHhBWElGazRidzJiUmJoUXlsT0NQNzZPdDBPeEhmYTlTWTgvdWhUQ0x4UTkyYnMvcXNYTm96NGJxdCtBCkV6bm5SZGtKSUFpb3dUdFJoYUtTYW1qUW9JWndoNVA3RjdSTXJYcVNhNEp1SzA0bG1KM1c5UFBzNmpPM0x3OUUKZ05qRkpZWitmSWFBUXllVWVTTnBVU0pwNnNOem5aTlJtTG1UNUVlcWVDSWlHRit6eFlrazdtMnpKbVZ5Y2MrUgpiL25LWEFjU3FTa29NbE1uaE5OdzBiU3hPb1luUHRUNXBJNlRaaXlZbHRoRTZocE44TGZjL016aVczckxGWEhhCmpBbDZJa0x3ZllMUm45ekU4Nk5EaVdBcEx2TGJyQXpJd3l0VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.10:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: dashboard-admin
name: dashboard-admin@kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: dashboard-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlRBWC1CVVlkM2tqTG9TU3NvdjJMV3NVaVNUSjctRC15NXdjX3ZNVlJVY3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1rY3p6MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZlZTRhYmVkLTU0OWUtNGExMC05ZDU4LWY1MmIwMjE2NWQyYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.U6te78cEYuCN_2LvMjM_GBeWzCV3qZpYn2rfPZa9fOZz-dSO_0ZrWuc49mpRWQSU-2a2_zOJyl97-YxdEETivdqZ427dDKdbclOjl5j0zWvBXClVJ6N_nbMghLTjjMH08GhVCxo1iXKcLraCtq2-3kVuKRjzSQ_OWjLgZcBZuaZQPyDuHosSfmtr53oIBxs8uiZZtAmayuvHk7ZfevWN2R8enC8CSv1EgbbiZ17Mmu2dPpZDCcMmk3zw-OAgZCDX0wDD-wwwQOUShNeWOdUBmwgS91-VbjCYsVaaBCv6iXmd5VMoDWsI90qqa69jnOCB9QZUcsZBa-Kml4UIuYiMZw
Switch current-context to dashboard-admin@kubernetes
[root@master pki]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
Switched to context "dashboard-admin@kubernetes".
[root@master pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EY3hNekUxTVRReU5sb1hEVE15TURjeE1ERTFNVFF5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUEFUCjQ4cnY4QzViOU1Za1dXc1FUelg3NTY5SUE1UnhvdVlsOU5ETUZWRUZ3MTVpNXhmMXhzK3lqN3FCVHBnbmcxK3oKUFBoWGhoOGx5M08xRU55VDlLM04xeCtlU2RZZ2dEUms0RDZFSktxMVUxWUkzcTRyYkxQZ25GRk94aloweFdLcwovejZKdnBzcVRrUWpTczMzUzNMVm1tOEJLRVd1cjFwamUra2RIV2IrQ3BTOGhvTkhSU05TaXJRYTFjQVhiQThVCnE4OXdCenFCNjd6Q3VpUmNQMTR3VnVLc2dWTE5BQitXT2llS2tVUjhoMVJjc2xlY1RKMEo2eDJhY1RBV0pyRzAKNFF4VGRFRDNwUUY1ekRaaEN3aEdSOVU1a2JySTlzR1FKc1V4T2dlVjE0QkFTN2s2VlU2QVlaTHpvcE9KdHFtRQpRTGNrR0ErNStNK3Z0UllyQ3BVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGVFFqcmlScFZNaFRzd2YyNkVnM1Fib1F5NU5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDMW85L0VnNFltUW5hSTUzL1UxbDdTL2VnNzBQVDlNTEtpd3BpK1hhVlNuM0JNRFJPcgpITURzNHhBWElGazRidzJiUmJoUXlsT0NQNzZPdDBPeEhmYTlTWTgvdWhUQ0x4UTkyYnMvcXNYTm96NGJxdCtBCkV6bm5SZGtKSUFpb3dUdFJoYUtTYW1qUW9JWndoNVA3RjdSTXJYcVNhNEp1SzA0bG1KM1c5UFBzNmpPM0x3OUUKZ05qRkpZWitmSWFBUXllVWVTTnBVU0pwNnNOem5aTlJtTG1UNUVlcWVDSWlHRit6eFlrazdtMnpKbVZ5Y2MrUgpiL25LWEFjU3FTa29NbE1uaE5OdzBiU3hPb1luUHRUNXBJNlRaaXlZbHRoRTZocE44TGZjL016aVczckxGWEhhCmpBbDZJa0x3ZllMUm45ekU4Nk5EaVdBcEx2TGJyQXpJd3l0VAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.10:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: dashboard-admin
name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlRBWC1CVVlkM2tqTG9TU3NvdjJMV3NVaVNUSjctRC15NXdjX3ZNVlJVY3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1rY3p6MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZlZTRhYmVkLTU0OWUtNGExMC05ZDU4LWY1MmIwMjE2NWQyYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.U6te78cEYuCN_2LvMjM_GBeWzCV3qZpYn2rfPZa9fOZz-dSO_0ZrWuc49mpRWQSU-2a2_zOJyl97-YxdEETivdqZ427dDKdbclOjl5j0zWvBXClVJ6N_nbMghLTjjMH08GhVCxo1iXKcLraCtq2-3kVuKRjzSQ_OWjLgZcBZuaZQPyDuHosSfmtr53oIBxs8uiZZtAmayuvHk7ZfevWN2R8enC8CSv1EgbbiZ17Mmu2dPpZDCcMmk3zw-OAgZCDX0wDD-wwwQOUShNeWOdUBmwgS91-VbjCYsVaaBCv6iXmd5VMoDWsI90qqa69jnOCB9QZUcsZBa-Kml4UIuYiMZw
Copy the generated dashboard-admin.conf to your local desktop, open https://192.168.1.11:31480 in the browser again, and import the dashboard-admin.conf file to log in.
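The kubeconfig can be verified on the master before copying it out (a sketch; the token is bound to cluster-admin, so listing pods should succeed):

kubectl get pods -A --kubeconfig=/root/dashboard-admin.conf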
2.4 Create a workload through the dashboard
2.4.1 Load the nginx image on pod1 and pod2
2.4.2 In the dashboard, click + in the upper-right corner and switch to the "Create from form" tab
Form values:
App name: nginx
Container image: nginx
Number of pods: 2
Service: External (exposed outside the cluster)
Port: 80 (the external port)
Target port: 80
Note: the form has no field for choosing the NodePort; one is allocated automatically from the 30000+ range. On port, targetPort and nodePort (see the Service sketch after this list):
nodePort is the port through which traffic from outside the cluster reaches a Service, e.g. customers hitting nginx or apache.
port is used for communication between pods inside the cluster, e.g. nginx talking to mysql, which customers never need to reach; it is the Service's own port.
targetPort is the destination port, i.e. the port the Pod's container actually listens on.
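A minimal Service sketch showing how the three fields line up (the name and the 30777 value are illustrative only; the dashboard form allocates the nodePort automatically):

apiVersion: v1
kind: Service
metadata:
  name: nginx-example   # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80        # the Service's own port, used inside the cluster
    targetPort: 80  # the port the nginx container listens on
    nodePort: 30777 # the port opened on every node for external traffic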
The nginx workload now appears on the main page. Click Services under Discovery and Load Balancing in the left menu, then open http://192.168.1.12:30777 in a browser to confirm nginx is reachable.
3. Install the metrics-server component
metrics-server is a cluster-wide aggregator of resource usage data. It only exposes metrics rather than storing them, focusing on the resource metrics API: CPU, file descriptors, memory, request latency and so on. The data it collects is consumed inside the cluster, for example by kubectl top, the HPA and the scheduler.
3.1 Deploy the metrics-server component. Load the downloaded images on master, pod1 and pod2; both the metrics-server image and the addon-resizer companion image are needed.
GitHub: https://github.com/kubernetes-sigs/metrics-server
Releases: https://github.com/kubernetes-sigs/metrics-server/releases
Manifest: https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
docker load -i metrics-server-amd64-0-3-6.tar.gz
docker load -i addon.tar.gz
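The metrics.yaml applied below is a locally prepared file that is not reproduced in this post. On kubeadm clusters the metrics-server container usually needs flags like the following so it can scrape kubelets that serve self-signed certificates; a sketch of the relevant Deployment fragment, assuming the stock v0.3.6 manifest rather than the author's exact file:

      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        args:
        - --kubelet-insecure-tls                         # do not verify kubelet serving certs
        - --kubelet-preferred-address-types=InternalIP   # reach kubelets by node IP rather than hostname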
3.2 Deploy metrics-server
# edit the apiserver manifest under /etc/kubernetes/manifests
Note: this is a Kubernetes 1.17 feature; on 1.16 it can be skipped, from 1.17 onwards it must be added. The flag enables API aggregation, which lets the Kubernetes API be extended without modifying the core code.
[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-aggregator-routing=true
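The flag is appended to the kube-apiserver container's command list. A sketch of the surrounding context (the neighbouring flags are kubeadm defaults and may differ slightly on your cluster):

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.10
    - --allow-privileged=true
    - --enable-aggregator-routing=true   # the added line
    # remaining kubeadm-generated flags stay unchanged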
# re-apply the apiserver manifest
[root@master ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-28k6k 1/1 Running 0 12h
calico-node-84rc2 1/1 Running 0 12h
calico-node-cn45z 1/1 Running 0 12h
calico-node-ng692 1/1 Running 0 12h
coredns-7f89b7bc75-ff9l4 1/1 Running 0 12h
coredns-7f89b7bc75-pg6wj 1/1 Running 0 12h
etcd-master 1/1 Running 0 12h
kube-apiserver 0/1 CrashLoopBackOff 2 51s
kube-apiserver-master 1/1 Running 0 2m45s
kube-controller-manager-master 1/1 Running 1 12h
kube-proxy-drsd8 1/1 Running 0 12h
kube-proxy-fcflq 1/1 Running 0 12h
kube-proxy-mftgk 1/1 Running 0 12h
kube-scheduler-master 1/1 Running 1 12h
# the kube-apiserver pod (no hostname suffix) does not serve traffic; it was only created by applying the yaml. The working instance is kube-apiserver-master, the static pod named after the host.
# delete the pod stuck in CrashLoopBackOff
[root@master ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted
[root@master ~]# kubectl apply -f metrics.yaml
[root@master ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-6595f875d6-pwwml 2/2 Running 0 2m33s
3.3 Test the kubectl top command
[root@master ~]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
calico-kube-controllers-6949477b58-28k6k 1m 14Mi
calico-node-84rc2 37m 76Mi
calico-node-cn45z 63m 74Mi
calico-node-ng692 34m 77Mi
coredns-7f89b7bc75-ff9l4 4m 9Mi
coredns-7f89b7bc75-pg6wj 4m 12Mi
etcd-master 34m 69Mi
kube-apiserver-master 133m 341Mi
kube-controller-manager-master 35m 50Mi
kube-proxy-drsd8 1m 31Mi
kube-proxy-fcflq 1m 16Mi
kube-proxy-mftgk 1m 26Mi
kube-scheduler-master 5m 22Mi
metrics-server-6595f875d6-pwwml 3m 17Mi
[root@master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 355m 5% 1988Mi 54%
pod1 103m 5% 811Mi 47%
pod2 101m 5% 824Mi 47%
3.4 Make the scheduler and controller-manager ports listen on the physical host
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
# since 1.19, ports 10252 and 10251 are bound to 127.0.0.1 by default; Prometheus cannot scrape them there, so bind them to the host address instead
# Fix:
[root@master ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
1. Change the --bind-address IP from 127.0.0.1 to 192.168.1.10
2. Change host under httpGet from 127.0.0.1 to 192.168.1.10 (two occurrences)
3. Remove --port=0
Make the same changes to the controller-manager manifest:
[root@master ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
1. Change the --bind-address IP from 127.0.0.1 to 192.168.1.10
2. Change host under httpGet from 127.0.0.1 to 192.168.1.10 (two occurrences)
3. Remove --port=0
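The same three edits can also be scripted instead of made by hand in vim. A sketch; keep the backups outside /etc/kubernetes/manifests (anything left in that directory is treated as a static pod) and substitute your own master IP:

cd /etc/kubernetes/manifests
for f in kube-scheduler.yaml kube-controller-manager.yaml; do
  cp "$f" /root/"$f".bak                                                   # backup outside the manifests dir
  sed -i 's/--bind-address=127.0.0.1/--bind-address=192.168.1.10/' "$f"
  sed -i 's/host: 127.0.0.1/host: 192.168.1.10/' "$f"                      # both httpGet probes
  sed -i '/--port=0/d' "$f"
done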
After the changes, restart the kubelet on every node in the cluster:
systemctl restart kubelet
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@master ~]# ss -antulp | grep :10251
tcp LISTEN 0 128 [::]:10251 [::]:* users:(("kube-scheduler",pid=42570,fd=7))
[root@master ~]# ss -antulp | grep :10252
tcp LISTEN 0 128 [::]:10252 [::]:* users:(("kube-controller",pid=45258,fd=7))
Both ports are now being listened on.