The docker proxy used here is no longer available, so please configure a working proxy of your own when deploying and do not use the docker proxy given in this document. The overall deployment steps do not need to change. Thanks for your support.

1、Deployment background

  1. Operating system: CentOS Linux release 7.5 (Core)
  2. docker-ce version: 18.06.1-ce
  3. kubernetes version: 1.11.3
  4. kubeadm version: v1.11.3

2、Node layout

  1. master node:
     hostname: k8s-master-52
     IP address: 192.168.40.52
  2. node1:
     hostname: k8s-node-53
     IP address: 192.168.40.53
  3. node2:
     hostname: k8s-node-54
     IP address: 192.168.40.54

3、Deployment prerequisites

  1、Disable SELinux and firewalld.
  2、Enable kernel IP forwarding.
  3、Disable the swap partition.
  4、Set up passwordless SSH login from the master to all node nodes.
  5、Configure NTP time synchronization on all nodes so their clocks stay consistent.
  6、Load the IPVS-related kernel modules.

4、Initialize all cluster nodes

  1、Load the IPVS-related kernel modules and install the required dependencies.

    # Install the dependencies.
    yum install ipset ipvsadm conntrack-tools.x86_64 -y

    # Load the modules.
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    modprobe ip_vs

    # Check that the modules are loaded.
    lsmod | grep ip_vs
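The modprobe commands above do not survive a reboot, and the guide recommends rebooting after initialization. A minimal sketch for making the modules load at boot on CentOS 7; the file /etc/sysconfig/modules/ipvs.modules and the extra nf_conntrack_ipv4 module are assumptions taken from common kube-proxy IPVS setups, not part of the original steps:

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
# Re-load the IPVS modules (plus conntrack) at boot.
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4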

    

  2、Enable kernel forwarding and make it take effect.

    cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

    sysctl -p /etc/sysctl.d/k8s.conf
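On some CentOS 7 systems the two net.bridge.* keys only appear after the br_netfilter module is loaded, so sysctl -p can fail with "No such file or directory". A hedged sketch for that case (module name br_netfilter; persisting it via /etc/modules-load.d is an assumption, not an original step):

# Load br_netfilter first if the bridge sysctls are missing, and keep it loaded after reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf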
  3、Disable SELinux, disable the swap partition, and disable firewalld.

    # Stop the firewall and prevent it from starting at boot.
    systemctl stop firewalld
    systemctl disable firewalld

    # Disable SELinux.
    sed -i 's#enforcing#disabled#ig' /etc/sysconfig/selinux

    # Turn off swap.
    swapoff -a && sysctl -w vm.swappiness=0

    # Raise the process and open-file limits.
    echo -e '*\tsoft\tnproc\t4096\nroot\tsoft\tnproc\tunlimited' > /etc/security/limits.d/20-nproc.conf
    echo -e '* soft nofile 65536\n* hard nofile 65536' > /etc/security/limits.conf
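swapoff -a only disables swap until the next boot, and a reboot is recommended at the end of this section. A hedged sketch for keeping swap off permanently (assumption: the swap line in /etc/fstab contains the word swap):

# Comment out the swap entry so swap is not re-enabled after a reboot.
sed -ri 's/^[^#].*\sswap\s.*$/#&/' /etc/fstab
grep swap /etc/fstab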
  4、Configure time synchronization and hosts resolution, and enable passwordless SSH login from the master to the node nodes.

    # Install the ntp tools and add a cron job for time synchronization.
    yum install ntp -y
    The cron entry is:
    */5 * * * *  /usr/sbin/ntpdate  0.centos.pool.ntp.org > /dev/null 2> /dev/null

    # Make every server resolvable by hostname; keep /etc/hosts identical on the master and the node nodes:
    192.168.40.52 k8s-master-52 master
    192.168.40.53 k8s-node-53
    192.168.40.54 k8s-node-54

    # Configure key-based SSH login from the master to the node nodes.
    ssh-keygen -t rsa
    Press Enter at every prompt to generate the public/private key pair.

    ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-node-53
    ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-node-54
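A quick, hedged check that the key-based login works (hostnames as defined in /etc/hosts above):

# Each command should print the remote hostname without asking for a password.
for node in k8s-node-53 k8s-node-54; do
    ssh "$node" hostname
done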

Once initialization is finished, it is best to reboot the servers.

5、Operations on the master node

  1、Configure the kubernetes yum repository.
    vim /etc/yum.repos.d/kubernetes.repo, with the following content:

    [kubernetes]
    name=kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
  2、Configure the docker-ce yum repository.
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

  3、Install docker-ce and the kubernetes packages.
    yum install docker-ce kubelet kubeadm kubectl
    (The installed packages and their dependency versions were shown in a screenshot here.)
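The yum install line above pulls whatever versions the mirror currently serves, which may no longer match this guide. A hedged sketch for pinning the versions the guide was written against (the exact package version strings are an assumption about the Aliyun mirror's package naming):

# Pin docker and the kubernetes components to the versions used in this deployment.
yum install -y docker-ce-18.06.1.ce kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3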
  4、Configure the docker image-pull proxy, start docker-ce, and enable docker and kubelet at boot.
    Configure the proxy as follows:
      Edit /usr/lib/systemd/system/docker.service and add:
      Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
      Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"

    Reload the systemd unit configuration.
    systemctl daemon-reload

    # Start docker.
    systemctl start docker

    # Enable docker and kubelet at boot.
    systemctl enable docker
    systemctl enable kubelet

    Note: kubelet does not need to be started manually here; kubeadm starts the kubelet service itself once initialization completes.
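As stated at the top of this document, the proxy above is no longer usable, so substitute one of your own. Also, edits made directly to /usr/lib/systemd/system/docker.service are overwritten when the docker-ce package is upgraded; a hedged alternative sketch using a systemd drop-in (the file name http-proxy.conf and the proxy address are placeholders):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
# Replace with a proxy that works in your environment.
Environment="HTTPS_PROXY=http://your-proxy:port"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"
EOF
systemctl daemon-reload && systemctl restart docker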
  5、Initialize the master node.
    [root@k8s-master-52 ]# kubeadm init --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
    Flag explanation:
      --kubernetes-version=v1.11.3: the kubernetes version to deploy
      --pod-network-cidr=10.244.0.0/16: the pod network address pool
      --service-cidr=10.96.0.0/12: the service network address pool

    The command output is as follows:
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0913 20:48:31.926894 2304 kernel_validator.go:81] Validating kernel version
I0913 20:48:31.926940 2304 kernel_validator.go:96] Validating kernel config
  [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
....
(Intermediate output omitted. The images are pulled over the network, so how long initialization takes depends on your connection.)
[addons] Applied essential addon: CoreDNS    # CoreDNS is used from 1.11 onwards; 1.10.x used kube-dns.
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.40.52:6443 --token k5mudw.bri3lujvlsxffbqo --discovery-token-ca-cert-hash sha256:f6cf089d5aff3230996f75ca71e74273095c901c1aa45f1325ade0359aeb336e
 
Note: be sure to copy and save the last line, the kubeadm join command; it is the command every node must run to join the cluster.
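If that line is lost, or the bootstrap token (valid for 24 hours by default) expires before the nodes join, a hedged way to regenerate it on the master:

# Prints a fresh kubeadm join command with a new token and the CA certificate hash.
kubeadm token create --print-join-command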
 
While it is being initialized, the master node pulls the required container images (the list of pulled images was shown in a screenshot here).

Check which ports are now listening (screenshot of the output omitted).

Port 6443 is the HTTPS port of the apiserver.
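A hedged way to repeat that check from the command line (assuming ss from iproute2 is installed; the ports listed are the standard control-plane defaults):

# apiserver 6443, etcd 2379/2380, kubelet 10250, kube-scheduler 10251, kube-controller-manager 10252
ss -tlnp | grep -E ':(6443|2379|2380|10250|10251|10252) '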

Starting with 1.11, kube-proxy's ipvs mode is GA (1.10.x and earlier relied on iptables).
Starting with 1.11, CoreDNS is the cluster DNS add-on; 1.10.x used kube-dns.

Create the kubectl configuration file so that the kubectl client can run commands against the kubernetes cluster.

[root@k8s-master-52 ]# mkdir -p $HOME/.kube
[root@k8s-master-52 ]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 
Check the cluster component status.
[root@k8s-master-52 manifests]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Check the cluster nodes.
[root@k8s-master-52 manifests]# kubectl get nodes
NAME            STATUS     ROLES     AGE       VERSION
k8s-master-52   NotReady   master    12m       v1.11.3
The node status is NotReady because no network plugin has been installed yet.
 
6、Install the flannel network plugin.
# Install directly from the official deployment manifest.
[root@k8s-master-52 ]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
 
[root@k8s-master-52 ]# kubectl get pods -n kube-system
Only after the flannel image has been pulled successfully will the pods reach the Running state.
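A hedged way to watch for that and to confirm the master switches to Ready (standard kubectl flags):

# Watch the kube-system pods until kube-flannel and coredns are Running, then check the node status.
kubectl get pods -n kube-system -w
kubectl get nodes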

6、Operations on the node nodes

  1、Configure the kubernetes yum repository.
    vim /etc/yum.repos.d/kubernetes.repo, with the following content:

    [kubernetes]
    name=kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0

  2、Configure the docker-ce yum repository.
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

  3、Install docker-ce and the kubernetes packages.
    yum install docker-ce kubelet kubeadm kubectl
    (The installed packages and their dependency versions were shown in a screenshot here.)

  4、Configure the docker image-pull proxy, start docker-ce, and enable docker and kubelet at boot.
    Configure the proxy as follows:
      Edit /usr/lib/systemd/system/docker.service and add:
      Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
      Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"

    Reload the systemd unit configuration.
    systemctl daemon-reload

    # Start docker.
    systemctl start docker

    # Enable docker and kubelet at boot.
    systemctl enable docker
    systemctl enable kubelet

    As on the master, kubelet does not need to be started manually; it is started automatically once kubeadm join completes.

  5、Install the flannel network plugin.
# Install directly from the official deployment manifest (see the kubeconfig note below).
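One caveat the original text does not mention: kubectl on a freshly installed node has no kubeconfig, so the apply commands below can only reach the apiserver if credentials are copied over first. A hedged sketch, run on the master, using the passwordless SSH configured earlier (copying admin.conf to the nodes is an assumption, not an original step):

# Push the admin kubeconfig to each node so kubectl there can talk to the apiserver.
for node in k8s-node-53 k8s-node-54; do
    ssh "$node" mkdir -p /root/.kube
    scp /etc/kubernetes/admin.conf "$node:/root/.kube/config"
done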
 
Install on the k8s-node-53 node.
[root@k8s-node-53 ]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
   

Install on the k8s-node-54 node.
[root@k8s-node-54 ]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
 
Join the k8s-node-53 node to the k8s cluster.

[root@k8s-node-53 ~]# kubeadm join 192.168.40.52:6443 --token k5mudw.bri3lujvlsxffbqo --discovery-token-ca-cert-hash sha256:f6cf089d5aff3230996f75ca71e74273095c901c1aa45f1325ade0359aeb336e
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0913 21:13:20.983878 1794 kernel_validator.go:81] Validating kernel version
I0913 21:13:20.983943 1794 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.40.52:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.40.52:6443"
[discovery] Requesting info from "https://192.168.40.52:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.40.52:6443"
[discovery] Successfully established connection with API Server "192.168.40.52:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-53" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Join the k8s-node-54 node to the k8s cluster.

[root@k8s-node-54 ~]# kubeadm join 192.168.40.52:6443 --token k5mudw.bri3lujvlsxffbqo --discovery-token-ca-cert-hash sha256:f6cf089d5aff3230996f75ca71e74273095c901c1aa45f1325ade0359aeb336e
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0913 21:21:03.915755 11043 kernel_validator.go:81] Validating kernel version
I0913 21:21:03.915806 11043 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.40.52:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.40.52:6443"
[discovery] Requesting info from "https://192.168.40.52:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.40.52:6443"
[discovery] Successfully established connection with API Server "192.168.40.52:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-54" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
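Both join outputs show the RequiredIPVSKernelModulesAvailable warning, which means the IPVS modules from section 4 were not actually loaded on these nodes, so kube-proxy falls back to iptables. A hedged fix is simply to repeat the module loading on each node:

# Load the IPVS modules (and conntrack) that the preflight check reported as missing.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe "$m"; done
lsmod | grep -e ip_vs -e nf_conntrack_ipv4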

7、Create the role binding used by the k8s dashboard to view cluster status.

  vim dashboard-admin.yaml
  Content:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Create it with the following command:

  kubectl create -f dashboard-admin.yaml

8、Install the k8s dashboard

  vim kubernetes-dashboard.yaml
  Content:
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Install the dashboard with the following command:

  kubectl apply -f kubernetes-dashboard.yaml
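A hedged check that the dashboard Deployment and its NodePort Service came up before opening the browser:

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
kubectl get svc -n kube-system kubernetes-dashboard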

The dashboard is then reachable at:

  https://192.168.40.54:30001
  The IP address of any cluster node works here, because the Service is exposed as a NodePort on every node.

9、Generate the token used for authentication

  Run the following on the master node.

[root@k8s-master-52 opt]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-hddfq
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=2d23955c-b75d-11e8-a770-5254007ec152

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWhkZGZxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyZDIzOTU1Yy1iNzVkLTExZTgtYTc3MC01MjU0MDA3ZWMxNTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.5GakSIdKw7H62P5Bk3c8879Jc68cAN9gcQRMYvaWLo-Cq6cwnpOoz6fwYm1AoFRfJ_ddMoctqB_rp72j_AqSO0ihp3_H_1dX31bo_ddp1xtj5Yg3IswhcxU2RCBmoIn0JmgCeWxoIt_KAYpNJBJqJKR5oIS2hr_Xfew5GNXRC6_OE9fm7ljRy4XqkBTaj6_1K0wUrmoC4WFHQGZzTUq6mmVsJlD_o3J35sMzi993WtP0APeBc6v66RokHW5EAECN9__ipA9cQlqmtLkgFydORMvUmd4bOWNFoNticx_M6poDlzTLRqmKY5I3mxJmhCCHr2gp7X0auo1enLW765t-7g

Log in to the dashboard with the token value shown at the end of the output.
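The describe command above assumes a ServiceAccount named admin-user already exists in kube-system; this guide does not show its creation (section 7 only binds the kubernetes-dashboard ServiceAccount). A minimal hedged sketch for creating it and granting it cluster-admin, in case it is missing:

# Create the admin-user ServiceAccount whose token is used for the dashboard login.
kubectl create serviceaccount admin-user -n kube-system
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user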
