Preface

It has been a while since my last blog post. This article is mainly a consolidation of material from around the web: I took the installation steps and the code, ran everything on a real machine, and verified that it works. The scripts and code in this article mostly come from the internet, with my own adjustments and configuration on top; the references are listed at the end of the article for your convenience.

The process is simple: just read along and follow the steps, next, next, next. This article does not explain how the scripts and code work internally; a few caveats are called out explicitly, and some scripts carry comments you can consult. Maybe a video walkthrough will follow some day.

Core Steps

Since this is a follow-along guide, this section is mostly the sequence of operations. The code and scripts the steps refer to can be found further down, e.g. Appendix Code 1, Appendix Code 2, and so on; the scripts are simply too long to inline into the steps.

1. Configure the node01 master node (2 files, 1 result)

Copy the k8s script (Appendix Code 1: kubernetes_node01.sh) and the flannel network manifest (Appendix Code 2: kube-flannel.yml) into the /root directory;

Then make the script executable: chmod +x kubernetes_node01.sh

Finally, run the script: ./kubernetes_node01.sh

P.S.:

1. In the .sh script you need to configure the node addresses; use the internal network IPs.

2. Make sure all nodes can ping each other;

3. The script may need some interaction along the way, e.g. typing y to confirm prompts.

Finally, you will find a key.txt file in the current directory containing the installation output and credentials; see Appendix Code 3: key.txt for the result of my installation, which includes the join statement for attaching nodes to the master.
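
If you only want the join command itself, a small sketch like this should pull it out (assuming key.txt sits in /root, where the script writes it):

  # print the two-line "kubeadm join ..." command from the init output
  grep -A 1 'kubeadm join' /root/key.txt
  # or regenerate a fresh one at any time on the master
  kubeadm token create --print-join-command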

View all the nodes:

  [root@node01 ~]# kubectl get nodes
  NAME     STATUS   ROLES    AGE   VERSION
  node01   Ready    master   26h   v1.18.0

And all the pods:

  [root@node01 ~]# kubectl get pods -A
  NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
  kube-system   coredns-7ff77c879f-6m6fl         1/1     Running   0          25m
  kube-system   coredns-7ff77c879f-dkd56         1/1     Running   0          25m
  kube-system   etcd-node01                      1/1     Running   0          26m
  kube-system   kube-apiserver-node01            1/1     Running   0          26m
  kube-system   kube-controller-manager-node01   1/1     Running   0          26m
  kube-system   kube-flannel-ds-amd64-sdv2h      1/1     Running   0          25m
  kube-system   kube-proxy-vgf4r                 1/1     Running   0          25m
  kube-system   kube-scheduler-node01            1/1     Running   0          26m

If everything is running and READY, the installation succeeded.
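
A couple of optional quick health checks at this point (kubectl get cs is deprecated but still works on v1.18):

  kubectl cluster-info
  kubectl get cs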

2. Configure the dashboard (2 files)

With kubectl, kubeadm and kubelet installed above, you can connect with a client; let me recommend a k8s client here: Lens, it is excellent.

If you do not use a client, you will want to install the dashboard instead.

1. Copy two files to the Linux root directory: Appendix Code 4: recommended.yaml (installs the dashboard) and Appendix Code 5: dashboard-svc-account.yaml (configures the admin account)

2. Run the command:

  sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml

3. Start the dashboard service:

  kubectl apply -f recommended.yaml
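
To confirm the sed edit above took effect, a quick check of the Service (the TYPE should be NodePort and PORT(S) should show 443:30001/TCP):

  kubectl -n kubernetes-dashboard get svc kubernetes-dashboard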

4. Apply the account configuration:

  kubectl apply -f dashboard-svc-account.yaml

Once both succeed, a token string is generated; it is the credential for logging into the web UI. If you did not copy it, or lost it, no problem; you can look it up with this command:

  kubectl describe secrets -n kube-system `kubectl get secret -n kube-system | grep admin | awk '{print $1}'` | grep '^token'|awk '{print $2}'

The token looks something like this:

  eyJhbGciOiJSUzI1NiIsImtpZCI6Ikl5SE00cXFZR1V2cWstQURVcGlUOGk4cTBYekZMV0VmNDEwRy14UTd1d2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY3JnejYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWliwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjYwMGQ0ZjctM2ZhOS00ODIwLWFmMmUtZTJlZDMxYWMyYWFhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.BBtdG-S2kHEwRbWIAf6DiUgC3ILUOStPATyWfvxcQs5VJBtLRyMGqQ-AfkUoVLuhZdUv-CGoEJ1OYA00M6MwoehDdkhLFbXF7Xx1IPyhFTHxZ_oXHBPyjEREkTEerarZnvgt0ufU4g_Eqn91jdHet73itz-0abgmLMPkRl5YYjlh36Ivwq9IjKgujLwTNisUFckLuHOscHtQIrjIvAZlWTRh_awMsDHvemAKG_YIjMbyQnXi6VfN3rTW869DA0XAGOF2t7cWBtMmHvmLxVYqpOauUzwXXeYbO9eP0_d9JtVwKv6R0Q7sexRFZ-iTdZBOJDujFI3UT2jsqgVdbagA
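
With the token in hand, the dashboard should be reachable in a browser at https://<node-ip>:30001 (accept the self-signed certificate and paste the token). A quick reachability check from the shell, with my IP swapped for your own:

  # -k skips certificate verification, since the dashboard uses a self-signed cert
  curl -k https://172.17.10.4:30001/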

Let's double-check here:

View all the pods:

  [root@node01 ~]# kubectl get pods -A
  NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
  kube-system            coredns-7ff77c879f-6m6fl                     1/1     Running   0          25m
  kube-system            coredns-7ff77c879f-dkd56                     1/1     Running   0          25m
  kube-system            etcd-node01                                  1/1     Running   0          26m
  kube-system            kube-apiserver-node01                        1/1     Running   0          26m
  kube-system            kube-controller-manager-node01               1/1     Running   0          26m
  kube-system            kube-flannel-ds-amd64-sdv2h                  1/1     Running   0          25m
  kube-system            kube-proxy-vgf4r                             1/1     Running   0          25m
  kube-system            kube-scheduler-node01                        1/1     Running   0          26m
  kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-ldswx   1/1     Running   0          12m
  kubernetes-dashboard   kubernetes-dashboard-577bd97bc-szvwt         1/1     Running   0          12m

There are two new pods under the kubernetes-dashboard namespace.

3. Configure the node02 worker node (1 file)

If you have no spare server, you can also run your own pods on the master node; you just need to enable that by removing the taint that marks the master as unschedulable:

  sudo kubectl taint nodes --all node-role.kubernetes.io/master-
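
To verify the taint is gone (a quick check; a Taints field of <none> means ordinary pods can now be scheduled on the master):

  kubectl describe node node01 | grep Taints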

If you want to configure additional worker nodes, just model the .sh script on the master one (Appendix Code 6: kubernetes_node02.sh); the steps match the master's:

1. Copy it to the worker server;

2. Make it executable and run it: ./kubernetes_node02.sh

3. The flannel configuration is not needed here;

4. After the installation finishes, join the node to the master; the join command is in the master's key.txt file, assuming your installation succeeded:

  kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
      --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9

If everything is fine, go to the master and view all the nodes:

  NAME     STATUS   ROLES    AGE   VERSION
  node01   Ready    master   26h   v1.18.0
  node02   Ready    <none>   25h   v1.18.0

This means our worker node is fully configured.
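
The <none> under ROLES is normal; the column only reflects a node-role label. If it bothers you, a purely cosmetic sketch to label the worker:

  kubectl label node node02 node-role.kubernetes.io/worker=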

4. Configure the ASP.NET Core service

The Deployment + Service used here is straightforward, so I will just paste it without much explanation (a quick apply-and-test sketch follows the manifest).

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: laozhang-op2
    name: laozhang-op2
  spec:
    selector:
      matchLabels:
        app: laozhang-op2
    replicas: 2
    template:
      metadata:
        labels:
          app: laozhang-op2
      spec:
        containers:
        - name: laozhang-op2
          image: laozhangisphi/apkimg315
          imagePullPolicy: IfNotPresent # when to pull the image

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: laozhang-op2
  spec:
    type: NodePort
    ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30099
    selector:
      app: laozhang-op2
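
To try it out, a small sketch (assuming you saved the two documents above as laozhang-op2.yaml; the IP is my node's, swap in your own):

  kubectl apply -f laozhang-op2.yaml
  # two replicas should come up
  kubectl get pods -l app=laozhang-op2
  # the Service answers on every node at the NodePort
  curl http://172.17.10.4:30099/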

But this is the nodePort approach; in everyday use the Ingress approach is far more common, and to use an Ingress you first need to set up the ingress service.

5. Configure Ingress-nginx (1 file)

Copy the file from Appendix Code 7: mandatory.yaml into the root directory to configure the Ingress-Nginx service.

One caveat: if nginx is already installed on the server, you need to change the http-port (and https-port) in mandatory.yaml; see the commented lines in the code below.

Apply the yaml directly:

  kubectl apply -f mandatory.yaml
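
Note that the controller Deployment in mandatory.yaml carries a nodeSelector of Ingress: nginx, so the pod will stay Pending until some node has that label. A one-line sketch to label the master accordingly:

  kubectl label node node01 Ingress=nginx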

If there are no errors, view all the pods:

  [root@node01 ~]# kubectl get pods -A
  NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
  default                laozhang-op2-5cf487b57f-pdvfg                1/1     Running   0          4h29m
  default                laozhang-op2-5cf487b57f-vtgwc                1/1     Running   0          4h29m
  ingress-nginx          nginx-ingress-controller-557475687f-rfl98    1/1     Running   0          122m
  kube-system            coredns-7ff77c879f-gj4sl                     1/1     Running   0          26h
  kube-system            coredns-7ff77c879f-mqp2q                     1/1     Running   0          26h
  kube-system            etcd-node01                                  1/1     Running   0          26h
  kube-system            kube-apiserver-node01                        1/1     Running   0          26h
  kube-system            kube-controller-manager-node01               1/1     Running   0          26h
  kube-system            kube-flannel-ds-amd64-nmnj2                  1/1     Running   0          26h
  kube-system            kube-proxy-wcjb8                             1/1     Running   0          26h
  kube-system            kube-scheduler-node01                        1/1     Running   2          26h
  kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-qp2fw   1/1     Running   0          26h
  kubernetes-dashboard   kubernetes-dashboard-577bd97bc-2tsj7         1/1     Running   0          26h

If your output matches the above, congratulations, the whole setup is complete.
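
As a follow-up to the Ingress remark above, here is a minimal sketch of an Ingress for the laozhang-op2 service. This is my own illustration, not part of the original setup: the extensions/v1beta1 API matches this v1.18 cluster, and k8s.example.com is a placeholder hostname you would point at the node running the controller:

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: laozhang-op2
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: k8s.example.com    # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: laozhang-op2
          servicePort: 80
EOF

# test through the controller (http-port was changed to 8080 in mandatory.yaml):
curl -H 'Host: k8s.example.com' http://172.17.10.4:8080/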

Appendix Code 1: kubernetes_node01.sh

#!/bin/bash
##############
## Master node ##
##############

#### Part 1: environment initialization ####
# k8s version
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
# file that captures the cluster join command
key=/root/key.txt
# flannel network manifest
flannel=/root/kube-flannel.yml
# install required dependencies
yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz

#### Part 2: node configuration ####
# host resolution, passwordless SSH login
# internal IPs; configure extra nodes here, or skip them and join later
node01=172.21.10.4
#node02=192.168.10.7
#node03=192.168.1.30
hostnamectl set-hostname node01
echo '172.21.10.4 node01
#192.168.10.7 node02
#192.168.1.30 node03' >> /etc/hosts
ssh-keygen
ssh-copy-id -i $node01
#ssh-copy-id -i $node02
#ssh-copy-id -i $node03
#scp /etc/hosts node02:/etc/hosts
#scp /etc/hosts node03:/etc/hosts
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable the swap partition
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
# load the bridge netfilter and IPVS modules
modprobe br_netfilter
modprobe ip_vs_rr
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

#### Part 3: parameters and repos ####
# install the EPEL repo
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
# time synchronization
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
# raise resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
# add the Kubernetes yum repo
echo '[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
# add the Docker CE repo
sudo yum-config-manager \
  --add-repo \
  https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

#### Part 4: installation ####
yum -y install docker-ce
yum install --enablerepo="kubernetes" $kubelet $kubeadm $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
systemctl start docker.service && systemctl enable docker.service
# install bash completion for kubectl
yum -y install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
# initialize the cluster
kubeadm init --apiserver-advertise-address $node01 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version $version --pod-network-cidr=10.244.0.0/16 >> $key 2>&1
export KUBECONFIG=/etc/kubernetes/admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
docker pull quay.io/coreos/flannel:v0.12.0-amd64
kubectl apply -f $flannel
echo "Please check the $key file for the join command and use it to add the other nodes to the cluster"

Appendix Code 2: kube-flannel.yml

##############
## flannel network ##
##############
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Appendix Code 3: key.txt

W0526 16:17:20.680490 13760 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.10.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0526 16:18:02.560249 13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0526 16:18:02.561130 13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.504466 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q3uu1o.4rdfkcyzxjhawvk1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
    --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9


Appendix Code 4: recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##############
## install the dashboard ##
##############

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Appendix Code 5: dashboard-svc-account.yaml

##############
## dashboard admin account ##
##############

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system

Appendix Code 6: kubernetes_node02.sh

#!/bin/bash
##############
## Worker node ##
##############

#### Part 1: environment initialization ####
# k8s version
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
# file that holds the cluster join command (written on the master)
key=/root/key.txt
# flannel network manifest
flannel=/root/kube-flannel.yml
# install required dependencies
yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz

#### Part 2: node configuration ####
# configure nodes, host resolution, passwordless SSH login
node01=172.17.10.4
node02=172.17.10.7
# node03=192.168.1.30
hostnamectl set-hostname node02
echo '172.17.10.4 node01
172.17.10.7 node02' >> /etc/hosts
ssh-keygen
ssh-copy-id -i $node01
ssh-copy-id -i $node02
# ssh-copy-id -i $node03
scp /etc/hosts node02:/etc/hosts
# scp /etc/hosts node03:/etc/hosts
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable the swap partition
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
# load the bridge netfilter and IPVS modules
modprobe br_netfilter
modprobe ip_vs_rr
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

#### Part 3: parameters and repos ####
# install the EPEL repo
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
# time synchronization
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
# raise resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
# add the Kubernetes yum repo
echo '[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
# add the Docker CE repo
sudo yum-config-manager \
  --add-repo \
  https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

#### Part 4: installation ####
yum -y install docker-ce
yum install --enablerepo="kubernetes" $kubelet $kubeadm $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
systemctl start docker.service && systemctl enable docker.service
# install bash completion for kubectl
yum -y install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
# prepare to join the cluster
docker pull quay.io/coreos/flannel:v0.12.0-amd64
echo "Please check the master's $key file for the join command to add this node to the cluster"

Appendix Code 7: mandatory.yaml

##############
## ingress-nginx service ##
##############
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        Ingress: nginx
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8080   # if nginx is already installed on your master server, change this port, otherwise the ingress-nginx service will not start
            - --https-port=8443  # same as above
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container

References:

https://blog.csdn.net/qq_37746855/article/details/116173976

https://blog.csdn.net/weixin_46152207/article/details/111355788

https://blog.csdn.net/catcher92/article/details/116207040

https://blog.51cto.com/u_14306186/2523096



 
