在Linux服务器,搭建K8s服务【脚本篇】
前言
好久没有写博客了。本文是对网上相关文章的总结,我把安装和运行的脚本在真机上完整跑了一遍,亲测可用。文章内包含的脚本和代码多来自网络,也有我自己的调整和配置,文章末尾列出了参考文献,方便大家查阅。
过程很简单,一路next往下看、照着操作即可。文章不对脚本和代码做原理解释,某些注意点加了红色标注,部分脚本带有注释,可以自行参考,以后有机会再做视频讲解。
核心步骤
因为是next的方式,所以本章节主要是操作步骤。步骤中涉及的代码或脚本都可以在下文的附录中找到,比如:附录代码一、附录代码二等等,因为脚本实在太长,不方便直接放到步骤里。
1、配置 node01 主节点(2个文件,1个结果);
在root目录下拷贝k8s安装脚本(附录代码一:kubernetes_node01.sh)和flannel网络配置(附录代码二:kube-flannel.yml)两个文件;
然后给脚本文件赋权限:chmod +x kubernetes_node01.sh
最后执行脚本:./kubernetes_node01.sh
ps:1、sh脚本中需要配置节点IP,用的是内网IP;
2、多个节点之间要保证能ping通;
3、中间可能需要自己来配合做些操作,比如输入:y,来做确认等等。
最后,可以在当前文件夹下看到一个key.txt文件,里边是安装的输出结果和密钥等信息,可参考附录代码三:key.txt(这是我这次安装的结果),里边有join主节点的命令。
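join命令也可以直接在主节点上从key.txt里过滤出来(示意,文件路径以脚本里的$key为准):
- grep -A 2 "kubeadm join" /root/key.txt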
查看所有的nodes和pods:
- [root@node01 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- node01 Ready master 26h v1.18.0
所有的pods:
- [root@node01 ~]# kubectl get pods -A
- NAMESPACE NAME READY STATUS RESTARTS AGE
- kube-system coredns-7ff77c879f-6m6fl 1/1 Running 0 25m
- kube-system coredns-7ff77c879f-dkd56 1/1 Running 0 25m
- kube-system etcd-node01 1/1 Running 0 26m
- kube-system kube-apiserver-node01 1/1 Running 0 26m
- kube-system kube-controller-manager-node01 1/1 Running 0 26m
- kube-system kube-flannel-ds-amd64-sdv2h 1/1 Running 0 25m
- kube-system kube-proxy-vgf4r 1/1 Running 0 25m
- kube-system kube-scheduler-node01 1/1 Running 0 26m
如果所有pod都是Running状态、都READY了,表示安装成功。
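如果有pod长时间没有READY(比如一直Pending或者CrashLoopBackOff),可以用下面几条命令排查,这是我自己常用的排查思路,pod名以你实际环境里的为准:
- kubectl describe pod <pod名> -n kube-system
- kubectl logs <pod名> -n kube-system
- journalctl -u kubelet -f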
2、配置dashboard仪表盘(2个文件)
上面安装好了kubectl、kubeadm、kubelet后,我们可以通过客户端来连接,这里安利下k8s的客户端:Lens,很香。
如果不用客户端,那就需要安装仪表盘了。
1、在Linux根目录拷贝两个文件:附录代码四:recommended.yaml(安装看板)、附录代码五:dashboard-svc-account.yaml(配置管理员账户);
2、执行命令(这条sed的效果,见本节步骤后面的示意):
- sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
3、启动仪表盘服务:
- kubectl apply -f recommended.yaml
4、启动配置账户:
- kubectl apply -f dashboard-svc-account.yaml
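其中第2步的sed,是把dashboard的Service改成NodePort类型,并固定暴露30001端口。执行之后,recommended.yaml里的Service部分大致会变成下面这样(示意,以你实际执行后的文件为准):
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard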
都成功后,会生成一个token字符串,作为登录web端的令牌,如果没来得及拷贝或者丢失了也不怕,可以使用命令查看:
- kubectl describe secrets -n kube-system `kubectl get secret -n kube-system | grep admin | awk '{print $1}'` | grep '^token'|awk '{print $2}'
token就是类似这种:
- eyJhbGciOiJSUzI1NiIsImtpZCI6Ikl5SE00cXFZR1V2cWstQURVcGlUOGk4cTBYekZMV0VmNDEwRy14UTd1d2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY3JnejYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWliwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjYwMGQ0ZjctM2ZhOS00ODIwLWFmMmUtZTJlZDMxYWMyYWFhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.BBtdG-S2kHEwRbWIAf6DiUgC3ILUOStPATyWfvxcQs5VJBtLRyMGqQ-AfkUoVLuhZdUv-CGoEJ1OYA00M6MwoehDdkhLFbXF7Xx1IPyhFTHxZ_oXHBPyjEREkTEerarZnvgt0ufU4g_Eqn91jdHet73itz-0abgmLMPkRl5YYjlh36Ivwq9IjKgujLwTNisUFckLuHOscHtQIrjIvAZlWTRh_awMsDHvemAKG_YIjMbyQnXi6VfN3rTW869DA0XAGOF2t7cWBtMmHvmLxVYqpOauUzwXXeYbO9eP0_d9JtVwKv6R0Q7sexRFZ-iTdZBOJDujFI3UT2jsqgVdbagA
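拿到token后,就可以通过NodePort访问仪表盘了。可以先确认一下Service实际暴露的端口(30001是上面sed里写死的,如果你改过,以实际为准):
- kubectl get svc -n kubernetes-dashboard
然后浏览器访问 https://节点IP:30001 ,选择Token方式登录,粘贴上面的token即可。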
这里再检查下,查看所有的pods:
- [root@node01 ~]# kubectl get pods -A
- NAMESPACE NAME READY STATUS RESTARTS AGE
- kube-system coredns-7ff77c879f-6m6fl 1/1 Running 0 25m
- kube-system coredns-7ff77c879f-dkd56 1/1 Running 0 25m
- kube-system etcd-node01 1/1 Running 0 26m
- kube-system kube-apiserver-node01 1/1 Running 0 26m
- kube-system kube-controller-manager-node01 1/1 Running 0 26m
- kube-system kube-flannel-ds-amd64-sdv2h 1/1 Running 0 25m
- kube-system kube-proxy-vgf4r 1/1 Running 0 25m
- kube-system kube-scheduler-node01 1/1 Running 0 26m
- kubernetes-dashboard dashboard-metrics-scraper-78f5d9f487-ldswx 1/1 Running 0 12m
- kubernetes-dashboard kubernetes-dashboard-577bd97bc-szvwt 1/1 Running 0 12m
多了kubernetes-dashboard命名空间下的两个pod。
3、配置 node02 子节点(1个文件)
如果你没有多余的服务器,也可以让自己的pod跑在master节点上,需要先去掉master上的调度污点,把它标记为可调度:
- sudo kubectl taint nodes --all node-role.kubernetes.io/master-
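执行后可以确认一下master上的污点是否已经去掉(示意,node01以你的节点名为准):
- kubectl describe node node01 | grep Taints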
如果要配置多个子节点,那就仿照主节点来继续写sh脚本吧(附录代码六:kubernetes_node02.sh),步骤和主节点一致:
1、拷贝到子节点服务器;
2、赋权限,执行文件:./kubernetes_node02.sh
3、这里不用flannel配置;
4、安装完成后,执行join命令加入主节点,join命令在主节点的key.txt文件里(前提是主节点已经安装成功;如果token过期了,见下面的命令重新生成);
- kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
- --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9
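如果key.txt里的token已经过期(默认24小时有效),可以在主节点重新生成一条join命令,再拿到子节点执行:
- kubeadm token create --print-join-command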
如果一切正常,可以去主节点查看所有的nodes:
- NAME STATUS ROLES AGE VERSION
- node01 Ready master 26h v1.18.0
- node02 Ready <none> 25h v1.18.0
表示我们的子节点已经配置完成。
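子节点的ROLES显示<none>是正常的,如果想让它显示worker,可以在主节点上给它打个标签(可选,node02以你的节点名为准):
- kubectl label node node02 node-role.kubernetes.io/worker=worker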
4、配置ASP.Net Core服务
这里的Deployment+Service的写法比较简单,直接贴出来,就不做过多的解释了。
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: laozhang-op2
  name: laozhang-op2
spec:
  selector:
    matchLabels:
      app: laozhang-op2
  replicas: 2
  template:
    metadata:
      labels:
        app: laozhang-op2
    spec:
      containers:
        - name: laozhang-op2
          image: laozhangisphi/apkimg315
          imagePullPolicy: IfNotPresent # pull镜像的时机
---
apiVersion: v1
kind: Service
metadata:
  name: laozhang-op2
spec:
  type: NodePort
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30099
  selector:
    app: laozhang-op2
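把上面的内容保存成一个yaml文件(文件名随意,这里假设叫laozhang-op2.yaml),apply之后就可以通过NodePort直接访问了:
- kubectl apply -f laozhang-op2.yaml
- kubectl get pods,svc -o wide
- curl http://节点IP:30099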
但这种是NodePort的方式,平时使用更多的是Ingress的方式;要使用Ingress,就需要先配置Ingress的服务。
5、配置Ingress-nginx(1个文件)
在根目录拷贝文件,附录代码七:mandatory.yaml,用来配置Ingress-Nginx服务。
这里需要注意下,如果服务器之前已经配置过nginx,需要在mandatory.yaml文件中修改http-port、https-port端口,详细内容见下面附录代码里的注释。
直接执行yaml:
- kubectl apply -f mandatory.yaml
如果没有报错,可以查看所有的pods:
- [root@node01 ~]# kubectl get pods -A
- NAMESPACE NAME READY STATUS RESTARTS AGE
- default laozhang-op2-5cf487b57f-pdvfg 1/1 Running 0 4h29m
- default laozhang-op2-5cf487b57f-vtgwc 1/1 Running 0 4h29m
- ingress-nginx nginx-ingress-controller-557475687f-rfl98 1/1 Running 0 122m
- kube-system coredns-7ff77c879f-gj4sl 1/1 Running 0 26h
- kube-system coredns-7ff77c879f-mqp2q 1/1 Running 0 26h
- kube-system etcd-node01 1/1 Running 0 26h
- kube-system kube-apiserver-node01 1/1 Running 0 26h
- kube-system kube-controller-manager-node01 1/1 Running 0 26h
- kube-system kube-flannel-ds-amd64-nmnj2 1/1 Running 0 26h
- kube-system kube-proxy-wcjb8 1/1 Running 0 26h
- kube-system kube-scheduler-node01 1/1 Running 2 26h
- kubernetes-dashboard dashboard-metrics-scraper-78f5d9f487-qp2fw 1/1 Running 0 26h
- kubernetes-dashboard kubernetes-dashboard-577bd97bc-2tsj7 1/1 Running 0 26h
如果和上面一样,那恭喜,一切配置就完成了。
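Ingress-Nginx控制器跑起来之后,就可以给上面的laozhang-op2服务写一条Ingress规则了。下面是一个最简单的示意(域名laozhang.k8s.test是我随便举的例子,按你的实际域名改;apiVersion用的是v1.18可用的networking.k8s.io/v1beta1写法):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: laozhang-op2
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: laozhang.k8s.test
      http:
        paths:
          - path: /
            backend:
              serviceName: laozhang-op2
              servicePort: 80
kubectl apply之后,把域名解析(或者本机hosts)指向部署了ingress-nginx的节点IP,就可以通过域名访问了;注意如果按附录里的注释把http-port改成了8080,访问时要带上这个端口。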
附录代码一:kubernetes_node01.sh
- #!/bin/bash
- ##############
- ##主节点##
- ##############
- #### 第一部分,环境初始化 ####
- #k8s版本
- version=v1.18.0
- kubelet=kubelet-1.18.0-0.x86_64
- kubeadm=kubeadm-1.18.0-0.x86_64
- kubectl=kubectl-1.18.0-0.x86_64
- #集群加入方式
- key=/root/key.txt
- #部署flannel网络
- flannel=/root/kube-flannel.yml
- #安装必要依赖
- yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz
- #### 第二部分,节点配置 ####
- #主机解析,免密登录
- #内网ip,配置多节点,也可以不配置,后期通过join的方式
- node01=172.21.10.4
- #node02=192.168.10.7
- #node03=192.168.1.30
- hostnamectl set-hostname node01
- echo '172.21.10.4 node01
- #192.168.10.7 node02
- #192.168.1.30 node03' >> /etc/hosts
- ssh-keygen
- ssh-copy-id -i $node01
- #ssh-copy-id -i $node02
- #ssh-copy-id -i $node03
- #scp /etc/hosts node02:/etc/hosts
- #scp /etc/hosts node03:/etc/hosts
- #关闭防火墙
- systemctl stop firewalld
- systemctl disable firewalld
- #swap分区关闭
- swapoff -a
- sed -i 's/.*swap.*/#&/' /etc/fstab
- #关闭SELinux(沙盒)
- setenforce 0
- sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
- sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
- sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
- sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
- #加载内核模块,打开bridge相关内核参数
- modprobe br_netfilter
- modprobe ip_vs_rr
- cat <<EOF > /etc/sysctl.d/k8s.conf
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- vm.swappiness = 0
- EOF
- sysctl -p /etc/sysctl.d/k8s.conf
- ls /proc/sys/net/bridge
- #### 第三部分,参数/源处理 ####
- #安装epel源
- yum install -y epel-release
- yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
- #时区校准
- systemctl enable ntpdate.service
- echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
- crontab /tmp/crontab2.tmp
- systemctl start ntpdate.service
- #添加参数
- echo "* soft nofile 65536" >> /etc/security/limits.conf
- echo "* hard nofile 65536" >> /etc/security/limits.conf
- echo "* soft nproc 65536" >> /etc/security/limits.conf
- echo "* hard nproc 65536" >> /etc/security/limits.conf
- echo "* soft memlock unlimited" >> /etc/security/limits.conf
- echo "* hard memlock unlimited" >> /etc/security/limits.conf
- #添加kubernetes的epel源
- echo '[kubernetes]
- name=Kubernetes
- baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
- enabled=1
- gpgcheck=1
- repo_gpgcheck=1
- gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
- #下载
- sudo yum-config-manager \
- --add-repo \
- https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
- yum makecache fast
- #### 第四部分,开始安装 ####
- yum -y install docker-ce
- yum install --enablerepo="kubernetes" $kubelet $kubeadm $kubectl
- systemctl enable kubelet.service && systemctl start kubelet.service
- systemctl start docker.service && systemctl enable docker.service
- #安装tab快捷键
- yum -y install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
- #创建集群
- kubeadm init --apiserver-advertise-address $node01 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version $version --pod-network-cidr=10.244.0.0/16 >> $key 2>&1
- export KUBECONFIG=/etc/kubernetes/admin.conf
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- docker pull quay.io/coreos/flannel:v0.12.0-amd64
- kubectl apply -f $flannel
- echo "请手动查看 $key 文件里的join命令,将其他节点接入集群"
附录代码二:kube-flannel.yml
- ##############
- ##flannel网络##
- ##############
- ---
- apiVersion: policy/v1beta1
- kind: PodSecurityPolicy
- metadata:
- name: psp.flannel.unprivileged
- annotations:
- seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
- seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
- apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
- apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
- spec:
- privileged: false
- volumes:
- - configMap
- - secret
- - emptyDir
- - hostPath
- allowedHostPaths:
- - pathPrefix: "/etc/cni/net.d"
- - pathPrefix: "/etc/kube-flannel"
- - pathPrefix: "/run/flannel"
- readOnlyRootFilesystem: false
- # Users and groups
- runAsUser:
- rule: RunAsAny
- supplementalGroups:
- rule: RunAsAny
- fsGroup:
- rule: RunAsAny
- # Privilege Escalation
- allowPrivilegeEscalation: false
- defaultAllowPrivilegeEscalation: false
- # Capabilities
- allowedCapabilities: ['NET_ADMIN']
- defaultAddCapabilities: []
- requiredDropCapabilities: []
- # Host namespaces
- hostPID: false
- hostIPC: false
- hostNetwork: true
- hostPorts:
- - min: 0
- max: 65535
- # SELinux
- seLinux:
- # SELinux is unused in CaaSP
- rule: 'RunAsAny'
- ---
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1beta1
- metadata:
- name: flannel
- rules:
- - apiGroups: ['extensions']
- resources: ['podsecuritypolicies']
- verbs: ['use']
- resourceNames: ['psp.flannel.unprivileged']
- - apiGroups:
- - ""
- resources:
- - pods
- verbs:
- - get
- - apiGroups:
- - ""
- resources:
- - nodes
- verbs:
- - list
- - watch
- - apiGroups:
- - ""
- resources:
- - nodes/status
- verbs:
- - patch
- ---
- kind: ClusterRoleBinding
- apiVersion: rbac.authorization.k8s.io/v1beta1
- metadata:
- name: flannel
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: flannel
- subjects:
- - kind: ServiceAccount
- name: flannel
- namespace: kube-system
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: flannel
- namespace: kube-system
- ---
- kind: ConfigMap
- apiVersion: v1
- metadata:
- name: kube-flannel-cfg
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- data:
- cni-conf.json: |
- {
- "name": "cbr0",
- "cniVersion": "0.3.1",
- "plugins": [
- {
- "type": "flannel",
- "delegate": {
- "hairpinMode": true,
- "isDefaultGateway": true
- }
- },
- {
- "type": "portmap",
- "capabilities": {
- "portMappings": true
- }
- }
- ]
- }
- net-conf.json: |
- {
- "Network": "10.244.0.0/16",
- "Backend": {
- "Type": "vxlan"
- }
- }
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: kube-flannel-ds-amd64
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- spec:
- selector:
- matchLabels:
- app: flannel
- template:
- metadata:
- labels:
- tier: node
- app: flannel
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
- - key: kubernetes.io/arch
- operator: In
- values:
- - amd64
- hostNetwork: true
- tolerations:
- - operator: Exists
- effect: NoSchedule
- serviceAccountName: flannel
- initContainers:
- - name: install-cni
- image: quay.io/coreos/flannel:v0.12.0-amd64
- command:
- - cp
- args:
- - -f
- - /etc/kube-flannel/cni-conf.json
- - /etc/cni/net.d/10-flannel.conflist
- volumeMounts:
- - name: cni
- mountPath: /etc/cni/net.d
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.12.0-amd64
- command:
- - /opt/bin/flanneld
- args:
- - --ip-masq
- - --kube-subnet-mgr
- resources:
- requests:
- cpu: "100m"
- memory: "50Mi"
- limits:
- cpu: "100m"
- memory: "50Mi"
- securityContext:
- privileged: false
- capabilities:
- add: ["NET_ADMIN"]
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run/flannel
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- volumes:
- - name: run
- hostPath:
- path: /run/flannel
- - name: cni
- hostPath:
- path: /etc/cni/net.d
- - name: flannel-cfg
- configMap:
- name: kube-flannel-cfg
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: kube-flannel-ds-arm64
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- spec:
- selector:
- matchLabels:
- app: flannel
- template:
- metadata:
- labels:
- tier: node
- app: flannel
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
- - key: kubernetes.io/arch
- operator: In
- values:
- - arm64
- hostNetwork: true
- tolerations:
- - operator: Exists
- effect: NoSchedule
- serviceAccountName: flannel
- initContainers:
- - name: install-cni
- image: quay.io/coreos/flannel:v0.12.0-arm64
- command:
- - cp
- args:
- - -f
- - /etc/kube-flannel/cni-conf.json
- - /etc/cni/net.d/10-flannel.conflist
- volumeMounts:
- - name: cni
- mountPath: /etc/cni/net.d
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.12.0-arm64
- command:
- - /opt/bin/flanneld
- args:
- - --ip-masq
- - --kube-subnet-mgr
- resources:
- requests:
- cpu: "100m"
- memory: "50Mi"
- limits:
- cpu: "100m"
- memory: "50Mi"
- securityContext:
- privileged: false
- capabilities:
- add: ["NET_ADMIN"]
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run/flannel
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- volumes:
- - name: run
- hostPath:
- path: /run/flannel
- - name: cni
- hostPath:
- path: /etc/cni/net.d
- - name: flannel-cfg
- configMap:
- name: kube-flannel-cfg
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: kube-flannel-ds-arm
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- spec:
- selector:
- matchLabels:
- app: flannel
- template:
- metadata:
- labels:
- tier: node
- app: flannel
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
- - key: kubernetes.io/arch
- operator: In
- values:
- - arm
- hostNetwork: true
- tolerations:
- - operator: Exists
- effect: NoSchedule
- serviceAccountName: flannel
- initContainers:
- - name: install-cni
- image: quay.io/coreos/flannel:v0.12.0-arm
- command:
- - cp
- args:
- - -f
- - /etc/kube-flannel/cni-conf.json
- - /etc/cni/net.d/10-flannel.conflist
- volumeMounts:
- - name: cni
- mountPath: /etc/cni/net.d
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.12.0-arm
- command:
- - /opt/bin/flanneld
- args:
- - --ip-masq
- - --kube-subnet-mgr
- resources:
- requests:
- cpu: "100m"
- memory: "50Mi"
- limits:
- cpu: "100m"
- memory: "50Mi"
- securityContext:
- privileged: false
- capabilities:
- add: ["NET_ADMIN"]
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run/flannel
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- volumes:
- - name: run
- hostPath:
- path: /run/flannel
- - name: cni
- hostPath:
- path: /etc/cni/net.d
- - name: flannel-cfg
- configMap:
- name: kube-flannel-cfg
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: kube-flannel-ds-ppc64le
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- spec:
- selector:
- matchLabels:
- app: flannel
- template:
- metadata:
- labels:
- tier: node
- app: flannel
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
- - key: kubernetes.io/arch
- operator: In
- values:
- - ppc64le
- hostNetwork: true
- tolerations:
- - operator: Exists
- effect: NoSchedule
- serviceAccountName: flannel
- initContainers:
- - name: install-cni
- image: quay.io/coreos/flannel:v0.12.0-ppc64le
- command:
- - cp
- args:
- - -f
- - /etc/kube-flannel/cni-conf.json
- - /etc/cni/net.d/10-flannel.conflist
- volumeMounts:
- - name: cni
- mountPath: /etc/cni/net.d
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.12.0-ppc64le
- command:
- - /opt/bin/flanneld
- args:
- - --ip-masq
- - --kube-subnet-mgr
- resources:
- requests:
- cpu: "100m"
- memory: "50Mi"
- limits:
- cpu: "100m"
- memory: "50Mi"
- securityContext:
- privileged: false
- capabilities:
- add: ["NET_ADMIN"]
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run/flannel
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- volumes:
- - name: run
- hostPath:
- path: /run/flannel
- - name: cni
- hostPath:
- path: /etc/cni/net.d
- - name: flannel-cfg
- configMap:
- name: kube-flannel-cfg
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: kube-flannel-ds-s390x
- namespace: kube-system
- labels:
- tier: node
- app: flannel
- spec:
- selector:
- matchLabels:
- app: flannel
- template:
- metadata:
- labels:
- tier: node
- app: flannel
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
- - key: kubernetes.io/arch
- operator: In
- values:
- - s390x
- hostNetwork: true
- tolerations:
- - operator: Exists
- effect: NoSchedule
- serviceAccountName: flannel
- initContainers:
- - name: install-cni
- image: quay.io/coreos/flannel:v0.12.0-s390x
- command:
- - cp
- args:
- - -f
- - /etc/kube-flannel/cni-conf.json
- - /etc/cni/net.d/10-flannel.conflist
- volumeMounts:
- - name: cni
- mountPath: /etc/cni/net.d
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.12.0-s390x
- command:
- - /opt/bin/flanneld
- args:
- - --ip-masq
- - --kube-subnet-mgr
- resources:
- requests:
- cpu: "100m"
- memory: "50Mi"
- limits:
- cpu: "100m"
- memory: "50Mi"
- securityContext:
- privileged: false
- capabilities:
- add: ["NET_ADMIN"]
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run/flannel
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
- volumes:
- - name: run
- hostPath:
- path: /run/flannel
- - name: cni
- hostPath:
- path: /etc/cni/net.d
- - name: flannel-cfg
- configMap:
- name: kube-flannel-cfg
附录代码三:key.txt
- W0526 16:17:20.680490 13760 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
- [init] Using Kubernetes version: v1.18.0
- [preflight] Running pre-flight checks
- [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
- [preflight] Pulling images required for setting up a Kubernetes cluster
- [preflight] This might take a minute or two, depending on the speed of your internet connection
- [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
- [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [kubelet-start] Starting the kubelet
- [certs] Using certificateDir folder "/etc/kubernetes/pki"
- [certs] Generating "ca" certificate and key
- [certs] Generating "apiserver" certificate and key
- [certs] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.10.4]
- [certs] Generating "apiserver-kubelet-client" certificate and key
- [certs] Generating "front-proxy-ca" certificate and key
- [certs] Generating "front-proxy-client" certificate and key
- [certs] Generating "etcd/ca" certificate and key
- [certs] Generating "etcd/server" certificate and key
- [certs] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
- [certs] Generating "etcd/peer" certificate and key
- [certs] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [172.17.10.4 127.0.0.1 ::1]
- [certs] Generating "etcd/healthcheck-client" certificate and key
- [certs] Generating "apiserver-etcd-client" certificate and key
- [certs] Generating "sa" key and public key
- [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
- [kubeconfig] Writing "admin.conf" kubeconfig file
- [kubeconfig] Writing "kubelet.conf" kubeconfig file
- [kubeconfig] Writing "controller-manager.conf" kubeconfig file
- [kubeconfig] Writing "scheduler.conf" kubeconfig file
- [control-plane] Using manifest folder "/etc/kubernetes/manifests"
- [control-plane] Creating static Pod manifest for "kube-apiserver"
- [control-plane] Creating static Pod manifest for "kube-controller-manager"
- W0526 16:18:02.560249 13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
- [control-plane] Creating static Pod manifest for "kube-scheduler"
- W0526 16:18:02.561130 13760 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
- [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
- [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
- [apiclient] All control plane components are healthy after 26.504466 seconds
- [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
- [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
- [upload-certs] Skipping phase. Please see --upload-certs
- [mark-control-plane] Marking the node node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
- [mark-control-plane] Marking the node node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
- [bootstrap-token] Using token: q3uu1o.4rdfkcyzxjhawvk1
- [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
- [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
- [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
- [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
- [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
- [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
- [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
- [addons] Applied essential addon: CoreDNS
- [addons] Applied essential addon: kube-proxy
- Your Kubernetes control-plane has initialized successfully!
- To start using your cluster, you need to run the following as a regular user:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- You should now deploy a pod network to the cluster.
- Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
- https://kubernetes.io/docs/concepts/cluster-administration/addons/
- Then you can join any number of worker nodes by running the following on each as root:
- kubeadm join 172.17.10.4:6443 --token q3uu1o.4rdfkcyzxjhawvk1 \
- --discovery-token-ca-cert-hash sha256:a755d8f56733ba8f9d1951298b200202fce7b84389954bf7a38558fa6ce2a9c9
附录代码四:recommended.yaml
- # Copyright 2017 The Kubernetes Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- ##############
- ##安装dashboard##
- ##############
- apiVersion: v1
- kind: Namespace
- metadata:
- name: kubernetes-dashboard
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- ---
- kind: Service
- apiVersion: v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- spec:
- ports:
- - port: 443
- targetPort: 8443
- selector:
- k8s-app: kubernetes-dashboard
- ---
- apiVersion: v1
- kind: Secret
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard-certs
- namespace: kubernetes-dashboard
- type: Opaque
- ---
- apiVersion: v1
- kind: Secret
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard-csrf
- namespace: kubernetes-dashboard
- type: Opaque
- data:
- csrf: ""
- ---
- apiVersion: v1
- kind: Secret
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard-key-holder
- namespace: kubernetes-dashboard
- type: Opaque
- ---
- kind: ConfigMap
- apiVersion: v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard-settings
- namespace: kubernetes-dashboard
- ---
- kind: Role
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- rules:
- # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- - apiGroups: [""]
- resources: ["secrets"]
- resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
- verbs: ["get", "update", "delete"]
- # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- - apiGroups: [""]
- resources: ["configmaps"]
- resourceNames: ["kubernetes-dashboard-settings"]
- verbs: ["get", "update"]
- # Allow Dashboard to get metrics.
- - apiGroups: [""]
- resources: ["services"]
- resourceNames: ["heapster", "dashboard-metrics-scraper"]
- verbs: ["proxy"]
- - apiGroups: [""]
- resources: ["services/proxy"]
- resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
- verbs: ["get"]
- ---
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- rules:
- # Allow Metrics Scraper to get metrics from the Metrics server
- - apiGroups: ["metrics.k8s.io"]
- resources: ["pods", "nodes"]
- verbs: ["get", "list", "watch"]
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: kubernetes-dashboard
- subjects:
- - kind: ServiceAccount
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: kubernetes-dashboard
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: kubernetes-dashboard
- subjects:
- - kind: ServiceAccount
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- ---
- kind: Deployment
- apiVersion: apps/v1
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- name: kubernetes-dashboard
- namespace: kubernetes-dashboard
- spec:
- replicas: 1
- revisionHistoryLimit: 10
- selector:
- matchLabels:
- k8s-app: kubernetes-dashboard
- template:
- metadata:
- labels:
- k8s-app: kubernetes-dashboard
- spec:
- containers:
- - name: kubernetes-dashboard
- image: kubernetesui/dashboard:v2.2.0
- imagePullPolicy: Always
- ports:
- - containerPort: 8443
- protocol: TCP
- args:
- - --auto-generate-certificates
- - --namespace=kubernetes-dashboard
- # Uncomment the following line to manually specify Kubernetes API server Host
- # If not specified, Dashboard will attempt to auto discover the API server and connect
- # to it. Uncomment only if the default does not work.
- # - --apiserver-host=http://my-address:port
- volumeMounts:
- - name: kubernetes-dashboard-certs
- mountPath: /certs
- # Create on-disk volume to store exec logs
- - mountPath: /tmp
- name: tmp-volume
- livenessProbe:
- httpGet:
- scheme: HTTPS
- path: /
- port: 8443
- initialDelaySeconds: 30
- timeoutSeconds: 30
- securityContext:
- allowPrivilegeEscalation: false
- readOnlyRootFilesystem: true
- runAsUser: 1001
- runAsGroup: 2001
- volumes:
- - name: kubernetes-dashboard-certs
- secret:
- secretName: kubernetes-dashboard-certs
- - name: tmp-volume
- emptyDir: {}
- serviceAccountName: kubernetes-dashboard
- nodeSelector:
- "kubernetes.io/os": linux
- # Comment the following tolerations if Dashboard must not be deployed on master
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- ---
- kind: Service
- apiVersion: v1
- metadata:
- labels:
- k8s-app: dashboard-metrics-scraper
- name: dashboard-metrics-scraper
- namespace: kubernetes-dashboard
- spec:
- ports:
- - port: 8000
- targetPort: 8000
- selector:
- k8s-app: dashboard-metrics-scraper
- ---
- kind: Deployment
- apiVersion: apps/v1
- metadata:
- labels:
- k8s-app: dashboard-metrics-scraper
- name: dashboard-metrics-scraper
- namespace: kubernetes-dashboard
- spec:
- replicas: 1
- revisionHistoryLimit: 10
- selector:
- matchLabels:
- k8s-app: dashboard-metrics-scraper
- template:
- metadata:
- labels:
- k8s-app: dashboard-metrics-scraper
- annotations:
- seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
- spec:
- containers:
- - name: dashboard-metrics-scraper
- image: kubernetesui/metrics-scraper:v1.0.6
- ports:
- - containerPort: 8000
- protocol: TCP
- livenessProbe:
- httpGet:
- scheme: HTTP
- path: /
- port: 8000
- initialDelaySeconds: 30
- timeoutSeconds: 30
- volumeMounts:
- - mountPath: /tmp
- name: tmp-volume
- securityContext:
- allowPrivilegeEscalation: false
- readOnlyRootFilesystem: true
- runAsUser: 1001
- runAsGroup: 2001
- serviceAccountName: kubernetes-dashboard
- nodeSelector:
- "kubernetes.io/os": linux
- # Comment the following tolerations if Dashboard must not be deployed on master
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- volumes:
- - name: tmp-volume
- emptyDir: {}
附录代码五:dashboard-svc-account.yaml
- ##############
- ##配置dashboard管理员账号##
- ##############
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: dashboard-admin
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: dashboard-admin
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: cluster-admin
- subjects:
- - kind: ServiceAccount
- name: dashboard-admin
- namespace: kube-system
附录代码六:kubernetes_node02.sh
#!/bin/bash
##############
##子节点##
##############
#### 第一部分,环境初始化 ####
#k8s版本
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
#集群加入方式
key=/root/key.txt
#部署flannel网络
flannel=/root/kube-flannel.yml
#安装必要依赖
yum -y install vim wget git cmake make gcc gcc-c++ net-tools lrzsz
#### 第二部分,节点配置 ####
#配置节点,主机解析,免密登录
node01=172.17.10.4
node02=172.17.10.7
# node03=192.168.1.30
hostnamectl set-hostname node02
echo '172.17.10.4 node01
172.17.10.7 node02' >> /etc/hosts
ssh-keygen
ssh-copy-id -i $node01
ssh-copy-id -i $node02
# ssh-copy-id -i $node03
scp /etc/hosts node02:/etc/hosts
# scp /etc/hosts node03:/etc/hosts
#关闭防火墙
systemctl stop firewalld
systemctl disable firewalld
#swap分区关闭
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
#关闭SELinux(沙盒)
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
#加载内核模块,打开bridge相关内核参数
modprobe br_netfilter
modprobe ip_vs_rr
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
#### 第三部分,参数/源处理 ####
#安装epel源
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
#时区校准
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
#添加参数
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
#添加kubernetes的epel源
echo '[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/kubernetes.repo
#下载
sudo yum-config-manager \
--add-repo \
https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
#### 第四部分,开始安装 ####
yum -y install docker-ce
yum install --enablerepo="kubernetes" $kubelet $kubeadm $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
systemctl start docker.service && systemctl enable docker.service
#安装tab快捷键
yum -y install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
#创建集群
docker pull quay.io/coreos/flannel:v0.12.0-amd64
echo '请手动查看主节点的key.txt文件,执行其中的join命令将本节点接入集群'
附录代码七:mandatory.yaml
##############
##配置ingress-nginx服务##
##############
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
Ingress: nginx
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --http-port=8080 # 如果你的master服务器已经安装了nginx,这里需要修改下,否则无法启动ingress-nginx服务
- --https-port=8443 # 同上
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
参考文献:
https://blog.csdn.net/qq_37746855/article/details/116173976
https://blog.csdn.net/weixin_46152207/article/details/111355788
https://blog.csdn.net/catcher92/article/details/116207040
https://blog.51cto.com/u_14306186/2523096