Binary Deployment of Kubernetes (Part 3): Core Add-on Deployment

5.1. CNI Network Plugin

Kubernetes defines a network model but leaves the concrete implementation of pod-to-pod communication to CNI network plugins. Common CNI plugins include Flannel, Calico, Canal and Contiv; Flannel and Calico together account for close to 80% of usage, with Flannel slightly ahead of Calico. This deployment uses Flannel as the network plugin. Hosts involved: hdss7-21, hdss7-22.

Without a CNI plugin, pods on different hosts cannot reach each other; Flannel fills that gap. It supports several backend modes:

host-gw mode: requires all hosts to be on the same layer-2 network behind the same gateway; it adds the least overhead and makes the best use of resources.

vxlan mode: encapsulates pod traffic in UDP tunnels between hosts, so the hosts do not need to share a layer-2 segment, at the cost of encapsulation overhead.

Mixed mode: uses host-gw between hosts that share a layer-2 segment and falls back to vxlan for the rest.
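Whichever mode is chosen, flannel reads it from the network config stored in etcd (written in 5.1.3 below); switching modes is just a matter of rewriting that key and restarting flanneld on every node. A sketch of the three variants (the "VxLAN"/"Directrouting" field names for the mixed mode should be double-checked against the flannel v0.11.0 docs):

  # host-gw (used in this deployment)
  /opt/apps/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
  # vxlan
  /opt/apps/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN"}}'
  # mixed: host-gw where hosts share a layer-2 segment, vxlan otherwise
  /opt/apps/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN", "Directrouting": true}}'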

5.1.1. Install Flannel

GitHub releases: https://github.com/coreos/flannel/releases

Hosts involved: hdss7-21, hdss7-22

  [root@hdss7-21 ~]# cd /opt/src/
  [root@hdss7-21 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
  [root@hdss7-21 src]# mkdir /opt/release/flannel-v0.11.0   # the flannel tarball has no top-level directory
  [root@hdss7-21 src]# tar -xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/release/flannel-v0.11.0
  [root@hdss7-21 src]# ln -s /opt/release/flannel-v0.11.0 /opt/apps/flannel
  [root@hdss7-21 src]# ll /opt/apps/flannel
  lrwxrwxrwx 1 root root 28 Jan 9 22:33 /opt/apps/flannel -> /opt/release/flannel-v0.11.0

5.1.2. Copy the Certificates

  # flannel accesses etcd as a client and therefore needs the client certificates
  [root@hdss7-21 src]# mkdir /opt/apps/flannel/certs
  [root@hdss7-200 ~]# cd /opt/certs/
  [root@hdss7-200 certs]# for i in 21 22;do scp ca.pem client-key.pem client.pem hdss7-$i:/opt/apps/flannel/certs/;done

5.1.3. Create the Startup Script

Hosts involved: hdss7-21, hdss7-22

  [root@hdss7-21 src]# vim /opt/apps/flannel/subnet.env   # subnet info; on hdss7-22 change FLANNEL_SUBNET to 172.7.22.1/24
  FLANNEL_NETWORK=172.7.0.0/16
  FLANNEL_SUBNET=172.7.21.1/24
  FLANNEL_MTU=1500
  FLANNEL_IPMASQ=false
  # the network config only needs to be written once, on any one etcd member
  [root@hdss7-21 src]# /opt/apps/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
  [root@hdss7-21 src]# /opt/apps/etcd/etcdctl get /coreos.com/network/config
  {"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
  # public-ip is this host's own IP; iface is the host's outward-facing NIC
  [root@hdss7-21 src]# vim /opt/apps/flannel/flannel-startup.sh
  #!/bin/sh
  WORK_DIR=$(dirname $(readlink -f $0))
  [ $? -eq 0 ] && cd $WORK_DIR || exit
  /opt/apps/flannel/flanneld \
      --public-ip=10.4.7.21 \
      --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
      --etcd-keyfile=./certs/client-key.pem \
      --etcd-certfile=./certs/client.pem \
      --etcd-cafile=./certs/ca.pem \
      --iface=ens32 \
      --subnet-file=./subnet.env \
      --healthz-port=2401
  [root@hdss7-21 src]# chmod u+x /opt/apps/flannel/flannel-startup.sh
  [root@hdss7-21 src]# vim /etc/supervisord.d/flannel.ini
  [program:flanneld-7-21]
  command=/opt/apps/flannel/flannel-startup.sh
  numprocs=1
  directory=/opt/apps/flannel
  autostart=true
  autorestart=true
  startsecs=30
  startretries=3
  exitcodes=0,2
  stopsignal=QUIT
  stopwaitsecs=10
  user=root
  redirect_stderr=true
  stdout_logfile=/data/logs/flanneld/flanneld.stdout.log
  stdout_logfile_maxbytes=64MB
  stdout_logfile_backups=5
  stdout_capture_maxbytes=1MB
  stdout_events_enabled=false
  killasgroup=true
  stopasgroup=true
  [root@hdss7-21 src]# mkdir -p /data/logs/flanneld/
  [root@hdss7-21 src]# supervisorctl update
  flanneld-7-21: added process group
  [root@hdss7-21 src]# supervisorctl status
  etcd-server-7-21 RUNNING pid 1058, uptime -1 day, 16:33:25
  flanneld-7-21 RUNNING pid 13154, uptime 0:00:30
  kube-apiserver-7-21 RUNNING pid 1061, uptime -1 day, 16:33:25
  kube-controller-manager-7-21 RUNNING pid 1068, uptime -1 day, 16:33:25
  kube-kubelet-7-21 RUNNING pid 1052, uptime -1 day, 16:33:25
  kube-proxy-7-21 RUNNING pid 1082, uptime -1 day, 16:33:25
  kube-scheduler-7-21 RUNNING pid 1089, uptime -1 day, 16:33:25
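If host-gw is working, flanneld does nothing more than program a static route on each host that points the peer node's pod subnet at the peer's host IP. A quick, illustrative check:

  [root@hdss7-21 ~]# ip route show | grep 172.7.22
  # expected shape (illustrative): 172.7.22.0/24 via 10.4.7.22 dev ens32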

5.1.4. Verify Cross-Host Pod Access

  [root@hdss7-21 src]# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nginx-ds-7db29 1/1 Running 1 2d 172.7.22.2 hdss7-22.host.com <none> <none>
  nginx-ds-vvsz7 1/1 Running 1 2d 172.7.21.2 hdss7-21.host.com <none> <none>
  [root@hdss7-21 src]# curl -I 172.7.22.2
  HTTP/1.1 200 OK
  Server: nginx/1.17.6
  Date: Thu, 09 Jan 2020 14:55:21 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
  Connection: keep-alive
  ETag: "5dd3e500-264"
  Accept-Ranges: bytes

5.1.5. Preserve Pod Source IPs Across Hosts

Perform the following on every node to optimize the NAT rules, so that pod-to-pod traffic is not masqueraded.

  # When pod a accesses pod b on another host, pod b sees the source address as
  # pod a's host IP instead of pod a's own pod IP
  [root@nginx-ds-jdp7q /]# tail -f /usr/local/nginx/logs/access.log
  10.4.7.22 - - [13/Jan/2020:13:13:39 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"
  10.4.7.22 - - [13/Jan/2020:13:14:27 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"
  10.4.7.22 - - [13/Jan/2020:13:54:20 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
  10.4.7.22 - - [13/Jan/2020:13:54:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
  [root@hdss7-21 ~]# iptables-save |grep POSTROUTING|grep docker   # the rule that causes this
  -A POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE

  [root@hdss7-21 ~]# yum install -y iptables-services
  [root@hdss7-21 ~]# systemctl start iptables.service ; systemctl enable iptables.service
  # rules that need to be handled:
  [root@hdss7-21 ~]# iptables-save |grep POSTROUTING|grep docker
  -A POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
  [root@hdss7-21 ~]# iptables-save | grep -i reject
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
  # how to handle them:
  [root@hdss7-21 ~]# iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
  [root@hdss7-21 ~]# iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
  [root@hdss7-21 ~]# iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
  [root@hdss7-21 ~]# iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited
  [root@hdss7-21 ~]# iptables-save > /etc/sysconfig/iptables
  # note: restart docker once afterwards
  systemctl restart docker

  # after the fix, cross-host access shows the pod's own IP
  [root@nginx-ds-jdp7q /]# tail -f /usr/local/nginx/logs/access.log
  172.7.22.2 - - [13/Jan/2020:14:15:39 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
  172.7.22.2 - - [13/Jan/2020:14:15:47 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
  172.7.22.2 - - [13/Jan/2020:14:15:48 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
  172.7.22.2 - - [13/Jan/2020:14:15:48 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
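The same rule changes must be repeated on every node with that node's own pod subnet. Below is a small helper sketch (a hypothetical script, not part of the original procedure) that applies the SNAT change idempotently; the REJECT rules are left to the manual steps above:

  #!/bin/sh
  # apply-snat-fix.sh -- hypothetical helper; run on each node with its own pod subnet
  POD_SUBNET=172.7.21.0/24      # change to 172.7.22.0/24 on hdss7-22
  CLUSTER_CIDR=172.7.0.0/16
  # drop the default docker MASQUERADE rule if it is present
  iptables -t nat -D POSTROUTING -s ${POD_SUBNET} ! -o docker0 -j MASQUERADE 2>/dev/null
  # re-add it, excluding traffic whose destination is another pod in the cluster
  iptables -t nat -C POSTROUTING -s ${POD_SUBNET} ! -d ${CLUSTER_CIDR} ! -o docker0 -j MASQUERADE 2>/dev/null || \
      iptables -t nat -I POSTROUTING -s ${POD_SUBNET} ! -d ${CLUSTER_CIDR} ! -o docker0 -j MASQUERADE
  iptables-save > /etc/sysconfig/iptables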

5.2. CoreDNS

CoreDNS implements DNS resolution from service names to cluster IPs. It is delivered into the k8s cluster as a container and managed by k8s itself, which reduces manual operational complexity.

In a k8s cluster pod IPs change constantly, so a Service uses a label selector to group a set of pods.

The Service abstracts a cluster-internal network and exposes a relatively stable "cluster IP" as the fixed access point for the service.

CoreDNS is only responsible for maintaining the mapping between service names (Service) and cluster IPs (ClusterIP).
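Resolution reaches CoreDNS because kubelet injects the cluster DNS into every pod's /etc/resolv.conf (assuming kubelet was started with --cluster-dns=192.168.0.2 and --cluster-domain=cluster.local, as in the earlier parts of this series). Roughly, and purely as an illustration:

  [root@hdss7-21 ~]# kubectl exec nginx-ds-vvsz7 -- cat /etc/resolv.conf
  # illustrative content:
  # nameserver 192.168.0.2
  # search default.svc.cluster.local svc.cluster.local cluster.local
  # options ndots:5
  # so the short name "nginx-web" expands to nginx-web.default.svc.cluster.local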

5.2.1. Set Up the YAML Manifest Repository

On hdss7-200, set up a repository of YAML manifests; later the manifest files will be consumed over HTTP.

  • Configure an nginx virtual host (hdss7-200)
  [root@hdss7-200 ~]# vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
  server {
      listen       80;
      server_name  k8s-yaml.od.com;
      location / {
          autoindex on;
          default_type text/plain;
          root /data/k8s-yaml;
      }
  }
  [root@hdss7-200 ~]# mkdir /data/k8s-yaml
  [root@hdss7-200 ~]# nginx -qt && nginx -s reload
  • Configure DNS resolution (hdss7-11)
  [root@hdss7-11 ~]# vim /var/named/od.com.zone
  [root@hdss7-11 ~]# cat /var/named/od.com.zone
  $ORIGIN od.com.
  $TTL 600      ; 10 minutes
  @             IN SOA  dns.od.com. dnsadmin.od.com. (
                        2020011301 ; serial
                        10800      ; refresh (3 hours)
                        900        ; retry (15 minutes)
                        604800     ; expire (1 week)
                        86400      ; minimum (1 day)
                        )
                NS      dns.od.com.
  $TTL 60       ; 1 minute
  dns           A       10.4.7.11
  harbor        A       10.4.7.200
  k8s-yaml      A       10.4.7.200
  [root@hdss7-11 ~]# systemctl restart named
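A quick sanity check that the manifest server is reachable by name (the dig answer should match the A record just added):

  [root@hdss7-21 ~]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
  10.4.7.200
  [root@hdss7-21 ~]# curl -s http://k8s-yaml.od.com/ >/dev/null && echo ok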

5.2.2. CoreDNS Resource Manifests

Store the manifest files on hdss7-200 under /data/k8s-yaml/coredns/coredns_1.6.1/

  • rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      addonmanager.kubernetes.io/mode: Reconcile
    name: system:coredns
  rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      addonmanager.kubernetes.io/mode: EnsureExists
    name: system:coredns
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:coredns
  subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
  • configmap.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: coredns
    namespace: kube-system
  data:
    Corefile: |
      .:53 {
          errors
          log
          health
          ready
          kubernetes cluster.local 192.168.0.0/16
          # upstream DNS
          forward . 10.4.7.11
          cache 30
          loop
          reload
          loadbalance
      }
  • deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      k8s-app: coredns
      kubernetes.io/name: "CoreDNS"
  spec:
    replicas: 1
    selector:
      matchLabels:
        k8s-app: coredns
    template:
      metadata:
        labels:
          k8s-app: coredns
      spec:
        priorityClassName: system-cluster-critical
        serviceAccountName: coredns
        containers:
        - name: coredns
          image: harbor.od.com/public/coredns:v1.6.1
          args:
          - -conf
          - /etc/coredns/Corefile
          volumeMounts:
          - name: config-volume
            mountPath: /etc/coredns
          ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
          - containerPort: 9153
            name: metrics
            protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
        dnsPolicy: Default
        volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
  • service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      k8s-app: coredns
      kubernetes.io/cluster-service: "true"
      kubernetes.io/name: "CoreDNS"
  spec:
    selector:
      k8s-app: coredns
    clusterIP: 192.168.0.2
    ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
    - name: metrics
      port: 9153
      protocol: TCP
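Note that clusterIP: 192.168.0.2 is not arbitrary: it must equal the --cluster-dns value every kubelet was started with. A hedged check (the startup-script path below is an assumption; point it at wherever your kubelet script actually lives):

  [root@hdss7-21 ~]# grep -- '--cluster-dns' /opt/apps/kubernetes/server/bin/kubelet-startup.sh   # path is hypothetical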

5.2.3. Deploy CoreDNS to k8s

  # prepare the image
  [root@hdss7-200 ~]# docker pull coredns/coredns:1.6.1
  [root@hdss7-200 ~]# docker image tag coredns/coredns:1.6.1 harbor.od.com/public/coredns:v1.6.1
  [root@hdss7-200 ~]# docker image push harbor.od.com/public/coredns:v1.6.1

  # deploy coredns
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/coredns_1.6.1/rbac.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/coredns_1.6.1/configmap.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/coredns_1.6.1/deployment.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/coredns_1.6.1/service.yaml
  [root@hdss7-21 ~]# kubectl get all -n kube-system -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  pod/coredns-6b6c4f9648-4vtcl 1/1 Running 0 38s 172.7.21.3 hdss7-21.host.com <none> <none>
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
  service/coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP,9153/TCP 29s k8s-app=coredns
  NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
  deployment.apps/coredns 1/1 1 1 39s coredns harbor.od.com/public/coredns:v1.6.1 k8s-app=coredns
  NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
  replicaset.apps/coredns-6b6c4f9648 1 1 1 39s coredns harbor.od.com/public/coredns:v1.6.1 k8s-app=coredns,pod-template-hash=6b6c4f9648

5.2.4. Test DNS

  # create a service
  [root@hdss7-21 ~]# kubectl create deployment nginx-web --image=harbor.od.com/public/nginx:src_1.14.2
  [root@hdss7-21 ~]# kubectl expose deployment nginx-web --port=80 --target-port=80
  [root@hdss7-21 ~]# kubectl get svc
  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 8d
  nginx-web ClusterIP 192.168.164.230 <none> 80/TCP 8s
  # test DNS; from outside the cluster the FQDN (Fully Qualified Domain Name) must be used
  [root@hdss7-21 ~]# dig -t A nginx-web.default.svc.cluster.local @192.168.0.2 +short   # in-cluster name resolves
  192.168.164.230
  [root@hdss7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short   # external name resolves
  www.a.shifen.com.
  180.101.49.11
  180.101.49.12
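From inside a pod the kubelet-injected search domains make even the short name work; a sketch, assuming the nginx image in the pod ships curl:

  [root@hdss7-21 ~]# kubectl exec nginx-ds-vvsz7 -- curl -sI nginx-web
  # expect an HTTP/1.1 200 OK header block if resolution and routing both work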

5.3. Ingress-Controller

A Service groups a set of pods behind one cluster IP and one service name, giving a stable access point that hides pod IP churn. An Ingress is a layer-7 forwarding policy: traffic matching a given host name or location is forwarded to a particular Service. An Ingress is only a rule, though; Kubernetes ships no built-in proxy that actually enforces it.

An ingress-controller is the proxy that turns Ingress rules into real forwarding; common choices are nginx, traefik and haproxy. For k8s clusters Traefik is recommended here: it performs better than haproxy and picks up configuration changes without reloading the service, which makes it the preferred ingress-controller. GitHub: https://github.com/containous/traefik

The goal is to make services inside the k8s cluster reachable from the outside. The options:

  • NodePort

    With this approach you cannot use kube-proxy's ipvs mode, only the iptables mode.

  • Use Ingress resources

    An Ingress can only schedule and expose layer-7 applications, specifically HTTP and HTTPS.

  • Ingress is one of the standard (core) resource types of the K8S API: essentially a set of rules, keyed on host name and URL path, that forward user requests to a specified Service.

  • It forwards request traffic from outside the cluster to services inside it, which is what "exposing a service" means.

  • An Ingress controller is the component that listens on a socket on behalf of Ingress resources and then routes and schedules traffic according to the Ingress rule-matching mechanism.

  # for troubleshooting a traefik pod later, e.g.:
  kubectl describe pod traefik-ingress-2tr49 -n kube-system
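As a concrete sketch of such a rule (app name and host are hypothetical; real manifests follow in 5.3.1 and 5.4.1), an Ingress simply maps host + path to an existing Service:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: demo-app                          # hypothetical
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: demo.od.com                     # hypothetical host
      http:
        paths:
        - path: /
          backend:
            serviceName: demo-app           # the Service that receives the traffic
            servicePort: 80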

5.3.1. Configure the Traefik Resource Manifests

Store the manifest files on hdss7-200 under /data/k8s-yaml/traefik/traefik_1.7.2

  • rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: traefik-ingress-controller
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: traefik-ingress-controller
  rules:
  - apiGroups:
    - ""
    resources:
    - services
    - endpoints
    - secrets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses
    verbs:
    - get
    - list
    - watch
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: traefik-ingress-controller
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: traefik-ingress-controller
  subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
  • daemonset.yaml
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: traefik-ingress
    namespace: kube-system
    labels:
      k8s-app: traefik-ingress
  spec:
    template:
      metadata:
        labels:
          k8s-app: traefik-ingress
          name: traefik-ingress
      spec:
        serviceAccountName: traefik-ingress-controller
        terminationGracePeriodSeconds: 60
        containers:
        - image: harbor.od.com/public/traefik:v1.7.2
          name: traefik-ingress
          ports:
          - name: controller
            containerPort: 80
            hostPort: 81
          - name: admin-web
            containerPort: 8080
          securityContext:
            capabilities:
              drop:
              - ALL
              add:
              - NET_BIND_SERVICE
          args:
          - --api
          - --kubernetes
          - --logLevel=INFO
          - --insecureskipverify=true
          - --kubernetes.endpoint=https://10.4.7.10:7443
          - --accesslog
          - --accesslog.filepath=/var/log/traefik_access.log
          - --traefiklog
          - --traefiklog.filepath=/var/log/traefik.log
          - --metrics.prometheus
  • service.yaml
  kind: Service
  apiVersion: v1
  metadata:
    name: traefik-ingress-service
    namespace: kube-system
  spec:
    selector:
      k8s-app: traefik-ingress
    ports:
    - protocol: TCP
      port: 80
      name: controller
    - protocol: TCP
      port: 8080
      name: admin-web
  • ingress.yaml
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: traefik-web-ui
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: traefik.od.com
      http:
        paths:
        - path: /
          backend:
            serviceName: traefik-ingress-service
            servicePort: 8080
  • Prepare the image
  [root@hdss7-200 traefik_1.7.2]# docker pull traefik:v1.7.2-alpine
  [root@hdss7-200 traefik_1.7.2]# docker image tag traefik:v1.7.2-alpine harbor.od.com/public/traefik:v1.7.2
  [root@hdss7-200 traefik_1.7.2]# docker push harbor.od.com/public/traefik:v1.7.2

5.3.2. Deploy Traefik to k8s

  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/traefik_1.7.2/rbac.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/traefik_1.7.2/daemonset.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/traefik_1.7.2/service.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/traefik_1.7.2/ingress.yaml

  [root@hdss7-21 ~]# kubectl get pods -n kube-system -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  coredns-6b6c4f9648-4vtcl 1/1 Running 1 24h 172.7.21.3 hdss7-21.host.com <none> <none>
  traefik-ingress-4gm4w 1/1 Running 0 77s 172.7.21.5 hdss7-21.host.com <none> <none>
  traefik-ingress-hwr2j 1/1 Running 0 77s 172.7.22.3 hdss7-22.host.com <none> <none>
  [root@hdss7-21 ~]# kubectl get ds -n kube-system
  NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
  traefik-ingress 2 2 2 2 2 <none> 107s
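Each node should now be answering on hostPort 81 for hosts matched by the Ingress rule; an illustrative check before wiring up the external nginx:

  [root@hdss7-21 ~]# netstat -lntp | grep ':81 '        # or: ss -lntp | grep ':81'
  [root@hdss7-21 ~]# curl -sI -H 'Host: traefik.od.com' http://10.4.7.21:81/ | head -1
  # expect an HTTP response (200, or a redirect toward the dashboard)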

5.3.3. Configure the External nginx Load Balancer

  • Configure nginx L7 forwarding on hdss7-11 and hdss7-12
  [root@hdss7-11 ~]# vim /etc/nginx/conf.d/od.com.conf
  server {
      server_name *.od.com;
      location / {
          proxy_pass http://default_backend_traefik;
          proxy_set_header Host $http_host;
          proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
      }
  }
  upstream default_backend_traefik {
      # put every node into the upstream
      server 10.4.7.21:81    max_fails=3 fail_timeout=10s;
      server 10.4.7.22:81    max_fails=3 fail_timeout=10s;
  }
  [root@hdss7-11 ~]# nginx -tq && nginx -s reload
  • Configure DNS resolution
  [root@hdss7-11 ~]# vim /var/named/od.com.zone
  $ORIGIN od.com.
  $TTL 600      ; 10 minutes
  @             IN SOA  dns.od.com. dnsadmin.od.com. (
                        2020011302 ; serial
                        10800      ; refresh (3 hours)
                        900        ; retry (15 minutes)
                        604800     ; expire (1 week)
                        86400      ; minimum (1 day)
                        )
                NS      dns.od.com.
  $TTL 60       ; 1 minute
  dns           A       10.4.7.11
  harbor        A       10.4.7.200
  k8s-yaml      A       10.4.7.200
  traefik       A       10.4.7.10
  [root@hdss7-11 ~]# systemctl restart named
  • Open the Traefik web UI in a browser: http://traefik.od.com

5.4. Dashboard

5.4.1. Configure Resource Manifests

Store the manifest files on hdss7-200 under /data/k8s-yaml/dashboard/dashboard_1.10.1

  • Prepare the image
  # k8s.gcr.io is unreachable from here, so pull from registry.aliyuncs.com/google_containers instead
  [root@hdss7-200 ~]# docker image pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
  [root@hdss7-200 ~]# docker image tag f9aed6605b81 harbor.od.com/public/kubernetes-dashboard-amd64:v1.10.1
  [root@hdss7-200 ~]# docker image push harbor.od.com/public/kubernetes-dashboard-amd64:v1.10.1
  • rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
      addonmanager.kubernetes.io/mode: Reconcile
    name: kubernetes-dashboard-admin
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard-admin
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      addonmanager.kubernetes.io/mode: Reconcile
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system
  • deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        annotations:
          scheduler.alpha.kubernetes.io/critical-pod: ''
      spec:
        priorityClassName: system-cluster-critical
        containers:
        - name: kubernetes-dashboard
          image: harbor.od.com/public/kubernetes-dashboard-amd64:v1.10.1
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 50m
              memory: 100Mi
          ports:
          - containerPort: 8443
            protocol: TCP
          args:
            # PLATFORM-SPECIFIC ARGS HERE
            - --auto-generate-certificates
          volumeMounts:
          - name: tmp-volume
            mountPath: /tmp
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
        volumes:
        - name: tmp-volume
          emptyDir: {}
        serviceAccountName: kubernetes-dashboard-admin
        tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
  • service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      k8s-app: kubernetes-dashboard
    ports:
    - port: 443
      targetPort: 8443
  • ingress.yaml
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: dashboard.od.com
      http:
        paths:
        - backend:
            serviceName: kubernetes-dashboard
            servicePort: 443

5.4.2. Deploy the Dashboard to k8s

  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/rbac.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/deployment.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/service.yaml
  [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/ingress.yaml
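Verify that the pod, service and ingress were created (output will vary):

  [root@hdss7-21 ~]# kubectl get pods,svc,ingress -n kube-system | grep dashboard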

5.4.3. Configure DNS Resolution

  [root@hdss7-11 ~]# vim /var/named/od.com.zone
  $ORIGIN od.com.
  $TTL 600      ; 10 minutes
  @             IN SOA  dns.od.com. dnsadmin.od.com. (
                        2020011303 ; serial
                        10800      ; refresh (3 hours)
                        900        ; retry (15 minutes)
                        604800     ; expire (1 week)
                        86400      ; minimum (1 day)
                        )
                NS      dns.od.com.
  $TTL 60       ; 1 minute
  dns           A       10.4.7.11
  harbor        A       10.4.7.200
  k8s-yaml      A       10.4.7.200
  traefik       A       10.4.7.10
  dashboard     A       10.4.7.10
  [root@hdss7-11 ~]# systemctl restart named.service

5.4.4. Issue an SSL Certificate

  [root@hdss7-200 ~]# cd /opt/certs/
  [root@hdss7-200 certs]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
  [root@hdss7-200 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops"
  [root@hdss7-200 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
  [root@hdss7-200 certs]# ll dashboard.od.com.*
  -rw-r--r-- 1 root root 1196 Jan 29 20:52 dashboard.od.com.crt
  -rw-r--r-- 1 root root 1005 Jan 29 20:51 dashboard.od.com.csr
  -rw------- 1 root root 1675 Jan 29 20:51 dashboard.od.com.key
  [root@hdss7-200 certs]# for i in 11 12;do ssh hdss7-$i mkdir /etc/nginx/certs/;scp dashboard.od.com.key dashboard.od.com.crt hdss7-$i:/etc/nginx/certs/ ;done
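Optionally confirm the signed certificate's subject, issuer and validity period before handing it to nginx:

  [root@hdss7-200 certs]# openssl x509 -in dashboard.od.com.crt -noout -subject -issuer -dates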

5.4.5. Configure nginx

  # perform on both hdss7-11 and hdss7-12
  [root@hdss7-11 ~]# vim /etc/nginx/conf.d/dashborad.conf
  server {
      listen       80;
      server_name  dashboard.od.com;
      rewrite ^(.*)$ https://${server_name}$1 permanent;
  }
  server {
      listen       443 ssl;
      server_name  dashboard.od.com;
      ssl_certificate "certs/dashboard.od.com.crt";
      ssl_certificate_key "certs/dashboard.od.com.key";
      ssl_session_cache shared:SSL:1m;
      ssl_session_timeout 10m;
      ssl_ciphers HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers on;
      location / {
          proxy_pass http://default_backend_traefik;
          proxy_set_header Host $http_host;
          proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
      }
  }
  [root@hdss7-11 ~]# nginx -t && nginx -s reload

5.4.6. Test Token Login

  [root@hdss7-21 ~]# kubectl get secret -n kube-system|grep kubernetes-dashboard
  kubernetes-dashboard-token-hr5rj kubernetes.io/service-account-token 3 17m
  # the kubernetes-dashboard-admin ServiceAccount's token secret has a random suffix; describe it to read the token
  [root@hdss7-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-bsfv7 -n kube-system|grep token
  4. token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ocjVyaiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZhNzAxZTRmLWVjMGItNDFkNS04NjdmLWY0MGEwYmFkMjFmNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.SDUZEkH_N0B6rjm6bW_jN03F4pHCPafL3uKD2HU0ksM0oenB2425jxvfi16rUbTRCsfcGqYXRrE2x15gpb03fb3jJy-IhnInUnPrw6ZwEdqWagen_Z4tdFhUgCpdjdShHy40ZPfql_iuVKbvv7ASt8w8v13Ar3FxztyDyLScVO3rNEezT7JUqMI4yj5LYQ0IgpSXoH12tlDSTyX8Rk2a_3QlOM_yT5GB_GEZkwIESttQKVr7HXSCrQ2tEdYA4cYO2AbF1NgAo_CVBNNvZLvdDukWiQ_b5zwOiO0cUbbiu46x_p6gjNWzVb7zHNro4gh0Shr4hIhiRQot2DJ-sq94Ag
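Because the secret name carries a random suffix, here is a convenience sketch that prints the admin token directly (paste it into the login page at https://dashboard.od.com):

  [root@hdss7-21 ~]# kubectl -n kube-system get secret \
      $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin-token/{print $1}') \
      -o jsonpath='{.data.token}' | base64 -d ; echo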
