1. Architecture

1.1 Kubernetes architecture

  (architecture diagram omitted)

1.2 Flannel network architecture

  (network diagram omitted)

1.3 Kubernetes workflow

  (workflow diagram omitted)

2. Components

2.1 Master node

  2.1.1 API Server (gateway): exposes the Kubernetes API, handling REST operations and updating objects in etcd. It is the single entry point for creating, reading, updating, and deleting every resource.
    - Only the API Server talks to etcd directly.
    - All other components query or modify cluster state through the API Server.
    - It is the hub for data exchange and communication between the other components.
  2.1.2 Scheduler: handles resource scheduling, assigning Pods to Nodes in the cluster.
    - Watches kube-apiserver for Pods that have not yet been bound to a Node.
    - Assigns a Node to each such Pod according to the scheduling policy.
  2.1.3 Controller Manager: runs all remaining cluster-level control loops and is the automation hub for resource objects. Through the API Server it watches the state of the whole cluster and drives it toward the desired state.
  2.1.4 etcd: stores all persistent cluster state.

2.2 Node

  2.2.1 Kubelet: manages Pods and their containers, images, and volumes, implementing node-level management for the cluster.
  2.2.2 Kube-proxy: provides the network proxy and load balancing that make communication with Services work.
  2.2.3 Docker: the container runtime on each node.

3. Environment

3.1 Node layout

  Hostname     IP           Role    Software
  linux-node1  172.16.1.31  master  apiserver, scheduler, controller-manager, etcd, flanneld
  linux-node2  172.16.1.32  node    kubelet, kube-proxy, etcd, flanneld
  linux-node3  172.16.1.33  node    kubelet, kube-proxy, etcd, flanneld

3.2 Package versions

Package  Download URL
kubernetes-node-linux-amd64.tar.gz https://dl.k8s.io/v1.10.1/kubernetes-node-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz
kubernetes-client-linux-amd64.tar.gz https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz
kubernetes.tar.gz https://dl.k8s.io/v1.10.1/kubernetes.tar.gz
flannel-v0.11.0-linux-amd64.tar.gz https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
cni-plugins-amd64-v0.7.1.tgz https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
etcd-v3.2.18-linux-amd64.tar.gz https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz

4. Kubernetes installation

4.1 Environment initialization

4.1.1 Stop the firewall and SELinux, and disable swap

  systemctl stop firewalld && systemctl disable firewalld
  setenforce 0
  vi /etc/selinux/config
  SELINUX=disabled
  swapoff -a && sysctl -w vm.swappiness=0
  sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

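A quick way to confirm the three changes took effect (a sanity check, not part of the original steps):

  getenforce                 # should print Permissive now, Disabled after a reboot
  free -m | grep -i swap     # the Swap line should read 0 0 0
  grep swap /etc/fstab       # the swap entry should be commented out
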
4.1.2 Configure a domestic Docker yum repository and install Docker

  cd /etc/yum.repos.d/
  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  yum clean all && yum repolist -y
  yum install -y docker-ce
  systemctl start docker

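If the install succeeded, Docker should be running and report its version (an optional check):

  systemctl is-active docker     # prints "active"
  docker version                 # both client and server versions should appear
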
4.1.3 Prepare the deployment directories

  mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
  # scp -r /opt/kubernetes 172.16.1.32:/opt/
  # scp -r /opt/kubernetes 172.16.1.33:/opt/

4.1.4 Add the binaries directory to the PATH environment variable

  vim ~/.bash_profile
  # .bash_profile
  # Get the aliases and functions
  if [ -f ~/.bashrc ]; then
          . ~/.bashrc
  fi
  # User specific environment and startup programs
  PATH=$PATH:$HOME/bin:/opt/kubernetes/bin/
  export PATH

  source ~/.bash_profile
  # scp ~/.bash_profile 172.16.1.32:~/
  # scp ~/.bash_profile 172.16.1.33:~/

4.1.5 Configure kernel parameters (a server reboot is required)

  cat /etc/sysctl.conf
  net.ipv6.conf.all.disable_ipv6 = 1
  net.ipv6.conf.default.disable_ipv6 = 1
  net.ipv6.conf.lo.disable_ipv6 = 1
  vm.swappiness = 0
  net.ipv4.neigh.default.gc_stale_time = 120
  net.ipv4.ip_forward = 1
  # see details in https://help.aliyun.com/knowledge_detail/39428.html
  net.ipv4.conf.all.rp_filter = 0
  net.ipv4.conf.default.rp_filter = 0
  net.ipv4.conf.default.arp_announce = 2
  net.ipv4.conf.lo.arp_announce = 2
  net.ipv4.conf.all.arp_announce = 2
  # see details in https://help.aliyun.com/knowledge_detail/41334.html
  net.ipv4.tcp_max_tw_buckets = 5000
  net.ipv4.tcp_syncookies = 1
  net.ipv4.tcp_max_syn_backlog = 1024
  net.ipv4.tcp_synack_retries = 2
  kernel.sysrq = 1
  # transparent bridging for iptables
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-arptables = 1
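If you prefer not to reboot right away, the settings can be loaded in place; note the net.bridge.* keys need the br_netfilter module loaded first (a convenience sketch, the reboot in the heading remains the authoritative step):

  modprobe br_netfilter
  sysctl -p /etc/sysctl.conf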

4.2 Install the CA certificate tooling (the Kubernetes components encrypt their traffic with TLS certificates)

4.2.1 Install CFSSL

  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
  [root@linux-node1 src]# chmod +x cfssl*
  [root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
  [root@linux-node1 src]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
  [root@linux-node1 src]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl
  # copy the cfssl binaries to the node machines; with more nodes, copy to each of them
  # scp /opt/kubernetes/bin/cfssl* 172.16.1.32:/opt/kubernetes/bin/
  # scp /opt/kubernetes/bin/cfssl* 172.16.1.33:/opt/kubernetes/bin/
4.2.2 Generate the template files

  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# mkdir ssl && cd ssl
  [root@linux-node1 ssl]# cfssl print-defaults config > config.json   # default signing-policy template
  [root@linux-node1 ssl]# cfssl print-defaults csr > csr.json         # default CSR template

4.2.3 Create the JSON config used to generate the CA files

  [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-config.json
  {
    "signing": {
      "default": {
        "expiry": "8760h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ],
          "expiry": "8760h"
        }
      }
    }
  }

4.2.4 Create the JSON config used to generate the CA certificate signing request (CSR)

  [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-csr.json
  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.2.5 Generate the CA certificate (ca.pem) and key (ca-key.pem)

  [root@linux-node1 ~]# cd /usr/local/src/ssl
  [root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca   # initialize the CA; produces ca-key.pem (private key) and ca.pem (certificate)
  [root@linux-node1 ssl]# ls -l ca*
  -rw-r--r--  ca-config.json
  -rw-r--r--  ca.csr
  -rw-r--r--  ca-csr.json
  -rw-------  ca-key.pem
  -rw-r--r--  ca.pem

4.2.6 Distribute the certificates

  [root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
  # scp the certificates to linux-node2 and linux-node3:
  # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.32:/opt/kubernetes/ssl
  # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.33:/opt/kubernetes/ssl
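To double-check what was just issued, cfssl-certinfo (installed in 4.2.1) can decode the certificate; expect CN=kubernetes and the names from ca-csr.json (an optional check, not in the original write-up):

  [root@linux-node1 ssl]# cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem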

4.3 Deploy the etcd cluster

4.3.1 Prepare the etcd package

  [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
  [root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
  [root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
  [root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/
  # scp etcd etcdctl 172.16.1.32:/opt/kubernetes/bin/
  # scp etcd etcdctl 172.16.1.33:/opt/kubernetes/bin/

4.3.2 Create the etcd certificate signing request

  [root@linux-node1 src]# cd /usr/local/src
  [root@linux-node1 src]# vim /usr/local/src/etcd-csr.json
  {
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "172.16.1.31",
      "172.16.1.32",
      "172.16.1.33"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.3.3 Generate the etcd certificate and key

  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
  # the following certificate files are produced
  [root@linux-node1 src]# ls -l etcd*
  -rw-r--r--  etcd.csr
  -rw-r--r--  etcd-csr.json
  -rw-------  etcd-key.pem
  -rw-r--r--  etcd.pem

4.3.4 Move the certificates into /opt/kubernetes/ssl

  [root@linux-node1 src]# cp etcd*.pem /opt/kubernetes/ssl
  # scp etcd*.pem 172.16.1.32:/opt/kubernetes/ssl
  # scp etcd*.pem 172.16.1.33:/opt/kubernetes/ssl
  [root@linux-node1 src]# rm -f etcd.csr etcd-csr.json

4.3.5 Create the etcd configuration file (it must be created by hand)

  # On the other nodes, change ETCD_NAME and the IP addresses to match that node
  [root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf
  #[member]
  ETCD_NAME="etcd-node1"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  #ETCD_SNAPSHOT_COUNTER=""
  #ETCD_HEARTBEAT_INTERVAL=""
  #ETCD_ELECTION_TIMEOUT=""
  ETCD_LISTEN_PEER_URLS="https://172.16.1.31:2380"
  ETCD_LISTEN_CLIENT_URLS="https://172.16.1.31:2379,https://127.0.0.1:2379"
  #ETCD_MAX_SNAPSHOTS=""
  #ETCD_MAX_WALS=""
  #ETCD_CORS=""
  #[cluster]
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.31:2380"
  # if you use different ETCD_NAME (e.g. test),
  # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
  ETCD_INITIAL_CLUSTER="etcd-node1=https://172.16.1.31:2380,etcd-node2=https://172.16.1.32:2380,etcd-node3=https://172.16.1.33:2380"
  ETCD_INITIAL_CLUSTER_STATE="new"
  ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
  ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.31:2379"
  #[security]
  CLIENT_CERT_AUTH="true"
  ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
  ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
  ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
  PEER_CLIENT_CERT_AUTH="true"
  ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
  ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
  ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

4.3.6 Create the etcd systemd service

  [root@linux-node1 ~]# vim /etc/systemd/system/etcd.service
  [Unit]
  Description=Etcd Server
  After=network.target
  [Service]
  Type=notify
  WorkingDirectory=/var/lib/etcd
  EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
  # set GOMAXPROCS to number of processors
  ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
  [Install]
  WantedBy=multi-user.target

4.3.7 Reload systemd and start etcd

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable etcd
  # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.32:/opt/kubernetes/cfg/
  # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.33:/opt/kubernetes/cfg/
  # scp /etc/systemd/system/etcd.service 172.16.1.32:/etc/systemd/system/
  # scp /etc/systemd/system/etcd.service 172.16.1.33:/etc/systemd/system/
  # on every node, create the etcd data directory and start etcd
  [root@linux-node1 ~]# mkdir /var/lib/etcd
  [root@linux-node1 ~]# systemctl start etcd
  [root@linux-node1 ~]# systemctl status etcd

4.3.8 Verify the cluster

  [root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
  member 435fb0a8da627a4c is healthy: got healthy result from https://172.16.1.32:2379
  member 6566e06d7343e1bb is healthy: got healthy result from https://172.16.1.31:2379
  member ce7b884e428b6c8c is healthy: got healthy result from https://172.16.1.33:2379
  cluster is healthy
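The member list can be pulled with the same TLS flags if you also want to see the peer and client URLs (an optional check):

  [root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem member list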

4.4 Master node deployment

4.4.1.1 [Kubernetes API service] Prepare the packages

  [root@linux-node1 ~]# cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz   # requires a proxy to download
  [root@linux-node1 ~]# cd /usr/local/src && tar xf kubernetes-server-linux-amd64.tar.gz
  [root@linux-node1 ~]# cd /usr/local/src/kubernetes
  [root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
  [root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
  [root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/
4.4.1.2 [Kubernetes API service] Create the CSR JSON config

  [root@linux-node1 src]# vim /usr/local/src/ssl/kubernetes-csr.json
  {
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.1.31",
      "10.1.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.4.1.3 [Kubernetes API service] Generate the kubernetes certificate and key

  [root@linux-node1 ssl]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  [root@linux-node1 src]# cp kubernetes*.pem /opt/kubernetes/ssl/
  # scp kubernetes*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp kubernetes*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.4.1.4 [Kubernetes API service] Create the client token file used by kube-apiserver

  [root@linux-node1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  cebfb6641d0845bd61808e2337955ea0
  [root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
  cebfb6641d0845bd61808e2337955ea0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

4.4.1.5 [Kubernetes API service] Create the basic username/password authentication file

  [root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
  admin,admin,1
  readonly,readonly,2

4.4.1.6 [Kubernetes API service] Deploy the Kubernetes API Server (the config sets the NodePort range Services can be exposed on)

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target
  [Service]
  ExecStart=/opt/kubernetes/bin/kube-apiserver \
     --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
     --bind-address=172.16.1.31 \
     --insecure-bind-address=127.0.0.1 \
     --authorization-mode=Node,RBAC \
     --runtime-config=rbac.authorization.k8s.io/v1 \
     --kubelet-https=true \
     --anonymous-auth=false \
     --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
     --enable-bootstrap-token-auth \
     --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
     --service-cluster-ip-range=10.1.0.0/16 \
     --service-node-port-range=20000-40000 \
     --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
     --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
     --client-ca-file=/opt/kubernetes/ssl/ca.pem \
     --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
     --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
     --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
     --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
     --etcd-servers=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 \
     --enable-swagger-ui=true \
     --allow-privileged=true \
     --audit-log-maxage=30 \
     --audit-log-maxbackup=3 \
     --audit-log-maxsize=100 \
     --audit-log-path=/opt/kubernetes/log/api-audit.log \
     --event-ttl=1h \
     --v=2 \
     --logtostderr=false \
     --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  Type=notify
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

4.4.1.7 [Kubernetes API service] Start the API Server

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable kube-apiserver
  [root@linux-node1 ~]# systemctl start kube-apiserver

4.4.1.8 [Kubernetes API service] Check the API Server status

  [root@linux-node1 ~]# systemctl status kube-apiserver

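Because --insecure-bind-address=127.0.0.1 leaves the default insecure port (8080) open locally, a minimal health probe needs no certificates (an optional check, not in the original steps):

  [root@linux-node1 ~]# curl http://127.0.0.1:8080/healthz
  ok
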
4.4.2.1 [Controller Manager] Configure the Controller Manager

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  [Service]
  ExecStart=/opt/kubernetes/bin/kube-controller-manager \
     --address=127.0.0.1 \
     --master=http://127.0.0.1:8080 \
     --allocate-node-cidrs=true \
     --service-cluster-ip-range=10.1.0.0/16 \
     --cluster-cidr=10.2.0.0/16 \
     --cluster-name=kubernetes \
     --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
     --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
     --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
     --root-ca-file=/opt/kubernetes/ssl/ca.pem \
     --leader-elect=true \
     --v=2 \
     --logtostderr=false \
     --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  [Install]
  WantedBy=multi-user.target

4.4.2.2 [Controller Manager] Start the Controller Manager

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 scripts]# systemctl enable kube-controller-manager
  [root@linux-node1 scripts]# systemctl start kube-controller-manager

4.4.2.3 [Controller Manager] Check the service status

  [root@linux-node1 scripts]# systemctl status kube-controller-manager

4.4.3.1 [Kubernetes Scheduler] Configure the scheduler

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  [Service]
  ExecStart=/opt/kubernetes/bin/kube-scheduler \
     --address=127.0.0.1 \
     --master=http://127.0.0.1:8080 \
     --leader-elect=true \
     --v=2 \
     --logtostderr=false \
     --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  [Install]
  WantedBy=multi-user.target

4.4.3.2 [Kubernetes Scheduler] Start the service

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 scripts]# systemctl enable kube-scheduler
  [root@linux-node1 scripts]# systemctl start kube-scheduler
  [root@linux-node1 scripts]# systemctl status kube-scheduler

4.4.3.3 [kubectl] Prepare the binary package

  [root@linux-node1 ~]# cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz   # requires a proxy to download
  [root@linux-node1 ~]# cd /usr/local/src && tar xf kubernetes-client-linux-amd64.tar.gz
  [root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin
  [root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/

4.4.3.4 [kubectl] Create the admin certificate signing request

  [root@linux-node1 ~]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# vim admin-csr.json
  {
    "CN": "admin",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }

4.4.3.5 [kubectl] Generate the admin certificate and key

  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  [root@linux-node1 ssl]# ls -l admin*
  -rw-r--r--  admin.csr
  -rw-r--r--  admin-csr.json
  -rw-------  admin-key.pem
  -rw-r--r--  admin.pem
  [root@linux-node1 ssl]# mv admin*.pem /opt/kubernetes/ssl/

4.4.3.6 [kubectl] Set the cluster parameters

  [root@linux-node1 src]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443
  Cluster "kubernetes" set.

4.4.3.7 [kubectl] Set the client authentication parameters

  [root@linux-node1 src]# kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem
  User "admin" set.

4.4.3.8 [kubectl] Set the context parameters

  [root@linux-node1 src]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
  Context "kubernetes" created.

4.4.3.9 [kubectl] Select the default context

  [root@linux-node1 src]# kubectl config use-context kubernetes
  Switched to context "kubernetes".

4.4.3.10 [kubectl] Use kubectl to check component status

  [root@linux-node1 ~]# kubectl get cs
  NAME                 STATUS    MESSAGE              ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-0               Healthy   {"health":"true"}
  etcd-1               Healthy   {"health":"true"}
  etcd-2               Healthy   {"health":"true"}
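kubectl persists the settings from 4.4.3.6-4.4.3.9 in ~/.kube/config; if anything misbehaves later, the merged view (with embedded certificates redacted) is a quick place to look (optional):

  [root@linux-node1 ~]# kubectl config view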

4.5 Node deployment

4.5.1.1 [kubelet] Prepare the binaries and copy them from linux-node1 to the node machines

  [root@linux-node1 bin]# cd /usr/local/src/kubernetes/server/bin/ && cp kubelet kube-proxy /opt/kubernetes/bin/
  # scp kubelet kube-proxy 172.16.1.32:/opt/kubernetes/bin/
  # scp kubelet kube-proxy 172.16.1.33:/opt/kubernetes/bin/

4.5.1.2 [kubelet] Create the role binding

  [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  clusterrolebinding "kubelet-bootstrap" created

4.5.1.3 [kubelet] Create the kubelet bootstrapping kubeconfig: set the cluster parameters

  [root@linux-node1 ~]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=bootstrap.kubeconfig
  Cluster "kubernetes" set.

4.5.1.4 [kubelet] Set the client authentication parameters

  [root@linux-node1 ~]# kubectl config set-credentials kubelet-bootstrap --token=cebfb6641d0845bd61808e2337955ea0 --kubeconfig=bootstrap.kubeconfig
  User "kubelet-bootstrap" set.

4.5.1.5 [kubelet] Set the context parameters

  [root@linux-node1 ~]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
  Context "default" created.

4.5.1.6 [kubelet] Select the default context

  [root@linux-node1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  Switched to context "default".
  [root@linux-node1 kubernetes]# cp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig /opt/kubernetes/cfg
  # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.32:/opt/kubernetes/cfg
  # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.33:/opt/kubernetes/cfg

4.5.1.7 [kubelet] Set up CNI support

  [root@linux-node1 ~]# mkdir -p /etc/cni/net.d
  [root@linux-node1 ~]# vim /etc/cni/net.d/10-default.conf
  {
    "name": "flannel",
    "type": "flannel",
    "delegate": {
      "bridge": "docker0",
      "isDefaultGateway": true,
      "mtu": 1500
    }
  }
  # scp -r /etc/cni/net.d 172.16.1.32:/etc/cni/
  # scp -r /etc/cni/net.d 172.16.1.33:/etc/cni/

4.5.1.8 [kubelet] Create the kubelet working directory

  [root@linux-node1 ~]# mkdir /var/lib/kubelet
  # scp -r /var/lib/kubelet 172.16.1.32:/var/lib/
  # scp -r /var/lib/kubelet 172.16.1.33:/var/lib/

4.5.1.9 [kubelet] Create the kubelet service unit

  # On each node, change the highlighted values (--address and --hostname-override) to that node's own IP
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service
  [Service]
  WorkingDirectory=/var/lib/kubelet
  ExecStart=/opt/kubernetes/bin/kubelet \
     --address=172.16.1.31 \
     --hostname-override=172.16.1.31 \
     --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
     --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
     --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
     --cert-dir=/opt/kubernetes/ssl \
     --network-plugin=cni \
     --cni-conf-dir=/etc/cni/net.d \
     --cni-bin-dir=/opt/kubernetes/bin/cni \
     --cluster-dns=10.1.0.2 \
     --cluster-domain=cluster.local. \
     --hairpin-mode hairpin-veth \
     --allow-privileged=true \
     --fail-swap-on=false \
     --v=2 \
     --logtostderr=false \
     --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  [Install]
  WantedBy=multi-user.target
  # scp /usr/lib/systemd/system/kubelet.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/kubelet.service 172.16.1.33:/usr/lib/systemd/system/

4.5.1.10 [kubelet] Start the kubelet

  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kubelet
  [root@linux-node2 ~]# systemctl start kubelet
  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kubelet
  [root@linux-node3 ~]# systemctl start kubelet

4.5.1.11 [kubelet] Check the service status

  [root@linux-node2 kubernetes]# systemctl status kubelet

4.5.1.12 [kubelet] Check the CSR requests (run this on linux-node1)

  [root@linux-node1 ~]# kubectl get csr
  NAME                                                   AGE       REQUESTOR           CONDITION
  node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU   1m        kubelet-bootstrap   Pending

4.5.1.13 [kubelet] Approve the kubelet TLS certificate requests

  [root@linux-node1 ~]# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
  certificatesigningrequest.certificates.k8s.io "node-csr-QCgiejwSx_bPgcBLNxHkMHs-lzNAY-bJNgm4skUMqII" approved
  Once that completes, the nodes show up as Ready:
  [root@linux-node1 ssl]# kubectl get node
  NAME          STATUS    ROLES     AGE       VERSION
  172.16.1.32   Ready     <none>    10m       v1.10.1
  172.16.1.33   Ready     <none>    10m       v1.10.1

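If a request lingers in Pending and you want to inspect it before approving, describe it by the name shown above (optional):

  [root@linux-node1 ~]# kubectl describe csr node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU
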
4.5.2.1 [Kubernetes Proxy] Install the LVS tooling kube-proxy needs

  [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack

4.5.2.2 [Kubernetes Proxy] Create the kube-proxy certificate request

  [root@linux-node1 ~]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# vim kube-proxy-csr.json
  {
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.5.2.3 [Kubernetes Proxy] Generate the certificate

  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

4.5.2.4 [Kubernetes Proxy] Distribute the certificates to all Node machines

  [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
  # scp kube-proxy*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp kube-proxy*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.5.2.5 [Kubernetes Proxy] Create the kube-proxy kubeconfig

  [root@linux-node1 ssl]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=kube-proxy.kubeconfig
  Cluster "kubernetes" set.
  [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
  User "kube-proxy" set.
  [root@linux-node1 ssl]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
  Context "default" created.
  [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  Switched to context "default".

4.5.2.6 [Kubernetes Proxy] Distribute the kubeconfig

  [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
  # scp kube-proxy.kubeconfig 172.16.1.32:/opt/kubernetes/cfg/
  # scp kube-proxy.kubeconfig 172.16.1.33:/opt/kubernetes/cfg/

4.5.2.7 [Kubernetes Proxy] Create the kube-proxy service unit

  [root@linux-node1 ~]# mkdir /var/lib/kube-proxy
  # scp -r /var/lib/kube-proxy 172.16.1.32:/var/lib/
  # scp -r /var/lib/kube-proxy 172.16.1.33:/var/lib/
  # On each node, change the highlighted values (--bind-address and --hostname-override) to that node's own IP
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Kube-Proxy Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target
  [Service]
  WorkingDirectory=/var/lib/kube-proxy
  ExecStart=/opt/kubernetes/bin/kube-proxy \
     --bind-address=172.16.1.31 \
     --hostname-override=172.16.1.31 \
     --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
     --masquerade-all \
     --feature-gates=SupportIPVSProxyMode=true \
     --proxy-mode=ipvs \
     --ipvs-min-sync-period=5s \
     --ipvs-sync-period=5s \
     --ipvs-scheduler=rr \
     --v=2 \
     --logtostderr=false \
     --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target
  # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.33:/usr/lib/systemd/system/

4.5.2.8 [Kubernetes Proxy] Start kube-proxy (on every Node)

  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kube-proxy
  [root@linux-node2 ~]# systemctl start kube-proxy
  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kube-proxy
  [root@linux-node3 ~]# systemctl start kube-proxy

4.5.2.9 [Kubernetes Proxy] Check the kube-proxy service status

  [root@linux-node2 scripts]# systemctl status kube-proxy
  Check the LVS state:
  [root@linux-node2 ~]# ipvsadm -L -n
  IP Virtual Server version 1.2.1 (size=4096)
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  10.1.0.1:443 rr persistent 10800
    -> 172.16.1.31:6443             Masq    1      0          0
  If kubelet and kube-proxy are installed on both test machines, the following command checks their state:
  [root@linux-node1 ssl]# kubectl get node
  NAME          STATUS    ROLES     AGE       VERSION
  172.16.1.32   Ready     <none>    22m       v1.10.1
  172.16.1.33   Ready     <none>    3m        v1.10.1
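The 10.1.0.1:443 virtual server listed by ipvsadm is simply the default kubernetes Service; as a cross-check from the API side (optional):

  [root@linux-node1 ~]# kubectl get service kubernetes   # CLUSTER-IP should be 10.1.0.1, port 443/TCP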

4.6 Flannel network deployment

4.6.1 Create the Flannel certificate signing request

  [root@linux-node1 ~]# cd /usr/local/src/ssl
  [root@linux-node1 ssl]# vim flanneld-csr.json
  {
    "CN": "flanneld",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.6.2 Generate the certificate

  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
  [root@linux-node1 ssl]# ls -l flanneld*.pem
  -rw------- 1 root root 1675 Dec 27 18:55 flanneld-key.pem
  -rw-r--r-- 1 root root 1391 Dec 27 18:55 flanneld.pem

4.6.3 Distribute the certificates

  [root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
  # scp flanneld*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp flanneld*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.6.4 Download the Flannel package

  [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
  [root@linux-node1 src]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz
  [root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
  # copy to the other nodes
  # scp flanneld mk-docker-opts.sh 172.16.1.32:/opt/kubernetes/bin/
  # scp flanneld mk-docker-opts.sh 172.16.1.33:/opt/kubernetes/bin/
  # copy the helper script into /opt/kubernetes/bin as well
  [root@linux-node1 ~]# wget https://dl.k8s.io/v1.10.1/kubernetes.tar.gz   # requires a proxy to download
  [root@linux-node1 ~]# tar xf kubernetes.tar.gz -C /usr/local/src/ && cd /usr/local/src/kubernetes/cluster/centos/node/bin/
  [root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
  # scp remove-docker0.sh 172.16.1.32:/opt/kubernetes/bin/
  # scp remove-docker0.sh 172.16.1.33:/opt/kubernetes/bin/

4.6.5 Configure Flannel

  [root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
  FLANNEL_ETCD="-etcd-endpoints=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379"
  FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
  FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
  FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
  FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
  # copy the config to the other nodes
  # scp /opt/kubernetes/cfg/flannel 172.16.1.32:/opt/kubernetes/cfg/
  # scp /opt/kubernetes/cfg/flannel 172.16.1.33:/opt/kubernetes/cfg/

4.6.6 Create the Flannel systemd service

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
  [Unit]
  Description=Flanneld overlay address etcd agent
  After=network.target
  Before=docker.service
  [Service]
  EnvironmentFile=-/opt/kubernetes/cfg/flannel
  ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
  ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
  ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
  Type=notify
  [Install]
  WantedBy=multi-user.target
  RequiredBy=docker.service
  # copy the unit file to the other nodes
  # scp /usr/lib/systemd/system/flannel.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/flannel.service 172.16.1.33:/usr/lib/systemd/system/

4.6.7 [Flannel CNI integration] Download the CNI plugins

  [root@linux-node1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
  [root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
  [root@linux-node1 ~]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
  # scp -r /opt/kubernetes/bin/cni 172.16.1.32:/opt/kubernetes/bin/
  # scp -r /opt/kubernetes/bin/cni 172.16.1.33:/opt/kubernetes/bin/

4.6.8 [Flannel CNI integration] Write the network config key into etcd

  [root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --no-sync -C https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1

4.6.9 [Flannel CNI integration] Start flannel (on all nodes)

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable flannel
  [root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
  [root@linux-node1 ~]# systemctl start flannel

4.6.10 [Flannel CNI integration] Check the service status

  [root@linux-node1 ~]# systemctl status flannel

4.6.11 [Flannel CNI integration] Configure Docker to use Flannel

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
  [Unit]   # in the Unit section, extend After= and add Requires=
  After=network-online.target firewalld.service flannel.service
  Wants=network-online.target
  Requires=flannel.service   # Docker startup depends on the flannel network
  [Service]   # add EnvironmentFile=-/run/flannel/docker
  Type=notify
  EnvironmentFile=-/run/flannel/docker
  ExecStart=/usr/bin/dockerd $DOCKER_OPTS
  # copy the unit to the other two nodes
  # scp /usr/lib/systemd/system/docker.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/docker.service 172.16.1.33:/usr/lib/systemd/system/

4.6.12 [Flannel CNI integration] Restart Docker (on all nodes)

  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl restart docker
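After the restart, flanneld should have written the Docker options file produced by mk-docker-opts.sh, and docker0 should sit inside the 10.2.0.0/16 range (an optional check):

  cat /run/flannel/docker        # the DOCKER_OPTS values passed to dockerd
  ip addr show flannel.1         # the VXLAN interface created by flanneld
  ip addr show docker0           # should now be on a 10.2.x.x subnet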

4.7 CoreDNS deployment

4.7.1 Write the CoreDNS yaml file

  [root@linux-node1 ~]# vim coredns.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      addonmanager.kubernetes.io/mode: Reconcile
    name: system:coredns
  rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      addonmanager.kubernetes.io/mode: EnsureExists
    name: system:coredns
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:coredns
  subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      addonmanager.kubernetes.io/mode: EnsureExists
  data:
    Corefile: |
      .:53 {
          errors
          health
          kubernetes cluster.local. in-addr.arpa ip6.arpa {
              pods insecure
              upstream
              fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          proxy . /etc/resolv.conf
          cache 30
      }
  ---
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      k8s-app: coredns
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
      kubernetes.io/name: "CoreDNS"
  spec:
    replicas: 2
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
    selector:
      matchLabels:
        k8s-app: coredns
    template:
      metadata:
        labels:
          k8s-app: coredns
      spec:
        serviceAccountName: coredns
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - key: "CriticalAddonsOnly"
            operator: "Exists"
        containers:
        - name: coredns
          image: coredns/coredns:1.0.6
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
          - name: config-volume
            mountPath: /etc/coredns
          ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
        dnsPolicy: Default
        volumes:
          - name: config-volume
            configMap:
              name: coredns
              items:
              - key: Corefile
                path: Corefile
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: coredns
    namespace: kube-system
    labels:
      k8s-app: coredns
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
      kubernetes.io/name: "CoreDNS"
  spec:
    selector:
      k8s-app: coredns
    clusterIP: 10.1.0.2
    ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP

4.7.2 Deploy CoreDNS

  [root@linux-node1 ~]# kubectl create -f coredns.yaml

4.7.3 Test that DNS works

  [root@linux-node1 ~]# kubectl run dns-test --rm -it --image=alpine /bin/sh
  If you don't see a command prompt, try pressing enter.
  / # ping www.baidu.com -c 2
  PING www.baidu.com (61.135.169.125): 56 data bytes
  64 bytes from 61.135.169.125: seq=0 time=5.718 ms
  64 bytes from 61.135.169.125: seq=1 time=5.695 ms
  --- www.baidu.com ping statistics ---
  2 packets transmitted, 2 packets received, 0% packet loss
  round-trip min/avg/max = 5.695/5.706/5.718 ms
  / #
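Resolving a Service name from inside the same test pod exercises the in-cluster DNS path as well; treat the exact output as version-dependent:

  / # nslookup kubernetes.default.svc.cluster.local   # should resolve to the kubernetes Service IP, 10.1.0.1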

4.8 Dashboard deployment

4.8.1 Create a directory to hold the dashboard yaml files (any location works)

  [root@linux-node1 ~]# mkdir -p /root/dashboard_yaml_dir

4.8.2 Write admin-user-sa-rbac.yaml

  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/admin-user-sa-rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: admin-user
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: admin-user
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
4.8.3 Write kubernetes-dashboard.yaml

  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/kubernetes-dashboard.yaml
  # Copyright 2017 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  # Configuration to deploy release version of the Dashboard UI compatible with
  # Kubernetes 1.8.
  #
  # Example usage: kubectl create -f <this_file>

  # ------------------- Dashboard Secret ------------------- #

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-certs
    namespace: kube-system
  type: Opaque

  ---
  # ------------------- Dashboard Service Account ------------------- #

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kube-system

  ---
  # ------------------- Dashboard Role & Role Binding ------------------- #

  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: kubernetes-dashboard-minimal
    namespace: kube-system
  rules:
    # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create"]
    # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
    # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics from heapster.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
    verbs: ["get"]

  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: kubernetes-dashboard-minimal
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: kubernetes-dashboard-minimal
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

  ---
  # ------------------- Dashboard Deployment ------------------- #

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
      spec:
        containers:
        - name: kubernetes-dashboard
          #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
          image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
          ports:
          - containerPort: 8443
            protocol: TCP
          args:
            - --auto-generate-certificates
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
          - name: kubernetes-dashboard-certs
            mountPath: /certs
            # Create on-disk volume to store exec logs
          - mountPath: /tmp
            name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
        volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
        serviceAccountName: kubernetes-dashboard
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

  ---
  # ------------------- Dashboard Service ------------------- #

  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 443
      targetPort: 8443
    selector:
      k8s-app: kubernetes-dashboard
4.8.4 Write ui-admin-rbac.yaml

  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-admin-rbac.yaml
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: ui-admin
  rules:
  - apiGroups:
    - ""
    resources:
    - services
    - services/proxy
    verbs:
    - '*'
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: ui-admin-binding
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: ui-admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: admin
4.8.5 Write ui-read-rbac.yaml

  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-read-rbac.yaml
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: ui-read
  rules:
  - apiGroups:
    - ""
    resources:
    - services
    - services/proxy
    verbs:
    - get
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: ui-read-binding
    namespace: kube-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: ui-read
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: readonly
4.8.6 Create the Dashboard

  [root@linux-node1 ~]# kubectl create -f /root/dashboard_yaml_dir/
  [root@linux-node1 ~]# kubectl cluster-info
  Kubernetes master is running at https://172.16.1.31:6443
  kubernetes-dashboard is running at https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
  To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
4.8.7 Access the Dashboard

  https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
  Username: admin, password: admin; then choose the Token login mode.

4.8.8 Get the Token

  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
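Since the Service is declared type NodePort, kube-proxy also publishes the dashboard on an auto-assigned port from the 20000-40000 range on every node; to find which one was assigned (an optional check):

  [root@linux-node1 ~]# kubectl -n kube-system get service kubernetes-dashboard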
