This writeup mainly follows https://github.com/opsnull/follow-me-install-kubernetes-cluster, using Flannel and Docker.

System information

| Role | OS | CPU cores | Memory | Hostname | IP | Components |
| --- | --- | --- | --- | --- | --- | --- |
| master | 18.04.1-Ubuntu | 4 | 8G | master | 192.168.0.107 | kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flanneld |
| slave | 18.04.1-Ubuntu | 4 | 4G | slave | 192.168.0.114 | docker, flanneld, kubelet, kube-proxy, coredns |

k8s & Docker versions

| Software | Version |
| --- | --- |
| k8s | 1.17.2 |
| etcd | v3.3.18 |
| coredns | 1.6.6 (Docker image) |
| Flannel | v0.11.0 |
| docker | 18.09 |

Pre-installation preparation (run on both the master and the slave node)

  1. Disable swap

    1. sudo swapoff -a
    2. sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  2. Configure the apt package sources

    Create a file named system.list under /etc/apt/sources.list.d/ with the following content:

    1. deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted
    2. deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted
    3. deb http://mirrors.aliyun.com/ubuntu/ bionic universe
    4. deb http://mirrors.aliyun.com/ubuntu/ bionic-updates universe
    5. deb http://mirrors.aliyun.com/ubuntu/ bionic multiverse
    6. deb http://mirrors.aliyun.com/ubuntu/ bionic-updates multiverse
    7. deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse

    Then run:

    1. sudo apt-get update
  3. Create the working directories

    1. mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert
  4. Append /opt/k8s/bin to $PATH

    1. echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
    2. source /root/.bashrc
  5. Install the SSH service and allow root login

    1. apt install openssh-server
    2. # Edit /etc/ssh/sshd_config, add PermitRootLogin yes below the line #PermitRootLogin prohibit-password, then restart the ssh service
    3. systemctl restart ssh.service
  6. Install dependency packages

    1. apt install -y ipvsadm ipset curl jq
  7. Configure hostname resolution in /etc/hosts

    1. cat >> /etc/hosts <<EOF
    2. 192.168.0.107 master
    3. 192.168.0.114 slave
    4. EOF
  8. Set up the SSH trust relationship between nodes (only needs to run on the master node)

    1. ssh-keygen -t rsa
    2. ssh-copy-id root@192.168.0.114
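As a quick sanity check (my own addition, assuming the IPs above), confirm from the master that passwordless root SSH to the slave works and that swap is really off:

    # should print "slave" followed by an empty swap list
    ssh root@192.168.0.114 "hostname && swapon --show"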

Create the CA root certificate and key (run on the master node)

  1. Install the cfssl toolset

    1. cd /opt/k8s/work
    2. wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64
    3. cp cfssl_1.4.1_linux_amd64 /opt/k8s/bin/cfssl
    4. wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
    5. cp cfssljson_1.4.1_linux_amd64 /opt/k8s/bin/cfssljson
    6. wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64
    7. cp cfssl-certinfo_1.4.1_linux_amd64 /opt/k8s/bin/cfssl-certinfo
    8. chmod +x /opt/k8s/bin/*
  2. Create the CA configuration file

    1. cd /opt/k8s/work
    2. cat > ca-config.json <<EOF
    3. {
    4. "signing": {
    5. "default": {
    6. "expiry": "87600h"
    7. },
    8. "profiles": {
    9. "kubernetes": {
    10. "usages": [
    11. "signing",
    12. "key encipherment",
    13. "server auth",
    14. "client auth"
    15. ],
    16. "expiry": "87600h"
    17. }
    18. }
    19. }
    20. }
    21. EOF
    • signing: this certificate can be used to sign other certificates (CA=TRUE in the generated ca.pem);
    • server auth: clients can use this CA to verify certificates presented by servers;
    • client auth: servers can use this CA to verify certificates presented by clients;
    • expiry: "87600h" sets the certificate validity to 10 years;
  3. Create the certificate signing request file

    1. cd /opt/k8s/work
    2. cat > ca-csr.json <<EOF
    3. {
    4. "CN": "kubernetes",
    5. "key": {
    6. "algo": "rsa",
    7. "size": 2048
    8. },
    9. "names": [
    10. {
    11. "C": "CN",
    12. "ST": "NanJing",
    13. "L": "NanJing",
    14. "O": "k8s",
    15. "OU": "system"
    16. }
    17. ],
    18. "ca": {
    19. "expiry": "87600h"
    20. }
    21. }
    22. EOF
  4. Generate the CA certificate

    1. cd /opt/k8s/work
    2. cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    3. ls ca*
  5. Install and distribute the certificate

    1. cd /opt/k8s/work
    2. cp ca*.pem ca-config.json /etc/kubernetes/cert
    3. # distribute to the slave node
    4. export node_ip=192.168.0.114
    5. scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert/
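To double-check the CA that was just generated, it can be inspected with the cfssl-certinfo tool installed earlier (a minimal sketch; openssl x509 -text would work just as well):

    # the subject should show common name "kubernetes", organization "k8s", and a roughly 10-year validity
    cfssl-certinfo -cert /etc/kubernetes/cert/ca.pem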

Deploy etcd (run on the master node)

  1. Download and unpack etcd

    1. cd /opt/k8s/work
    2. wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
    3. tar -xvf etcd-v3.3.18-linux-amd64.tar.gz
  2. Install etcd

    1. cd /opt/k8s/work
    2. cp etcd-v3.3.18-linux-amd64/etcd* /opt/k8s/bin/
    3. chmod +x /opt/k8s/bin/*
  3. Create the etcd certificate and private key

    1. Create the certificate signing request file


      1. cd /opt/k8s/work
      2. cat > etcd-csr.json <<EOF
      3. {
      4. "CN": "etcd",
      5. "hosts": [
      6. "127.0.0.1",
      7. "192.168.0.107"
      8. ],
      9. "key": {
      10. "algo": "rsa",
      11. "size": 2048
      12. },
      13. "names": [
      14. {
      15. "C": "CN",
      16. "ST": "NanJing",
      17. "L": "NanJing",
      18. "O": "k8s",
      19. "OU": "system"
      20. }
      21. ]
      22. }
      23. EOF
      • hosts lists the etcd node IPs that are authorized to use this certificate
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
      6. ls etcd*pem
    3. Install the certificate

      1. cd /opt/k8s/work
      2. cp etcd*.pem /etc/etcd/cert/
  4. Create the etcd systemd unit file

    1. cat> /etc/systemd/system/etcd.service<< EOF
    2. [Unit]
    3. Description=Etcd Server
    4. After=network.target
    5. After=network-online.target
    6. Wants=network-online.target
    7. Documentation=https://github.com/coreos
    8. [Service]
    9. Type=notify
    10. WorkingDirectory=/data/k8s/etcd/data
    11. ExecStart=/opt/k8s/bin/etcd \\
    12. --data-dir=/etc/etcd/cfg/etcd \\
    13. --name=etcd-chengf \\
    14. --cert-file=/etc/etcd/cert/etcd.pem \\
    15. --key-file=/etc/etcd/cert/etcd-key.pem \\
    16. --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
    17. --peer-cert-file=/etc/etcd/cert/etcd.pem \\
    18. --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
    19. --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
    20. --peer-client-cert-auth \\
    21. --client-cert-auth \\
    22. --listen-peer-urls=https://192.168.0.107:2380 \\
    23. --initial-advertise-peer-urls=https://192.168.0.107:2380 \\
    24. --listen-client-urls=https://192.168.0.107:2379,http://127.0.0.1:2379 \\
    25. --advertise-client-urls=https://192.168.0.107:2379 \\
    26. --initial-cluster-token=etcd-cluster-0\\
    27. --initial-cluster=etcd-chengf=https://192.168.0.107:2380 \\
    28. --initial-cluster-state=new \\
    29. --auto-compaction-mode=periodic \\
    30. --auto-compaction-retention=1 \\
    31. --max-request-bytes=33554432 \\
    32. --quota-backend-bytes=6442450944 \\
    33. --heartbeat-interval=250 \\
    34. --election-timeout=2000
    35. Restart=on-failure
    36. RestartSec=5
    37. LimitNOFILE=65536
    38. [Install]
    39. WantedBy=multi-user.target
    40. EOF
    • WorkingDirectory, --data-dir: the working directory and data directory; create them before starting the service;
    • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
    • --cert-file, --key-file: certificate and private key used by the etcd server when talking to clients;
    • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify client certificates;
    • --peer-cert-file, --peer-key-file: certificate and private key used for etcd peer communication;
    • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify peer certificates;
  5. Create the etcd data directory

    1. mkdir -p /data/k8s/etcd/data
  6. Start the etcd service

    1. systemctl enable etcd && systemctl start etcd
  7. Check the startup result

    1. systemctl status etcd|grep Active
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u etcd
  8. Verify the service status

    1. export ETCD_ENDPOINTS=https://192.168.0.107:2379
    2. etcdctl \
    3. --endpoints=${ETCD_ENDPOINTS} \
    4. --ca-file=/etc/kubernetes/cert/ca.pem \
    5. --cert-file=/etc/etcd/cert/etcd.pem \
    6. --key-file=/etc/etcd/cert/etcd-key.pem cluster-health
    1. etcdctl \
    2. --endpoints=${ETCD_ENDPOINTS} \
    3. --ca-file=/etc/kubernetes/cert/ca.pem \
    4. --cert-file=/etc/etcd/cert/etcd.pem \
    5. --key-file=/etc/etcd/cert/etcd-key.pem member list

    Sample output

    root@master:/opt/k8s/work# etcdctl --endpoints=${ETCD_ENDPOINTS} --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/etcd/cert/etcd.pem --key-file=/etc/etcd/cert/etcd-key.pem cluster-health
    member c0d3b56a9878e38f is healthy: got healthy result from https://192.168.0.107:2379
    cluster is healthy
    root@master:/opt/k8s/work# etcdctl --endpoints=${ETCD_ENDPOINTS} --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/etcd/cert/etcd.pem --key-file=/etc/etcd/cert/etcd-key.pem member list
    c0d3b56a9878e38f: name=etcd-chengf peerURLs=https://192.168.0.107:2380 clientURLs=https://192.168.0.107:2379 isLeader=true
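The checks above go through the etcdctl v2 API (--ca-file/--cert-file/--key-file and cluster-health). Since Kubernetes 1.17 stores its data via the etcd v3 API, it may also be worth probing the v3 endpoint; a minimal sketch, assuming the same certificates:

    # expected output: "https://192.168.0.107:2379 is healthy: successfully committed proposal: ..."
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
      --endpoints=https://192.168.0.107:2379 \
      --cacert=/etc/kubernetes/cert/ca.pem \
      --cert=/etc/etcd/cert/etcd.pem \
      --key=/etc/etcd/cert/etcd-key.pem \
      endpoint health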

Deploy the flannel network (run on the master node)

The kubelet component depends on the docker service, and docker's network needs flannel to configure the IP address of the docker0 bridge, so the flannel network component must be installed first.

flannel uses VXLAN to create a Pod network that is routable between nodes; it uses UDP port 8472 (this port must be open, e.g. on public clouds such as AWS).

When flanneld starts for the first time, it reads the configured Pod network from etcd, allocates an unused subnet for the local node, and then creates the flannel.1 network interface (the name may differ, e.g. flannel1).

flannel writes the Pod subnet assigned to this node into the /run/flannel/docker file; docker later uses the environment variables in this file to configure the docker0 bridge, so that all Pod containers on the node get their IPs from this subnet.

  1. Download and install the flanneld binaries


    1. cd /opt/k8s/work
    2. mkdir flannel
    3. wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    4. tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
    5. cp flannel/{flanneld,mk-docker-opts.sh} /opt/k8s/bin/
    6. export node_ip=192.168.0.114
    7. scp flannel/{flanneld,mk-docker-opts.sh} root@${node_ip}:/opt/k8s/bin/
  2. Create the flanneld certificate and private key

    flanneld reads and writes subnet allocation data in the etcd cluster, and etcd has mutual x509 certificate authentication enabled, so a certificate and private key must be generated for flanneld.

    1. Create the certificate signing request

      1. cd /opt/k8s/work
      2. cat > flanneld-csr.json <<EOF
      3. {
      4. "CN": "flanneld",
      5. "hosts": [],
      6. "key": {
      7. "algo": "rsa",
      8. "size": 2048
      9. },
      10. "names": [
      11. {
      12. "C": "CN",
      13. "ST": "NanJing",
      14. "L": "NanJing",
      15. "O": "k8s",
      16. "OU": "system"
      17. }
      18. ]
      19. }
      20. EOF
    2. Generate the certificate and private key

      1. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      2. -ca-key=/opt/k8s/work/ca-key.pem \
      3. -config=/opt/k8s/work/ca-config.json \
      4. -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
      5. ls flanneld*pem
    3. Distribute the generated certificate and private key to all nodes

      1. cd /opt/k8s/work
      2. mkdir -p /etc/flanneld/cert
      3. cp flanneld*.pem /etc/flanneld/cert
      4. export node_ip=192.168.0.114
      5. ssh root@${node_ip} "mkdir -p /etc/flanneld/cert"
      6. scp flanneld*.pem root@${node_ip}:/etc/flanneld/cert
  3. Write the cluster Pod network configuration into etcd

    1. cd /opt/k8s/work
    2. export FLANNEL_ETCD_PREFIX="/kubernetes/network"
    3. export ETCD_ENDPOINTS="https://192.168.0.107:2379"
    4. etcdctl \
    5. --endpoints=${ETCD_ENDPOINTS} \
    6. --ca-file=/opt/k8s/work/ca.pem \
    7. --cert-file=/opt/k8s/work/flanneld.pem \
    8. --key-file=/opt/k8s/work/flanneld-key.pem \
    9. mk ${FLANNEL_ETCD_PREFIX}/config '{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
    • The prefix length of the Pod Network being written (e.g. /16) must be smaller than the SubnetLen value (e.g. 24), so that each node can be given its own per-node subnet out of the larger network
  4. Create the flanneld systemd unit file


    1. cd /opt/k8s/work
    2. export FLANNEL_ETCD_PREFIX="/kubernetes/network"
    3. export ETCD_ENDPOINTS="https://192.168.0.107:2379"
    4. cat > flanneld.service << EOF
    5. [Unit]
    6. Description=Flanneld overlay address etcd agent
    7. After=network.target
    8. After=network-online.target
    9. Wants=network-online.target
    10. After=etcd.service
    11. Before=docker.service
    12. [Service]
    13. Type=notify
    14. ExecStart=/opt/k8s/bin/flanneld \\
    15. -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
    16. -etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
    17. -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
    18. -etcd-endpoints=${ETCD_ENDPOINTS} \\
    19. -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
    20. -ip-masq
    21. ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    22. Restart=always
    23. RestartSec=5
    24. StartLimitInterval=0
    25. [Install]
    26. WantedBy=multi-user.target
    27. RequiredBy=docker.service
    28. EOF
    • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into the /run/flannel/docker file (the -d option); when docker starts later it uses the environment variables in this file to configure the docker0 bridge. The -k option controls the variable name written into the file, which docker consumes below;
    • flanneld talks to other nodes through the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), use -iface to pick the interface;
    • -ip-masq: flanneld sets up SNAT rules for traffic leaving the Pod network and sets the --ip-masq variable passed to Docker (in /run/flannel/docker) to false, so Docker no longer creates its own SNAT rules. Docker's own SNAT rules (when its --ip-masq is true) are rather blunt: they SNAT every request originating from local Pods that is not destined to the docker0 interface, so requests to Pods on other nodes get their source IP rewritten to the flannel.1 address and the destination Pod cannot see the real source Pod IP. The SNAT rules created by flanneld are gentler and only apply to traffic leaving the Pod network.
  5. Distribute the flanneld unit file

    1. cd /opt/k8s/work
    2. cp flanneld.service /etc/systemd/system/
    3. export node_ip=192.168.0.114
    4. scp flanneld.service root@${node_ip}:/etc/systemd/system/
  6. Start the flanneld service

    1. systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld
    2. ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  7. Check the startup result

    1. systemctl status flanneld|grep Active
    2. export node_ip=192.168.0.114
    3. ssh root@${node_ip} "systemctl status flanneld|grep Active"
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u flanneld
  8. Check the Pod network information written to etcd

    1. export FLANNEL_ETCD_PREFIX="/kubernetes/network"
    2. export ETCD_ENDPOINTS="https://192.168.0.107:2379"
    3. etcdctl \
    4. --endpoints=${ETCD_ENDPOINTS} \
    5. --ca-file=/etc/kubernetes/cert/ca.pem \
    6. --cert-file=/etc/flanneld/cert/flanneld.pem \
    7. --key-file=/etc/flanneld/cert/flanneld-key.pem \
    8. get ${FLANNEL_ETCD_PREFIX}/config

    Sample output

    1. {"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
  9. List the allocated Pod subnets

    1. export FLANNEL_ETCD_PREFIX="/kubernetes/network"
    2. export ETCD_ENDPOINTS="https://192.168.0.107:2379"
    3. etcdctl \
    4. --endpoints=${ETCD_ENDPOINTS} \
    5. --ca-file=/etc/kubernetes/cert/ca.pem \
    6. --cert-file=/etc/flanneld/cert/flanneld.pem \
    7. --key-file=/etc/flanneld/cert/flanneld-key.pem \
    8. ls ${FLANNEL_ETCD_PREFIX}/subnets

    Sample output

    1. /kubernetes/network/subnets/172.30.22.0-24
    2. /kubernetes/network/subnets/172.30.78.0-24
  10. Check the flannel network information on the node

    1. root@master:/opt/k8s/work# ip addr show
    2. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    3. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    4. inet 127.0.0.1/8 scope host lo
    5. valid_lft forever preferred_lft forever
    6. inet6 ::1/128 scope host
    7. valid_lft forever preferred_lft forever
    8. 2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    9. link/ether 04:92:26:13:92:2b brd ff:ff:ff:ff:ff:ff
    10. 3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    11. link/ether d0:c5:d3:57:73:01 brd ff:ff:ff:ff:ff:ff
    12. inet 192.168.0.107/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp3s0
    13. valid_lft 6385sec preferred_lft 6385sec
    14. inet6 fe80::1fda:e90a:207a:67e4/64 scope link noprefixroute
    15. valid_lft forever preferred_lft forever
    16. 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    17. link/ether 12:cb:66:43:de:36 brd ff:ff:ff:ff:ff:ff
    18. inet 172.30.22.0/32 scope global flannel.1
    19. valid_lft forever preferred_lft forever
    20. inet6 fe80::10cb:66ff:fe43:de36/64 scope link
    21. valid_lft forever preferred_lft forever
    22. root@master:/opt/k8s/work# ip route show |grep flannel.1
    23. 172.30.78.0/24 via 172.30.78.0 dev flannel.1 onlink
  11. Verify that nodes can reach each other over the Pod network

    1. root@master:/opt/k8s/work# ip addr show flannel.1 |grep -w inet
    2. inet 172.30.22.0/32 scope global flannel.1
    3. root@master:/opt/k8s/work# ssh 192.168.0.114 "/sbin/ip addr show flannel.1|grep -w inet"
    4. inet 172.30.78.0/32 scope global flannel.1
    5. root@master:/opt/k8s/work# ping -c 1 172.30.78.0
    6. PING 172.30.78.0 (172.30.78.0) 56(84) bytes of data.
    7. 64 bytes from 172.30.78.0: icmp_seq=1 ttl=64 time=80.7 ms
    8. --- 172.30.78.0 ping statistics ---
    9. 1 packets transmitted, 1 received, 0% packet loss, time 0ms
    10. rtt min/avg/max/mdev = 80.707/80.707/80.707/0.000 ms
    11. root@master:/opt/k8s/work# ssh 192.168.0.114 "ping -c 1 172.30.22.0"
    12. PING 172.30.22.0 (172.30.22.0) 56(84) bytes of data.
    13. 64 bytes from 172.30.22.0: icmp_seq=1 ttl=64 time=4.09 ms
    14. --- 172.30.22.0 ping statistics ---
    15. 1 packets transmitted, 1 received, 0% packet loss, time 0ms
    16. rtt min/avg/max/mdev = 4.094/4.094/4.094/0.000 ms
  12. Generated files

    1. root@master:/opt/k8s/work# cat /run/flannel/subnet.env
    2. FLANNEL_NETWORK=172.30.0.0/16
    3. FLANNEL_SUBNET=172.30.22.1/24
    4. FLANNEL_MTU=1450
    5. FLANNEL_IPMASQ=true
    6. root@master:/opt/k8s/work# cat /run/flannel/docker
    7. DOCKER_OPT_BIP="--bip=172.30.22.1/24"
    8. DOCKER_OPT_IPMASQ="--ip-masq=false"
    9. DOCKER_OPT_MTU="--mtu=1450"
    10. DOCKER_NETWORK_OPTIONS=" --bip=172.30.22.1/24 --ip-masq=false --mtu=1450"
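To dig a bit further into the VXLAN setup, the iproute2 tools can show the tunnel parameters and the forwarding entries flanneld programmed (my own addition, not part of the original steps):

    # shows the vxlan details of the flannel interface (VNI, UDP port 8472, the underlay device)
    ip -d link show flannel.1
    # shows the forwarding entry pointing at the other node's VTEP
    bridge fdb show dev flannel.1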

Deploy the docker service (run on the master node)

  1. Download the docker binaries

    1. cd /opt/k8s/work
    2. wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
    3. tar -xvf docker-18.09.6.tgz
  2. Distribute the binaries to all worker nodes

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp docker/* root@${node_ip}:/opt/k8s/bin/
    4. ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  3. Create the docker systemd unit file

    1. cd /opt/k8s/work
    2. cat > docker.service <<"EOF"
    3. [Unit]
    4. Description=Docker Application Container Engine
    5. Documentation=http://docs.docker.io
    6. [Service]
    7. WorkingDirectory=/data/k8s/docker
    8. Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
    9. EnvironmentFile=-/run/flannel/docker
    10. ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
    11. ExecReload=/bin/kill -s HUP $MAINPID
    12. Restart=on-failure
    13. RestartSec=5
    14. LimitNOFILE=infinity
    15. LimitNPROC=infinity
    16. LimitCORE=infinity
    17. Delegate=yes
    18. KillMode=process
    19. [Install]
    20. WantedBy=multi-user.target
    21. EOF
    • The EOF marker is quoted so that bash does not expand variables such as $DOCKER_NETWORK_OPTIONS inside the here-document (systemd is responsible for substituting these environment variables);

    • dockerd invokes other docker binaries at runtime, such as docker-proxy, so the directory containing the docker commands must be added to the PATH environment variable;

    • When flanneld starts it writes the network configuration into /run/flannel/docker; before dockerd starts it reads the DOCKER_NETWORK_OPTIONS environment variable from that file and uses it to set the docker0 bridge subnet;

    • Starting with version 1.13, docker may set the default policy of the iptables FORWARD chain to DROP, which breaks pings to Pod IPs on other nodes. If you hit this, manually set the policy back to ACCEPT:

      1. export node_ip=192.168.0.114
      2. ssh root@${node_ip} "/sbin/iptables -P FORWARD ACCEPT"
  4. Distribute the docker.service file to all worker machines:

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp docker.service root@${node_ip}:/etc/systemd/system/
  5. Create and distribute the docker configuration file

    Use domestic (Chinese) registry mirrors to speed up image pulls, and raise the download concurrency (a dockerd restart is required for this to take effect):

    1. cd /opt/k8s/work
    2. cat > docker-daemon.json <<EOF
    3. {
    4. "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    5. "max-concurrent-downloads": 20,
    6. "live-restore": true,
    7. "max-concurrent-uploads": 10,
    8. "data-root": "/data/k8s/docker/data",
    9. "log-opts": {
    10. "max-size": "100m",
    11. "max-file": "5"
    12. }
    13. }
    14. EOF
  6. Distribute the docker configuration file to all worker nodes:

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. ssh root@${node_ip} "mkdir -p /etc/docker/ /data/k8s/docker/data"
    4. scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
  7. Start the docker service

    1. export node_ip=192.168.0.114
    2. ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  8. Check the service status

    1. export node_ip=192.168.0.114
    2. ssh root@${node_ip} "systemctl status docker|grep Active"
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u docker
  9. Check the docker0 bridge

    1. export node_ip=192.168.0.114
    2. ssh root@${node_ip} "/sbin/ip addr show flannel.1 && /sbin/ip addr show docker0"
    • Confirm that on each worker node the docker0 bridge and the flannel.1 interface have IPs in the same subnet

      Sample output

      1. export node_ip=192.168.0.114
      2. root@master:/opt/k8s/work# ssh root@${node_ip} "/sbin/ip addr show flannel.1 && /sbin/ip addr show docker0"
      3. 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
      4. link/ether f2:fc:0f:7e:98:e4 brd ff:ff:ff:ff:ff:ff
      5. inet 172.30.78.0/32 scope global flannel.1
      6. valid_lft forever preferred_lft forever
      7. inet6 fe80::f0fc:fff:fe7e:98e4/64 scope link
      8. valid_lft forever preferred_lft forever
      9. 5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
      10. link/ether 02:42:fd:1f:8f:d8 brd ff:ff:ff:ff:ff:ff
      11. inet 172.30.78.1/24 brd 172.30.78.255 scope global docker0
      12. valid_lft forever preferred_lft forever
    • Note: if the services were installed in the wrong order or the machine has a more complicated history (i.e. docker was installed before flanneld), the docker0 bridge and the flannel.1 interface on a worker node may end up in different subnets. In that case stop the docker service, delete the docker0 interface by hand, and restart docker to fix it:

      1. systemctl stop docker
      2. ip link delete docker0
      3. systemctl start docker
  10. View docker's status information

    1. root@slave:/opt/k8s/work# docker info
    2. Containers: 0
    3. Running: 0
    4. Paused: 0
    5. Stopped: 0
    6. Images: 0
    7. Server Version: 18.09.6
    8. Storage Driver: overlay2
    9. Backing Filesystem: extfs
    10. Supports d_type: true
    11. Native Overlay Diff: true
    12. Logging Driver: json-file
    13. Cgroup Driver: cgroupfs
    14. Plugins:
    15. Volume: local
    16. Network: bridge host macvlan null overlay
    17. Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
    18. Swarm: inactive
    19. Runtimes: runc
    20. Default Runtime: runc
    21. Init Binary: docker-init
    22. containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
    23. runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
    24. init version: fec3683
    25. Security Options:
    26. apparmor
    27. seccomp
    28. Profile: default
    29. Kernel Version: 5.0.0-23-generic
    30. Operating System: Ubuntu 18.04.3 LTS
    31. OSType: linux
    32. Architecture: x86_64
    33. CPUs: 4
    34. Total Memory: 3.741GiB
    35. Name: slave
    36. ID: IDMG:7A6F:UNTP:IWVM:ZBK5:VHJ4:STC5:UXZX:HQT6:UUNE:YDOC:I27L
    37. Docker Root Dir: /data/k8s/docker/data
    38. Debug Mode (client): false
    39. Debug Mode (server): false
    40. Registry: https://index.docker.io/v1/
    41. Labels:
    42. Experimental: false
    43. Insecure Registries:
    44. 127.0.0.0/8
    45. Registry Mirrors:
    46. https://docker.mirrors.ustc.edu.cn/
    47. https://hub-mirror.c.163.com/
    48. Live Restore Enabled: true
    49. Product License: Community Engine
    50. WARNING: No swap limit support
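As an optional smoke test (my own addition, assuming the registry mirrors above are reachable), start a throwaway container on the worker and check that it is given an address from the flannel-assigned docker0 subnet:

    export node_ip=192.168.0.114
    # eth0 inside the container should get an IP from 172.30.78.0/24 on this node
    ssh root@${node_ip} "docker run --rm busybox ip -4 addr show eth0"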

Deploy the master components (run on the master node)

  1. Download the release binaries

    1. cd /opt/k8s/work
    2. wget https://dl.k8s.io/v1.17.2/kubernetes-server-linux-amd64.tar.gz # currently not directly downloadable from mainland China; a proxy is required
    3. tar -xzvf kubernetes-server-linux-amd64.tar.gz
  2. Install the k8s binaries

    1. cd /opt/k8s/work
    2. cp kubernetes/server/bin/{apiextensions-apiserver,kubeadm,kube-apiserver,kube-controller-manager,kubectl,kubelet,kube-proxy,kube-scheduler,mounter} /opt/k8s/bin/
    3. # distribute kubelet and kube-proxy to the worker node
    4. export node_ip=192.168.0.114
    5. scp kubernetes/server/bin/{kubelet,kube-proxy} root@${node_ip}:/opt/k8s/bin/
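A quick way to confirm the binaries are in place and on $PATH (a small check of my own, not in the original write-up):

    kubectl version --client --short
    kube-apiserver --version
    export node_ip=192.168.0.114
    ssh root@${node_ip} "/opt/k8s/bin/kubelet --version"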

Configure kubectl

kubectl communicates securely with kube-apiserver over https; kube-apiserver authenticates and authorizes the certificate carried in kubectl's requests.

kubectl will be used for cluster administration, so an admin certificate with the highest privileges is created here.

  1. Create the admin certificate and private key

    1. Create the certificate signing request file


      1. cd /opt/k8s/work
      2. cat > admin-csr.json <<EOF
      3. {
      4. "CN": "admin",
      5. "hosts": [],
      6. "key": {
      7. "algo": "rsa",
      8. "size": 2048
      9. },
      10. "names": [
      11. {
      12. "C": "CN",
      13. "ST": "NanJing",
      14. "L": "NanJing",
      15. "O": "system:masters",
      16. "OU": "system"
      17. }
      18. ]
      19. }
      20. EOF
      • O: system:masters: when kube-apiserver receives a request using this certificate, it adds the group identity system:masters to the request;
      • the predefined ClusterRoleBinding cluster-admin binds the group system:masters to the ClusterRole cluster-admin, which grants the highest privileges needed to operate the cluster;
      • this certificate is only used by kubectl as a client certificate, so the hosts field is empty;
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes admin-csr.json | cfssljson -bare admin
      6. ls admin*
    3. Install the certificate

      1. cd /opt/k8s/work
      2. cp admin*.pem /etc/kubernetes/cert
  2. Create the kubeconfig file

    1. cd /opt/k8s/work
    2. export KUBE_APISERVER=https://192.168.0.107:6443
    3. # set the cluster parameters
    4. kubectl config set-cluster kubernetes \
    5. --certificate-authority=/etc/kubernetes/cert/ca.pem \
    6. --embed-certs=true \
    7. --server=${KUBE_APISERVER} \
    8. --kubeconfig=kubectl.kubeconfig
    9. # set the client credentials
    10. kubectl config set-credentials admin \
    11. --client-certificate=/etc/kubernetes/cert/admin.pem \
    12. --client-key=/etc/kubernetes/cert/admin-key.pem \
    13. --embed-certs=true \
    14. --kubeconfig=kubectl.kubeconfig
    15. # set the context
    16. kubectl config set-context kubernetes \
    17. --cluster=kubernetes \
    18. --user=admin \
    19. --kubeconfig=kubectl.kubeconfig
    20. # set the default context
    21. kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
    • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
    • --client-certificate, --client-key: the admin certificate and private key just generated, used for the https connection to kube-apiserver;
    • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file;
    • --server: the address of kube-apiserver;
  3. Distribute the kubeconfig file (any other user who wants to access kubernetes also needs this file copied into their own home directory)

    1. cd /opt/k8s/work
    2. mkdir -p ~/.kube
    3. cp kubectl.kubeconfig ~/.kube/config
  4. Configure kubectl shell completion

    1. root@master:/opt/k8s/work# apt install -y bash-completion
    2. root@master:/opt/k8s/work# locate bash_completion /usr/share/bash-completion/bash_completion
    3. root@master:/opt/k8s/work# source /usr/share/bash-completion/bash_completion
    4. root@master:/opt/k8s/work# source <(kubectl completion bash)
    5. root@master:/opt/k8s/work# echo 'source <(kubectl completion bash)' >>~/.bashrc
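kube-apiserver is not running yet at this point, so kubectl cluster-info would still fail; the generated kubeconfig can nevertheless be sanity-checked offline (a minimal sketch):

    # the server should be https://192.168.0.107:6443 and the current context should be "kubernetes"
    kubectl config view
    kubectl config current-context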

Configure kube-apiserver

  1. Create the kubernetes-api certificate and private key

    1. Create the certificate signing request file


      1. cd /opt/k8s/work
      2. cat > kubernetes-csr.json <<EOF
      3. {
      4. "CN": "kubernetes-api",
      5. "hosts": [
      6. "127.0.0.1",
      7. "192.168.0.107",
      8. "10.254.0.1",
      9. "kubernetes",
      10. "kubernetes.default",
      11. "kubernetes.default.svc",
      12. "kubernetes.default.svc.cluster",
      13. "kubernetes.default.svc.cluster.local."
      14. ],
      15. "key": {
      16. "algo": "rsa",
      17. "size": 2048
      18. },
      19. "names": [
      20. {
      21. "C": "CN",
      22. "ST": "NanJing",
      23. "L": "NanJing",
      24. "O": "k8s",
      25. "OU": "system"
      26. }
      27. ]
      28. }
      29. EOF
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
      6. ls kubernetes*
    3. Install the certificate

      1. cd /opt/k8s/work
      2. cp kubernetes*.pem /etc/kubernetes/cert/
  2. Create the kube-apiserver systemd unit file

    1. export ETCD_ENDPOINTS="https://192.168.0.107:2379"
    2. export SERVICE_CIDR="10.254.0.0/16"
    3. export NODE_PORT_RANGE=80-60000
    4. cat > /etc/systemd/system/kube-apiserver.service <<EOF
    5. [Unit]
    6. Description=Kubernetes API Server
    7. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    8. After=network.target
    9. [Service]
    10. WorkingDirectory=/data/k8s/k8s/kube-apiserver
    11. ExecStart=/opt/k8s/bin/kube-apiserver \\
    12. --advertise-address=192.168.0.107 \\
    13. --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
    14. --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
    15. --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
    16. --etcd-servers=${ETCD_ENDPOINTS} \\
    17. --bind-address=192.168.0.107 \\
    18. --secure-port=6443 \\
    19. --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
    20. --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
    21. --audit-log-maxage=15 \\
    22. --audit-log-maxbackup=3 \\
    23. --audit-log-maxsize=100 \\
    24. --audit-log-truncate-enabled \\
    25. --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \\
    26. --profiling \\
    27. --anonymous-auth=false \\
    28. --client-ca-file=/etc/kubernetes/cert/ca.pem \\
    29. --enable-bootstrap-token-auth \\
    30. --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \\
    31. --authorization-mode=Node,RBAC \\
    32. --runtime-config=api/all=true \\
    33. --allow-privileged=true \\
    34. --event-ttl=168h \\
    35. --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
    36. --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
    37. --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
    38. --kubelet-https=true \\
    39. --kubelet-timeout=10s \\
    40. --service-cluster-ip-range=${SERVICE_CIDR} \\
    41. --service-node-port-range=${NODE_PORT_RANGE} \\
    42. --logtostderr=true \\
    43. --v=2
    44. Restart=on-failure
    45. RestartSec=10
    46. Type=notify
    47. LimitNOFILE=65536
    48. [Install]
    49. WantedBy=multi-user.target
    50. EOF
  3. Create the kube-apiserver working directory

    1. mkdir -p /data/k8s/k8s/kube-apiserver
  4. Start the kube-apiserver service

    1. systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
  5. Check the startup result

    1. systemctl status kube-apiserver |grep Active
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u kube-apiserver
  6. Check that kube-apiserver is running properly

    1. root@master:/opt/k8s/work# kubectl cluster-info
    2. Kubernetes master is running at https://192.168.0.107:6443
    3. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    4. root@master:/opt/k8s/work# kubectl get all --all-namespaces
    5. NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    6. default service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 2m30s
    7. root@master:/opt/k8s/work# kubectl get componentstatuses
    8. NAME STATUS MESSAGE ERROR
    9. scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
    10. controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    11. etcd-0 Healthy {"health":"true"}
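The Unhealthy entries for scheduler and controller-manager are expected at this stage, because those components have not been deployed yet. As an extra check of my own, the secure healthz endpoint can also be queried directly with the admin client certificate:

    # should print "ok"
    curl -s --cacert /etc/kubernetes/cert/ca.pem \
      --cert /etc/kubernetes/cert/admin.pem \
      --key /etc/kubernetes/cert/admin-key.pem \
      https://192.168.0.107:6443/healthz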

Configure kube-controller-manager

  1. Create the kube-controller-manager certificate and private key

    1. Create the certificate signing request file

      1. cd /opt/k8s/work
      2. cat > kube-controller-manager-csr.json <<EOF
      3. {
      4. "CN": "system:kube-controller-manager",
      5. "key": {
      6. "algo": "rsa",
      7. "size": 2048
      8. },
      9. "hosts": [
      10. "127.0.0.1",
      11. "192.168.0.107"
      12. ],
      13. "names": [
      14. {
      15. "C": "CN",
      16. "ST": "NanJing",
      17. "L": "NanJing",
      18. "O": "system:kube-controller-manager",
      19. "OU": "system"
      20. }
      21. ]
      22. }
      23. EOF
      • Both CN and O are system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
      6. ls kube-controller-manager*pem
    3. Install the certificate

      1. cd /opt/k8s/work
      2. cp kube-controller-manager*.pem /etc/kubernetes/cert/
  2. Create the kubeconfig file

    • kube-controller-manager uses this file to access the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-controller-manager certificate, and so on
    1. cd /opt/k8s/work
    2. export KUBE_APISERVER=https://192.168.0.107:6443
    3. kubectl config set-cluster kubernetes \
    4. --certificate-authority=/opt/k8s/work/ca.pem \
    5. --embed-certs=true \
    6. --server="${KUBE_APISERVER}" \
    7. --kubeconfig=kube-controller-manager.kubeconfig
    8. kubectl config set-credentials system:kube-controller-manager \
    9. --client-certificate=kube-controller-manager.pem \
    10. --client-key=kube-controller-manager-key.pem \
    11. --embed-certs=true \
    12. --kubeconfig=kube-controller-manager.kubeconfig
    13. kubectl config set-context system:kube-controller-manager \
    14. --cluster=kubernetes \
    15. --user=system:kube-controller-manager \
    16. --kubeconfig=kube-controller-manager.kubeconfig
    17. kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  3. Distribute the kubeconfig

    1. cd /opt/k8s/work
    2. cp kube-controller-manager.kubeconfig /etc/kubernetes/kube-controller-manager.kubeconfig
  4. Create the kube-controller-manager systemd unit file

    1. export SERVICE_CIDR="10.254.0.0/16"
    2. cat > /etc/systemd/system/kube-controller-manager.service <<EOF
    3. [Unit]
    4. Description=Kubernetes Controller Manager
    5. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    6. [Service]
    7. WorkingDirectory=/data/k8s/k8s/kube-controller-manager
    8. ExecStart=/opt/k8s/bin/kube-controller-manager \\
    9. --profiling \\
    10. --cluster-name=kubernetes \\
    11. --kube-api-qps=1000 \\
    12. --kube-api-burst=2000 \\
    13. --leader-elect \\
    14. --use-service-account-credentials\\
    15. --concurrent-service-syncs=2 \\
    16. --bind-address=192.168.0.107 \\
    17. --secure-port=10252 \\
    18. --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
    19. --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
    20. --port=0 \\
    21. --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
    22. --client-ca-file=/etc/kubernetes/cert/ca.pem \\
    23. --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
    24. --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
    25. --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
    26. --experimental-cluster-signing-duration=87600h \\
    27. --horizontal-pod-autoscaler-sync-period=10s \\
    28. --concurrent-deployment-syncs=10 \\
    29. --concurrent-gc-syncs=30 \\
    30. --node-cidr-mask-size=24 \\
    31. --service-cluster-ip-range=${SERVICE_CIDR} \\
    32. --pod-eviction-timeout=6m \\
    33. --terminated-pod-gc-threshold=10000 \\
    34. --root-ca-file=/etc/kubernetes/cert/ca.pem \\
    35. --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
    36. --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
    37. --logtostderr=true \\
    38. --v=2
    39. Restart=on-failure
    40. RestartSec=5
    41. [Install]
    42. WantedBy=multi-user.target
    43. EOF
  5. Create the kube-controller-manager working directory

    1. mkdir -p /data/k8s/k8s/kube-controller-manager
  6. Start the kube-controller-manager service

    1. systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
  7. Check the startup result

    1. systemctl status kube-controller-manager |grep Active
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u kube-controller-manager
  8. Check that kube-controller-manager is running properly

    1. root@master:/opt/k8s/work# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
    2. apiVersion: v1
    3. kind: Endpoints
    4. metadata:
    5. annotations:
    6. control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master_6e2dfb91-8eaa-42d0-ba83-be669b99801f","leaseDurationSeconds":15,"acquireTime":"2020-02-09T13:37:08Z","renewTime":"2020-02-09T13:38:02Z","leaderTransitions":0}'
    7. creationTimestamp: "2020-02-09T13:37:08Z"
    8. name: kube-controller-manager
    9. namespace: kube-system
    10. resourceVersion: "888"
    11. selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
    12. uid: 5aa2c4a1-5ded-4870-900e-63dfd212c912
    13. root@master:/opt/k8s/work# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.0.107:10252/healthz
    14. ok
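The endpoints object above shows that leader election works, and the healthz check returns ok. To double-check which port the controller manager is actually serving on (10252 here, because the unit file sets --secure-port=10252 and --port=0), a quick look at the listening sockets helps (my own addition):

    # expect a LISTEN socket on 192.168.0.107:10252 owned by the kube-controller-manager process
    netstat -lnpt | grep kube-controll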

Configure kube-scheduler

  1. Create the kube-scheduler certificate and private key

    1. Create the certificate signing request file

      1. cd /opt/k8s/work
      2. cat > kube-scheduler-csr.json <<EOF
      3. {
      4. "CN": "system:kube-scheduler",
      5. "key": {
      6. "algo": "rsa",
      7. "size": 2048
      8. },
      9. "hosts": [
      10. "127.0.0.1",
      11. "192.168.0.107"
      12. ],
      13. "names": [
      14. {
      15. "C": "CN",
      16. "ST": "NanJing",
      17. "L": "NanJing",
      18. "O": "system:kube-scheduler",
      19. "OU": "system"
      20. }
      21. ]
      22. }
      23. EOF
      • Both CN and O are system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
      6. ls kube-scheduler*pem
    3. Install the certificate

      1. cd /opt/k8s/work
      2. cp kube-scheduler*.pem /etc/kubernetes/cert/
  2. Create the kubeconfig file

    • kube-scheduler uses this file to access the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-scheduler certificate, and so on
    1. cd /opt/k8s/work
    2. export KUBE_APISERVER=https://192.168.0.107:6443
    3. kubectl config set-cluster kubernetes \
    4. --certificate-authority=/opt/k8s/work/ca.pem \
    5. --embed-certs=true \
    6. --server="${KUBE_APISERVER}" \
    7. --kubeconfig=kube-scheduler.kubeconfig
    8. kubectl config set-credentials system:kube-scheduler \
    9. --client-certificate=kube-scheduler.pem \
    10. --client-key=kube-scheduler-key.pem \
    11. --embed-certs=true \
    12. --kubeconfig=kube-scheduler.kubeconfig
    13. kubectl config set-context system:kube-scheduler \
    14. --cluster=kubernetes \
    15. --user=system:kube-scheduler \
    16. --kubeconfig=kube-scheduler.kubeconfig
    17. kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  3. Distribute the kubeconfig

    1. cd /opt/k8s/work
    2. cp kube-scheduler.kubeconfig /etc/kubernetes/kube-scheduler.kubeconfig
  4. Create the kube-scheduler configuration file

    1. cd /opt/k8s/work
    2. cat >kube-scheduler.yaml <<EOF
    3. apiVersion: kubescheduler.config.k8s.io/v1alpha1
    4. kind: KubeSchedulerConfiguration
    5. bindTimeoutSeconds: 600
    6. clientConnection:
    7. burst: 200
    8. kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
    9. qps: 100
    10. enableContentionProfiling: false
    11. enableProfiling: true
    12. hardPodAffinitySymmetricWeight: 1
    13. healthzBindAddress: 192.168.0.107:10251
    14. leaderElection:
    15. leaderElect: true
    16. metricsBindAddress: 192.168.0.107:10251
    17. EOF
    18. cp kube-scheduler.yaml /etc/kubernetes/kube-scheduler.yaml
  5. Create the kube-scheduler systemd unit file

    1. cat > /etc/systemd/system/kube-scheduler.service <<EOF
    2. [Unit]
    3. Description=Kubernetes Scheduler
    4. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    5. [Service]
    6. WorkingDirectory=/data/k8s/k8s/kube-scheduler
    7. ExecStart=/opt/k8s/bin/kube-scheduler \\
    8. --config=/etc/kubernetes/kube-scheduler.yaml \\
    9. --bind-address=192.168.0.107 \\
    10. --secure-port=10259 \\
    11. --port=0 \\
    12. --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
    13. --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
    14. --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
    15. --client-ca-file=/etc/kubernetes/cert/ca.pem \\
    16. --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
    17. --logtostderr=true \\
    18. --v=2
    19. Restart=always
    20. RestartSec=5
    21. StartLimitInterval=0
    22. [Install]
    23. WantedBy=multi-user.target
    24. EOF
  6. Create the kube-scheduler working directory

    1. mkdir -p /data/k8s/k8s/kube-scheduler
  7. Start the kube-scheduler service

    1. systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
  8. Check the startup result

    1. systemctl status kube-scheduler |grep Active
    • Make sure the status is active (running); otherwise check the logs to find out why

    • If anything goes wrong, inspect it with

      1. journalctl -u kube-scheduler
  9. Check that kube-scheduler is running properly

    1. root@master:/opt/k8s/work# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
    2. apiVersion: v1
    3. kind: Endpoints
    4. metadata:
    5. annotations:
    6. control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master_383054c4-58d8-4c24-a766-551a92492219","leaseDurationSeconds":15,"acquireTime":"2020-02-10T02:17:40Z","renewTime":"2020-02-10T02:18:09Z","leaderTransitions":0}'
    7. creationTimestamp: "2020-02-10T02:17:41Z"
    8. name: kube-scheduler
    9. namespace: kube-system
    10. resourceVersion: "50203"
    11. selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
    12. uid: 39821272-40a1-4b3a-95bd-a4f09af09231
    13. root@master:/opt/k8s/work# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.0.107:10259/healthz
    14. ok
    15. root@master:/opt/k8s/work# curl http://192.168.0.107:10251/healthz
    16. ok

Deploy the worker node (run on the master node)

Configure kubelet

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.

When kubelet starts it automatically registers the node with kube-apiserver; the built-in cadvisor collects and reports the node's resource usage.

For security, this deployment disables kubelet's insecure http port and authenticates and authorizes every request, rejecting unauthorized access (e.g. requests claiming to come from apiserver or heapster).

  1. Create the kubelet bootstrap kubeconfig file


    1. cd /opt/k8s/work
    2. export KUBE_APISERVER=https://192.168.0.107:6443
    3. export node_name=slave
    4. export BOOTSTRAP_TOKEN=$(kubeadm token create \
    5. --description kubelet-bootstrap-token \
    6. --groups system:bootstrappers:${node_name} \
    7. --kubeconfig ~/.kube/config)
    8. # set the cluster parameters
    9. kubectl config set-cluster kubernetes \
    10. --certificate-authority=/etc/kubernetes/cert/ca.pem \
    11. --embed-certs=true \
    12. --server=${KUBE_APISERVER} \
    13. --kubeconfig=kubelet-bootstrap.kubeconfig
    14. # set the client credentials
    15. kubectl config set-credentials kubelet-bootstrap \
    16. --token=${BOOTSTRAP_TOKEN} \
    17. --kubeconfig=kubelet-bootstrap.kubeconfig
    18. # set the context
    19. kubectl config set-context default \
    20. --cluster=kubernetes \
    21. --user=kubelet-bootstrap \
    22. --kubeconfig=kubelet-bootstrap.kubeconfig
    23. # set the default context
    24. kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
    • What gets written into this kubeconfig is the token; after bootstrapping finishes, kube-controller-manager issues the client and server certificates for kubelet
    • When kube-apiserver receives kubelet's bootstrap token it sets the request's user to system:bootstrap:<token-id> and the group to system:bootstrappers; a ClusterRoleBinding for this group is created later
  2. Distribute the bootstrap kubeconfig file to all worker nodes

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  3. Create and distribute the kubelet configuration file

    Since v1.10, some kubelet parameters must be set in a configuration file; kubelet --help points this out

    1. cd /opt/k8s/work
    2. export CLUSTER_CIDR="172.30.0.0/16"
    3. export NODE_IP=192.168.0.114
    4. export CLUSTER_DNS_SVC_IP="10.254.0.2"
    5. cat > kubelet-config.yaml <<EOF
    6. kind: KubeletConfiguration
    7. apiVersion: kubelet.config.k8s.io/v1beta1
    8. address: ${NODE_IP}
    9. staticPodPath: "/etc/kubernetes/manifests"
    10. syncFrequency: 1m
    11. fileCheckFrequency: 20s
    12. httpCheckFrequency: 20s
    13. staticPodURL: ""
    14. port: 10250
    15. readOnlyPort: 0
    16. rotateCertificates: true
    17. serverTLSBootstrap: true
    18. authentication:
    19. anonymous:
    20. enabled: false
    21. webhook:
    22. enabled: true
    23. x509:
    24. clientCAFile: "/etc/kubernetes/cert/ca.pem"
    25. authorization:
    26. mode: Webhook
    27. registryPullQPS: 0
    28. registryBurst: 20
    29. eventRecordQPS: 0
    30. eventBurst: 20
    31. enableDebuggingHandlers: true
    32. enableContentionProfiling: true
    33. healthzPort: 10248
    34. healthzBindAddress: ${NODE_IP}
    35. clusterDomain: "cluster.local"
    36. clusterDNS:
    37. - "${CLUSTER_DNS_SVC_IP}"
    38. nodeStatusUpdateFrequency: 10s
    39. nodeStatusReportFrequency: 1m
    40. imageMinimumGCAge: 2m
    41. imageGCHighThresholdPercent: 85
    42. imageGCLowThresholdPercent: 80
    43. volumeStatsAggPeriod: 1m
    44. kubeletCgroups: ""
    45. systemCgroups: ""
    46. cgroupRoot: ""
    47. cgroupsPerQOS: true
    48. cgroupDriver: cgroupfs
    49. runtimeRequestTimeout: 10m
    50. hairpinMode: promiscuous-bridge
    51. maxPods: 220
    52. podCIDR: "${CLUSTER_CIDR}"
    53. podPidsLimit: -1
    54. resolvConf: /run/systemd/resolve/resolv.conf
    55. maxOpenFiles: 1000000
    56. kubeAPIQPS: 1000
    57. kubeAPIBurst: 2000
    58. serializeImagePulls: false
    59. evictionHard:
    60. memory.available: "100Mi"
    61. nodefs.available: "10%"
    62. nodefs.inodesFree: "5%"
    63. imagefs.available: "15%"
    64. evictionSoft: {}
    65. enableControllerAttachDetach: true
    66. failSwapOn: true
    67. containerLogMaxSize: 20Mi
    68. containerLogMaxFiles: 10
    69. systemReserved: {}
    70. kubeReserved: {}
    71. systemReservedCgroup: ""
    72. kubeReservedCgroup: ""
    73. enforceNodeAllocatable: ["pods"]
    74. EOF
    • address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
    • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
    • authentication.anonymous.enabled: set to false so anonymous access to port 10250 is not allowed;
    • authentication.x509.clientCAFile: the CA certificate that signed the client certificates, enabling x509 client-certificate authentication;
    • authentication.webhook.enabled=true: enables https bearer-token authentication;

      requests that pass neither x509 nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized;
    • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is allowed to operate on a resource (RBAC);
    • featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is governed by kube-controller-manager's --experimental-cluster-signing-duration flag
  4. Distribute the kubelet configuration file to each node

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp kubelet-config.yaml root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  5. Create the kubelet systemd unit file

    1. cd /opt/k8s/work
    2. export K8S_DIR=/data/k8s/k8s
    3. export NODE_NAME=slave
    4. cat > kubelet.service <<EOF
    5. [Unit]
    6. Description=Kubernetes Kubelet
    7. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    8. After=docker.service
    9. Requires=docker.service
    10. [Service]
    11. WorkingDirectory=${K8S_DIR}/kubelet
    12. ExecStart=/opt/k8s/bin/kubelet \\
    13. --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
    14. --cert-dir=/etc/kubernetes/cert \\
    15. --root-dir=${K8S_DIR}/kubelet \\
    16. --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    17. --config=/etc/kubernetes/kubelet-config.yaml \\
    18. --hostname-override=${NODE_NAME} \\
    19. --image-pull-progress-deadline=15m \\
    20. --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
    21. --logtostderr=true \\
    22. --v=2
    23. Restart=always
    24. RestartSec=5
    25. StartLimitInterval=0
    26. [Install]
    27. WantedBy=multi-user.target
    28. EOF
    • If --hostname-override is set, kube-proxy must set it too, otherwise the Node will not be found;
    • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in this file to send a TLS bootstrapping request to kube-apiserver;
    • After K8S approves kubelet's CSR, it writes the certificate and private key into the --cert-dir directory and then writes the --kubeconfig file
  6. Distribute the kubelet unit file

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp kubelet.service root@${node_ip}:/etc/systemd/system/kubelet.service
  7. Grant kube-apiserver access to the kubelet API

    When commands such as kubectl exec, run and logs are executed, the apiserver forwards the request to kubelet's https port. The RBAC rule below authorizes the user name of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes-api) to access the kubelet API:

    1. kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-api
  8. Bootstrap Token Auth and granting permissions

    On startup kubelet checks whether the file referenced by --kubeconfig exists; if it does not, kubelet uses the kubeconfig given by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

    When kube-apiserver receives the CSR it validates the token; on success it sets the request's user to system:bootstrap:<token-id> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.

    By default this user and group have no permission to create CSRs, so a clusterrolebinding is needed to bind the group system:bootstrappers to the clusterrole system:node-bootstrapper:

    1. kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
  9. Start the kubelet service

    1. export K8S_DIR=/data/k8s/k8s
    2. export node_ip=192.168.0.114
    3. ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    4. ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    • After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet and writes the --kubeconfig file.

    • Note: kube-controller-manager only creates certificates and keys for TLS bootstrapping if --cluster-signing-cert-file and --cluster-signing-key-file are configured.

  10. Problems encountered

    1. After kubelet started, kubectl get csr returned nothing, and the kubelet log showed this error

      1. journalctl -u kubelet -a |grep -A 2 'certificate_manager.go'
      2. Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized

      The kube-apiserver log showed

      1. root@master:/opt/k8s/work# journalctl -eu kube-apiserver
      2. Unable to authenticate the request due to an error: invalid bearer token

      Cause: the following flag had been dropped from the kube-apiserver unit file

      1. --enable-bootstrap-token-auth \\

      Adding it back and restarting kube-apiserver resolved the issue


    2. After starting, kubelet kept generating CSRs, even after they were approved manually

      The cause was that the kube-controller-manager service had stopped; restarting it fixed the problem

      • When the kubelet service gets into a bad state, delete /etc/kubernetes/kubelet.kubeconfig and /etc/kubernetes/cert/kubelet-client-current*.pem on the affected node, then restart kubelet
  11. Check the kubelet CSRs

    1. root@master:/opt/k8s/work# kubectl get csr
    2. NAME AGE REQUESTOR CONDITION
    3. csr-kl5mg 49s system:bootstrap:5t989l Pending
    4. csr-mrmkf 2m1s system:bootstrap:5t989l Pending
    5. csr-ql68g 13s system:bootstrap:5t989l Pending
    6. csr-rvl2v 84s system:bootstrap:5t989l Pending
    • While kubelet is running, new CSRs keep being appended until they are approved manually
  12. Manually approve the CSRs

    1. root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    2. certificatesigningrequest.certificates.k8s.io/csr-kl5mg approved
    3. certificatesigningrequest.certificates.k8s.io/csr-mrmkf approved
    4. certificatesigningrequest.certificates.k8s.io/csr-ql68g approved
    5. certificatesigningrequest.certificates.k8s.io/csr-rvl2v approved
    6. root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    7. certificatesigningrequest.certificates.k8s.io/csr-f4smx approved
  13. Check the node information

    1. root@master:/opt/k8s/work# kubectl get nodes
    2. NAME STATUS ROLES AGE VERSION
    3. slave Ready <none> 10m v1.17.2
  14. Check the kubelet service status

    1. export node_ip=192.168.0.114
    2. root@master:/opt/k8s/work# ssh root@${node_ip} "systemctl status kubelet.service"
    3. kubelet.service - Kubernetes Kubelet
    4. Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    5. Active: active (running) since Mon 2020-02-10 22:48:41 CST; 12min ago
    6. Docs: https://github.com/GoogleCloudPlatform/kubernetes
    7. Main PID: 15529 (kubelet)
    8. Tasks: 19 (limit: 4541)
    9. CGroup: /system.slice/kubelet.service
    10. └─15529 /opt/k8s/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/cert --root-dir=/data/k8s/k8s/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-config.yaml --hostname-override=slave --image-pull-progress-deadline=15m --volume-plugin-dir=/data/k8s/k8s/kubelet/kubelet-plugins/volume/exec/ --logtostderr=true --v=2
    11. 2 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.846285 15529 kubelet_node_status.go:73] Successfully registered node slave
    12. 2 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.930745 15529 certificate_manager.go:402] Rotating certificates
    13. 2 10 22:49:14 slave kubelet[15529]: I0210 22:49:14.966351 15529 kubelet_node_status.go:486] Recording NodeReady event message for node slave
    14. 2 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580410 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2029-01-21 13:08:18.850930128 +0000 UTC
    15. 2 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580484 15529 certificate_manager.go:281] Waiting 78430h18m49.270459727s for next certificate rotation
    16. 2 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.580981 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2027-07-14 16:09:26.990162158 +0000 UTC
    17. 2 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.581096 15529 certificate_manager.go:281] Waiting 65065h19m56.409078053s for next certificate rotation
    18. 2 10 22:53:44 slave kubelet[15529]: I0210 22:53:44.911705 15529 kubelet.go:1312] Image garbage collection succeeded
    19. 2 10 22:53:45 slave kubelet[15529]: I0210 22:53:45.053792 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
    20. 2 10 22:58:45 slave kubelet[15529]: I0210 22:58:45.054225 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.servic
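Once the node reports Ready, a couple of extra checks from the master confirm the registration details and that no CSR is left pending (my own addition):

    # the node should report InternalIP 192.168.0.114 and kubelet version v1.17.2
    kubectl get nodes -o wide
    # all CSRs should now be Approved,Issued; approve any that are still Pending
    kubectl get csr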

Configure the kube-proxy component

  1. Create the kube-proxy certificate and private key

    1. Create the certificate signing request file

      1. cd /opt/k8s/work
      2. cat > kube-proxy-csr.json <<EOF
      3. {
      4. "CN": "system:kube-proxy",
      5. "key": {
      6. "algo": "rsa",
      7. "size": 2048
      8. },
      9. "names": [
      10. {
      11. "C": "CN",
      12. "ST": "NanJing",
      13. "L": "NanJing",
      14. "O": "system:kube-proxy",
      15. "OU": "system"
      16. }
      17. ]
      18. }
      19. EOF
      • CN: sets the User of this certificate to system:kube-proxy;
      • the predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.
    2. Generate the certificate and private key

      1. cd /opt/k8s/work
      2. cfssl gencert -ca=/opt/k8s/work/ca.pem \
      3. -ca-key=/opt/k8s/work/ca-key.pem \
      4. -config=/opt/k8s/work/ca-config.json \
      5. -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
      6. ls kube-proxy*pem
    3. Install the certificate

      1. cd /opt/k8s/work
      2. export node_ip=192.168.0.114
      3. scp kube-proxy*.pem root@${node_ip}:/etc/kubernetes/cert/
  2. Create the kubeconfig file

    • kube-proxy uses this file to access the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-proxy certificate, and so on
    1. cd /opt/k8s/work
    2. export KUBE_APISERVER=https://192.168.0.107:6443
    3. kubectl config set-cluster kubernetes \
    4. --certificate-authority=/opt/k8s/work/ca.pem \
    5. --embed-certs=true \
    6. --server=${KUBE_APISERVER} \
    7. --kubeconfig=kube-proxy.kubeconfig
    8. kubectl config set-credentials kube-proxy \
    9. --client-certificate=kube-proxy.pem \
    10. --client-key=kube-proxy-key.pem \
    11. --embed-certs=true \
    12. --kubeconfig=kube-proxy.kubeconfig
    13. kubectl config set-context default \
    14. --cluster=kubernetes \
    15. --user=kube-proxy \
    16. --kubeconfig=kube-proxy.kubeconfig
    17. kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  3. Distribute the kubeconfig

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp kube-proxy.kubeconfig root@${node_ip}:/etc/kubernetes/kube-proxy.kubeconfig
  4. Create the kube-proxy configuration file

    1. cd /opt/k8s/work
    2. export CLUSTER_CIDR="172.30.0.0/16"
    3. export NODE_IP=192.168.0.114
    4. export NODE_NAME=slave
    5. cat > kube-proxy-config.yaml <<EOF
    6. kind: KubeProxyConfiguration
    7. apiVersion: kubeproxy.config.k8s.io/v1alpha1
    8. clientConnection:
    9.   burst: 200
    10.   kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
    11.   qps: 100
    12. bindAddress: ${NODE_IP}
    13. healthzBindAddress: ${NODE_IP}:10256
    14. metricsBindAddress: ${NODE_IP}:10249
    15. enableProfiling: true
    16. clusterCIDR: ${CLUSTER_CIDR}
    17. hostnameOverride: ${NODE_NAME}
    18. mode: "ipvs"
    19. portRange: ""
    20. iptables:
    21.   masqueradeAll: false
    22. ipvs:
    23.   scheduler: rr
    24.   excludeCIDRs: []
    25. EOF
    • bindAddress: the address kube-proxy listens on;
    • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
    • clusterCIDR: kube-proxy uses this value to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests that access Service IPs;
    • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy cannot find its Node after starting and will not create any ipvs rules;
    • mode: use ipvs mode;
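    Because mode is set to ipvs, the node kernel needs the ipvs modules. The startup step further below only loads ip_vs_rr; a slightly broader optional check (module names can vary with the kernel version) could look like this:

    ```
    export node_ip=192.168.0.114
    # load the common ipvs modules (ignore ones that do not exist on this kernel) and list what is loaded
    ssh root@${node_ip} "for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do modprobe \$m 2>/dev/null || true; done; lsmod | grep ip_vs"
    ```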
  5. Distribute the kube-proxy configuration file

    1. cd /opt/k8s/work
    2. export node_ip=192.168.0.114
    3. scp kube-proxy-config.yaml root@${node_ip}:/etc/kubernetes/kube-proxy-config.yaml
  6. Create the kube-proxy systemd service file

    1. cd /opt/k8s/work
    2. export K8S_DIR=/data/k8s/k8s
    3. cat > kube-proxy.service <<EOF
    4. [Unit]
    5. Description=Kubernetes Kube-Proxy Server
    6. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    7. After=network.target
    8. [Service]
    9. WorkingDirectory=${K8S_DIR}/kube-proxy
    10. ExecStart=/opt/k8s/bin/kube-proxy \\
    11. --config=/etc/kubernetes/kube-proxy-config.yaml \\
    12. --logtostderr=true \\
    13. --v=2
    14. Restart=on-failure
    15. RestartSec=5
    16. LimitNOFILE=65536
    17. [Install]
    18. WantedBy=multi-user.target
    19. EOF
  7. Distribute the kube-proxy service file:

    1. export node_ip=192.168.0.114
    2. scp kube-proxy.service root@${node_ip}:/etc/systemd/system/
  8. Start the kube-proxy service

    1. export node_ip=192.168.0.114
    2. export K8S_DIR=/data/k8s/k8s
    3. ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    4. ssh root@${node_ip} "modprobe ip_vs_rr"
    5. ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  9. Check the startup result

    1. export node_ip=192.168.0.114
    2. ssh root@${node_ip} "systemctl status kube-proxy |grep Active"
    • Make sure the state is active (running); otherwise check the logs to find the cause

    • If anything is wrong, inspect it with the following command

      1. journalctl -u kube-proxy
  10. Check the status


    1. root@slave:~# netstat -lnpt|grep kube-prox
    2. tcp 0 0 192.168.0.114:10256 0.0.0.0:* LISTEN 23078/kube-proxy
    3. tcp 0 0 192.168.0.114:10249 0.0.0.0:* LISTEN 23078/kube-proxy
    4. root@slave:~# ipvsadm -ln
    5. IP Virtual Server version 1.2.1 (size=4096)
    6. Prot LocalAddress:Port Scheduler Flags
    7. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
    8. TCP 10.254.0.1:443 rr
    9. -> 192.168.0.107:6443 Masq 1 0 0
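    As an extra check, the healthzBindAddress and metricsBindAddress configured above can be probed directly from the master; both should answer on the node IP:

    ```
    curl http://192.168.0.114:10256/healthz
    curl -s http://192.168.0.114:10249/metrics | head
    ```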

Verify cluster functionality (run on the master node)

Use an nginx Service and Deployment to verify that the cluster works.

  1. Create the manifest file

    1. mkdir /opt/k8s/yml
    2. cd /opt/k8s/yml
    3. cat > nginx.yml << EOF
    4. apiVersion: v1
    5. kind: Service
    6. metadata:
    7.   name: nginx
    8.   labels:
    9.     app: nginx
    10. spec:
    11.   type: NodePort
    12.   selector:
    13.     app: nginx
    14.   ports:
    15.   - name: http
    16.     port: 80
    17.     targetPort: 80
    18.     nodePort: 8080
    19. ---
    20. apiVersion: apps/v1
    21. kind: Deployment
    22. metadata:
    23.   name: nginx-deployment
    24. spec:
    25.   selector:
    26.     matchLabels:
    27.       app: nginx
    28.   replicas: 1
    29.   template:
    30.     metadata:
    31.       labels:
    32.         app: nginx
    33.     spec:
    34.       containers:
    35.       - name: nginx
    36.         image: nginx:1.9.1
    37.         ports:
    38.         - containerPort: 80
    39. EOF
  2. Start the service

    1. kubectl create -f nginx.yml
    • The first start needs to pull the k8s.gcr.io/pause:3.1 image, which cannot be downloaded directly from inside China, so the Pod cannot start; work around it on the node with the following commands

      1. docker pull kubeimage/pause:3.1
      2. docker tag kubeimage/pause:3.1 k8s.gcr.io/pause:3.1
  3. Watch the service come up


    1. root@master:/opt/k8s/yml# kubectl get service -o wide
    2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    3. kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 41h <none>
    4. nginx NodePort 10.254.8.25 <none> 80:8080/TCP 30m app=nginx
    5. root@master:/opt/k8s/yml# kubectl get pod -o wide
    6. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    7. nginx-deployment-56f8998dbc-955gf 1/1 Running 0 30m 172.30.78.2 slave <none> <none>
    8. root@master:/opt/k8s/yml# curl http://192.168.0.114:8080
    9. <!DOCTYPE html>
    10. <html>
    11. <head>
    12. <title>Welcome to nginx!</title>
    13. <style>
    14. body {
    15. width: 35em;
    16. margin: 0 auto;
    17. font-family: Tahoma, Verdana, Arial, sans-serif;
    18. }
    19. </style>
    20. </head>
    21. <body>
    22. <h1>Welcome to nginx!</h1>
    23. <p>If you see this page, the nginx web server is successfully installed and
    24. working. Further configuration is required.</p>
    25. <p>For online documentation and support please refer to
    26. <a href="http://nginx.org/">nginx.org</a>.<br/>
    27. Commercial support is available at
    28. <a href="http://nginx.com/">nginx.com</a>.</p>
    29. <p><em>Thank you for using nginx.</em></p>
    30. </body>
    31. </html>
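    Besides the NodePort test above, it is worth confirming that the Service actually selected the Pod and that the ClusterIP answers from a node running kube-proxy (10.254.8.25 is the ClusterIP shown in the output above; it changes whenever the Service is recreated):

    ```
    kubectl get endpoints nginx
    ssh root@192.168.0.114 "curl -s http://10.254.8.25 | head -n 4"
    ```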

Deploy the coredns add-on (run on the master node)

  1. Download and configure coredns

    1. cd /opt/k8s/work
    2. git clone https://github.com/coredns/deployment.git
    3. mv deployment coredns
  2. Start coredns

    1. cd /opt/k8s/work/coredns/kubernetes
    2. export CLUSTER_DNS_SVC_IP="10.254.0.2"
    3. export CLUSTER_DNS_DOMAIN="cluster.local"
    4. ./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
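    After applying, the kube-dns Service and the coredns Pod can be checked directly (the expected ClusterIP is 10.254.0.2):

    ```
    kubectl -n kube-system get svc kube-dns
    kubectl -n kube-system get pod -l k8s-app=kube-dns -o wide
    ```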
  3. Problems encountered

    After starting coredns, the Pod status is CrashLoopBackOff:

    1. root@master:/opt/k8s/work/coredns/kubernetes# kubectl get pod -n kube-system -l k8s-app=kube-dns
    2. NAME READY STATUS RESTARTS AGE
    3. coredns-76b74f549-99bxd 0/1 CrashLoopBackOff 5 4m45s

    The logs of the coredns Pod show the following error:

    ```
    root@master:/opt/k8s/work/coredns/kubernetes# kubectl -n kube-system logs coredns-76b74f549-99bxd
    .:53
    [INFO] plugin/reload: Running configuration MD5 = 8b19e11d5b2a72fb8e63383b064116a1
    CoreDNS-1.6.6
    linux/amd64, go1.13.5, 6a7a75e
    [FATAL] plugin/loop: Loop (127.0.0.1:60429 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 6292641803451309721.7599235642583168995."
    ```

    The troubleshooting page https://coredns.io/plugins/loop#troubleshooting referenced in the message explains:

    > When a CoreDNS Pod deployed in Kubernetes detects a loop, the CoreDNS Pod will start to "CrashLoopBackOff". This is because Kubernetes will try to restart the Pod every time CoreDNS detects the loop and exits.
    > A common cause of forwarding loops in Kubernetes clusters is an interaction with a local DNS cache on the host node (e.g. systemd-resolved). For example, in certain configurations systemd-resolved will put the loopback address 127.0.0.53 as a nameserver into /etc/resolv.conf. Kubernetes (via kubelet) by default will pass this /etc/resolv.conf file to all Pods using the default dnsPolicy rendering them unable to make DNS lookups (this includes CoreDNS Pods). CoreDNS uses this /etc/resolv.conf as a list of upstreams to forward requests to. Since it contains a loopback address, CoreDNS ends up forwarding requests to itself.
    > There are many ways to work around this issue, some are listed here:
    > * Add the following to your kubelet config yaml: resolvConf: <path-to-your-real-resolv-conf-file> (or via command line flag --resolv-conf deprecated in 1.10). Your real resolv.conf is the one that contains the actual IPs of your upstream servers, and no local/loopback address. This flag tells kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the real resolv.conf, although this can be different depending on your distribution.
    > * Disable the local DNS cache on host nodes, and restore /etc/resolv.conf to the original.
    > * A quick and dirty fix is to edit your Corefile, replacing forward . /etc/resolv.conf with the IP address of your upstream DNS, for example forward . 8.8.8.8. But this only fixes the issue for CoreDNS, kubelet will continue to forward the invalid resolv.conf to all default dnsPolicy Pods, leaving them unable to resolve DNS.

    Following the first workaround, set resolvConf in the kubelet configuration file kubelet-config.yaml to /run/systemd/resolve/resolv.conf. The relevant fragment of the config:

    ```
    ...
    podPidsLimit: -1
    resolvConf: /run/systemd/resolve/resolv.conf
    maxOpenFiles: 1000000
    ...
    ```

    Restart the kubelet service:

    ```
    systemctl daemon-reload
    systemctl restart kubelet
    ```

    Then redeploy coredns:

    ```
    root@master:/opt/k8s/work/coredns/kubernetes# ./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.apps/coredns created
    service/kube-dns created
    root@master:/opt/k8s/work/coredns/kubernetes# kubectl get pod -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-76b74f549-j5t9c 1/1 Running 0 12s
    root@master:/opt/k8s/work/coredns/kubernetes# kubectl get all -n kube-system -l k8s-app=kube-dns
    NAME READY STATUS RESTARTS AGE
    pod/coredns-76b74f549-j5t9c 1/1 Running 0 2m8s
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP,9153/TCP 2m8s
    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/coredns 1/1 1 1 2m8s
    NAME DESIRED CURRENT READY AGE
    replicaset.apps/coredns-76b74f549 1 1 1 2m8s
    ```
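    To see on a node why the loop occurs, and to confirm that /run/systemd/resolve/resolv.conf is the right path on this Ubuntu 18.04 setup, compare the two files: with systemd-resolved active, /etc/resolv.conf only lists the 127.0.0.53 stub resolver, while the real upstream servers live in /run/systemd/resolve/resolv.conf:

    ```
    readlink -f /etc/resolv.conf            # usually points at a systemd-resolved stub file
    cat /etc/resolv.conf                    # contains nameserver 127.0.0.53 -> causes the loop
    cat /run/systemd/resolve/resolv.conf    # the real upstream nameservers kubelet should hand to Pods
    ```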
  4. Start a busybox Pod together with the nginx service from the previous verification section, and access the nginx service from busybox by its service name

    1. cd /opt/k8s/yml
    2. cat > busybox.yml << EOF
    3. apiVersion: v1
    4. kind: Pod
    5. metadata:
    6.   name: busybox
    7. spec:
    8.   containers:
    9.   - name: busybox
    10.     image: busybox
    11.     command:
    12.     - sleep
    13.     - "3600"
    14. EOF
    15. kubectl create -f busybox.yml
    16. kubectl create -f nginx.yml
  5. Enter the busybox Pod and access nginx

    1. root@master:/opt/k8s/yml# kubectl exec -it busybox sh
    2. / # cat /etc/resolv.conf
    3. nameserver 10.254.0.2
    4. search default.svc.cluster.local svc.cluster.local cluster.local
    5. options ndots:5
    6. / # nslookup www.baidu.com
    7. Server: 10.254.0.2
    8. Address: 10.254.0.2:53
    9. Non-authoritative answer:
    10. www.baidu.com canonical name = www.a.shifen.com
    11. Name: www.a.shifen.com
    12. Address: 183.232.231.174
    13. Name: www.a.shifen.com
    14. Address: 183.232.231.172
    15. / # nslookup kubernetes
    16. Server: 10.254.0.2
    17. Address: 10.254.0.2:53
    18. Name: kubernetes.default.svc.cluster.local
    19. Address: 10.254.0.1
    20. / # nslookup nginx
    21. Server: 10.254.0.2
    22. Address: 10.254.0.2:53
    23. Name: nginx.default.svc.cluster.local
    24. Address: 10.254.19.32
    25. / # ping -c 1 nginx
    26. PING nginx (10.254.19.32): 56 data bytes
    27. 64 bytes from 10.254.19.32: seq=0 ttl=64 time=0.155 ms
    28. --- nginx ping statistics ---
    29. 1 packets transmitted, 1 packets received, 0% packet loss
    30. round-trip min/avg/max = 0.155/0.155/0.155 ms
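    The same checks can also be run non-interactively from the master, which is handy for scripting:

    ```
    kubectl exec busybox -- nslookup kubernetes
    kubectl exec busybox -- nslookup nginx
    kubectl exec busybox -- wget -qO- http://nginx | head -n 4
    ```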

Add a node (run on the master)

Add the node

Resources are limited, so here we add the master node itself to the cluster as a worker. For a brand-new machine you would first need to go through this document's preparation-before-installation steps, distribute the CA certificates to that machine, and deploy the flannel network.

  1. Preparation before installation (already done on the master)

  2. Distribute the CA certificates to this machine (already done on the master)

  3. Deploy the flannel network (already done on the master)

  4. Install the docker service

  5. Install the kubelet service

    Follow the same steps used when adding the slave node. If you reuse the previous kubelet-bootstrap.yml directly, the node cannot join, because the token in kubelet-bootstrap.yml is only valid for one day; once the token has expired, kube-apiserver logs the following error

    1. 2 12 11:01:01 master kube-apiserver[5018]: E0212 11:01:01.640497 5018 authentication.go:104] Unable to authenticate the request due to an error: invalid bearer token

    Check the token

    1. root@master:/opt/k8s/work# kubeadm token list --kubeconfig ~/.kube/config
    2. TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
    3. 5t989l.rweut7kedj7ifl1a <invalid> 2020-02-11T18:19:41+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:slave

    In that case, regenerate kubelet-bootstrap.yml by repeating the kubelet installation steps used for the slave node; a rough sketch follows.
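    A minimal sketch, assuming the same workflow as the earlier slave steps (the bootstrap group system:bootstrappers:master is an assumption here; use whatever group and file name the earlier kubelet section established):

    ```
    cd /opt/k8s/work
    # create a fresh bootstrap token (24h TTL by default)
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:master \
      --kubeconfig ~/.kube/config)
    # rebuild the bootstrap kubeconfig around the new token
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/work/ca.pem \
      --embed-certs=true \
      --server=https://192.168.0.107:6443 \
      --kubeconfig=kubelet-bootstrap.yml
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap.yml
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap.yml
    kubectl config use-context default --kubeconfig=kubelet-bootstrap.yml
    ```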

    After approving the CSR, check the nodes

    1. root@master:/opt/k8s/work# kubectl get nodes
    2. NAME STATUS ROLES AGE VERSION
    3. master Ready <none> 21s v1.17.2
    4. slave Ready <none> 36h v1.17.2
  6. Install the kube-proxy service (a condensed sketch for the master follows)
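    This repeats the kube-proxy steps above with the master's own IP and hostname. A condensed sketch, assuming the kube-proxy binary, certificates, kubeconfig, config and service unit generated earlier are still in /opt/k8s/work on the master (the kube-proxy-config-master.yaml name is just a scratch name used here; plain cp replaces the scp used for the slave):

    ```
    cd /opt/k8s/work
    export K8S_DIR=/data/k8s/k8s
    # derive the master's config from the slave one generated above: swap the IP and hostname
    sed -e 's/192.168.0.114/192.168.0.107/g' -e 's/hostnameOverride: slave/hostnameOverride: master/' \
        kube-proxy-config.yaml > kube-proxy-config-master.yaml
    cp kube-proxy*.pem /etc/kubernetes/cert/
    cp kube-proxy.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig
    cp kube-proxy-config-master.yaml /etc/kubernetes/kube-proxy-config.yaml
    cp kube-proxy.service /etc/systemd/system/
    mkdir -p ${K8S_DIR}/kube-proxy
    modprobe ip_vs_rr
    systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
    ```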

Re-verify the cluster

  1. root@master:/opt/k8s/yml# kubectl create -f nginx.yml
  2. service/nginx created
  3. deployment.apps/nginx-deployment created
  4. root@master:/opt/k8s/yml# kubectl get pod -o wide
  5. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  6. nginx-deployment-56f8998dbc-6b6qm 1/1 Running 0 87s 172.30.22.2 master <none> <none>
  7. root@master:/opt/k8s/yml# kubectl create -f busybox.yml
  8. pod/busybox created
  9. root@master:/opt/k8s/yml# kubectl get pod -o wide
  10. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  11. busybox 1/1 Running 0 102s 172.30.22.3 master <none> <none>
  12. nginx-deployment-56f8998dbc-6b6qm 1/1 Running 0 3m20s 172.30.22.2 master <none> <none>
  13. root@master:/opt/k8s/yml# curl http://192.168.0.107:8080
  14. <!DOCTYPE html>
  15. <html>
  16. <head>
  17. <title>Welcome to nginx!</title>
  18. <style>
  19. body {
  20. width: 35em;
  21. margin: 0 auto;
  22. font-family: Tahoma, Verdana, Arial, sans-serif;
  23. }
  24. </style>
  25. </head>
  26. <body>
  27. <h1>Welcome to nginx!</h1>
  28. <p>If you see this page, the nginx web server is successfully installed and
  29. working. Further configuration is required.</p>
  30. <p>For online documentation and support please refer to
  31. <a href="http://nginx.org/">nginx.org</a>.<br/>
  32. Commercial support is available at
  33. <a href="http://nginx.com/">nginx.com</a>.</p>
  34. <p><em>Thank you for using nginx.</em></p>
  35. </body>
  36. </html>
  37. root@master:/opt/k8s/yml# curl http://192.168.0.114:8080
  38. <!DOCTYPE html>
  39. <html>
  40. <head>
  41. <title>Welcome to nginx!</title>
  42. <style>
  43. body {
  44. width: 35em;
  45. margin: 0 auto;
  46. font-family: Tahoma, Verdana, Arial, sans-serif;
  47. }
  48. </style>
  49. </head>
  50. <body>
  51. <h1>Welcome to nginx!</h1>
  52. <p>If you see this page, the nginx web server is successfully installed and
  53. working. Further configuration is required.</p>
  54. <p>For online documentation and support please refer to
  55. <a href="http://nginx.org/">nginx.org</a>.<br/>
  56. Commercial support is available at
  57. <a href="http://nginx.com/">nginx.com</a>.</p>
  58. <p><em>Thank you for using nginx.</em></p>
  59. </body>
  60. </html>

As you can see, accessing port 8080 on any node in the cluster correctly reaches the backing nginx service.
