1. Introduction

This document describes how to build a highly available Kubernetes v1.14.2 cluster in a test environment using kubeadm.

2. Server versions and architecture

  1. OS: CentOS Linux release 7.6.1810 (Core)
  2. Kernel: 4.4.184-1.el7.elrepo.x86_64 (note: the kernel installed later may end up newer than this version)
  3. Kubernetes: v1.14.2
  4. Docker-ce: 18.06
  5. Network plugin: calico
  6. Hardware: 16 CPU cores / 64 GB RAM
  7. Keepalived provides a highly available VIP for the apiserver
  8. Haproxy load-balances traffic to the apiservers

3. Server role planning

Be sure to substitute your own server IPs and hostnames.

The master01/02 nodes run the kubelet, keepalived, haproxy, controller-manager, apiserver, scheduler, docker, kube-proxy and calico components.

The master03 node runs the kubelet, controller-manager, apiserver, scheduler, docker, kube-proxy and calico components.

The node01/node03 nodes run the kubelet, kube-proxy, docker and calico components.

Apart from kubelet and docker, all of these components run as static pods.
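
Once the cluster is up, this is easy to verify (a small check added here, not part of the original steps): the static pod manifests live in /etc/kubernetes/manifests on every control-plane node, and the resulting pods are listed in kube-system with the node name as a suffix.

    # On any master node: each manifest file below runs as a static pod managed directly by kubelet
    ls /etc/kubernetes/manifests/
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    kubectl -n kube-system get pods -o wide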

Node name     Role     IP               Installed software
VIP (LB)      VIP      192.168.4.110
master-01     master   192.168.4.129    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-02     master   192.168.4.130    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-03     master   192.168.4.133    kubeadm, kubelet, kubectl, docker
node-01       node     192.168.4.128    kubeadm, kubelet, kubectl, docker
node-03       node     192.168.4.132    kubeadm, kubelet, kubectl, docker
Service CIDR           10.209.0.0/16

4. Server initialization

4.1 Disable SELinux, firewalld and iptables (run on all machines)

  1. setenforce 0 \
  2. && sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config \
  3. && getenforce
  4. systemctl stop firewalld \
  5. && systemctl daemon-reload \
  6. && systemctl disable firewalld \
  7. && systemctl daemon-reload \
  8. && systemctl status firewalld
  9. yum install -y iptables-services \
  10. && systemctl stop iptables \
  11. && systemctl disable iptables \
  12. && systemctl status iptables

4.2 Add host entries for every server (run on all machines)

  1. cat >>/etc/hosts<<EOF
  2. 192.168.4.129 master01
  3. 192.168.4.130 master02
  4. 192.168.4.133 master03
  5. 192.168.4.128 node01
  6. 192.168.4.132 node03
  7. EOF

4.3 Switch to the Aliyun yum mirrors (run on all machines)

  1. yum install wget -y
  2. cp -r /etc/yum.repos.d /etc/yum.repos.d.bak
  3. rm -f /etc/yum.repos.d/*.repo
  4. wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
  5. && wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
  6. yum clean all && yum makecache

4.4 Configure limits.conf (run on all machines)

  1. cat >> /etc/security/limits.conf <<EOF
  2. # End of file
  3. * soft nproc 10240000
  4. * hard nproc 10240000
  5. * soft nofile 10240000
  6. * hard nofile 10240000
  7. EOF

4.5 Configure sysctl.conf (run on all machines)

  1. [ ! -e "/etc/sysctl.conf_bk" ] && /bin/mv /etc/sysctl.conf{,_bk} \
  2. && cat > /etc/sysctl.conf << EOF
  3. fs.file-max=20480000
  4. fs.nr_open=20480000
  5. net.ipv4.tcp_max_tw_buckets = 180000
  6. net.ipv4.tcp_sack = 1
  7. net.ipv4.tcp_window_scaling = 1
  8. net.ipv4.tcp_rmem = 4096 87380 4194304
  9. net.ipv4.tcp_wmem = 4096 16384 4194304
  10. net.ipv4.tcp_max_syn_backlog = 16384
  11. net.core.netdev_max_backlog = 32768
  12. net.core.somaxconn = 32768
  13. net.core.wmem_default = 8388608
  14. net.core.rmem_default = 8388608
  15. net.core.rmem_max = 16777216
  16. net.core.wmem_max = 16777216
  17. net.ipv4.tcp_timestamps = 0
  18. net.ipv4.tcp_fin_timeout = 20
  19. net.ipv4.tcp_synack_retries = 2
  20. net.ipv4.tcp_syn_retries = 2
  21. net.ipv4.tcp_syncookies = 1
  22. #net.ipv4.tcp_tw_len = 1
  23. net.ipv4.tcp_tw_reuse = 1
  24. net.ipv4.tcp_mem = 94500000 915000000 927000000
  25. net.ipv4.tcp_max_orphans = 3276800
  26. net.ipv4.ip_local_port_range = 1024 65000
  27. #net.nf_conntrack_max = 6553500
  28. #net.netfilter.nf_conntrack_max = 6553500
  29. #net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
  30. #net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
  31. #net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
  32. #net.netfilter.nf_conntrack_tcp_timeout_established = 3600
  33. EOF
  1. sysctl -p

4.6 Configure time synchronization (run on all machines)

  1. ntpdate -u pool.ntp.org
  2. crontab -e # add the following cron entry
  3. */15 * * * * /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1

4.7 Configure k8s.conf (run on all machines)

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.bridge.bridge-nf-call-ip6tables = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. net.ipv4.ip_nonlocal_bind = 1
  5. net.ipv4.ip_forward = 1
  6. vm.swappiness=0
  7. EOF
  8. # run the following to apply the changes
  9. modprobe br_netfilter \
  10. && sysctl -p /etc/sysctl.d/k8s.conf

4.8 Disable swap (run on all machines)

  1. swapoff -a
  2. yes | cp /etc/fstab /etc/fstab_bak
  3. cat /etc/fstab_bak |grep -v swap > /etc/fstab
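
To confirm swap is really off (an added sanity check):

    swapon --show            # should print nothing
    free -m | grep -i swap   # the Swap line should show 0 total and 0 used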

4.9 Upgrade the kernel (run on all machines)

  1. yum update -y
  2. rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm ;yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y
  3. # check the result of the kernel change
  4. grub2-editenv list
  5. # note: the command below may list more than one kernel version
  6. [root@master01 ~]# cat /boot/grub2/grub.cfg |grep "menuentry "
  7. menuentry 'CentOS Linux (4.4.184-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.el7.x86_64-advanced-021a955b-781d-425a-8250-f39857437658'
  8. # set the default kernel; that version must already exist, so use the exact menuentry title printed by the command above and do not copy this example blindly
  9. grub2-set-default 'CentOS Linux (4.4.184-1.el7.elrepo.x86_64) 7 (Core)'
  10. # check the result of the kernel change
  11. grub2-editenv list
  12. # make sure the default kernel version is above 4.1, otherwise adjust the default boot entry
  13. # check the result again
  14. grub2-editenv list
  15. # reboot to switch to the new kernel
  16. reboot

4.10 Load the ipvs kernel modules (run on all machines)

  1. cat > /etc/sysconfig/modules/ipvs.modules <<EOF
  2. #!/bin/bash
  3. modprobe -- ip_vs
  4. modprobe -- ip_vs_rr
  5. modprobe -- ip_vs_wrr
  6. modprobe -- ip_vs_sh
  7. modprobe -- nf_conntrack_ipv4
  8. EOF
  9. chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

4.11 Add the Kubernetes yum repository (run on all machines)

  1. cat << EOF > /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  5. enabled=1
  6. gpgcheck=1
  7. repo_gpgcheck=1
  8. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  9. EOF

4.12 Install required server utilities

  1. yum -y install wget vim iftop iotop net-tools nmon telnet lsof iptraf nmap httpd-tools lrzsz mlocate ntp ntpdate strace libpcap nethogs iptraf iftop nmon bridge-utils bind-utils telnet nc nfs-utils rpcbind nfs-utils dnsmasq python python-devel tcpdump mlocate tree

5. Install keepalived and haproxy

5.1 Install keepalived and haproxy on master01 and master02

master01 uses priority 250 and master02 uses priority 200; the rest of the configuration is identical.

master01(192.168.4.129)

vim /etc/keepalived/keepalived.conf

Pay attention to the interface setting: set it to your server's actual NIC name, do not paste this blindly.
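
If you are unsure of the NIC name, the following prints the interface that carries the default route (an added convenience; it assumes a default route exists):

    ip -o -4 route show to default | awk '{print $5}'
    # or list all IPv4 addresses and pick the interface holding your 192.168.4.x address
    ip -o -4 addr show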

  1. ! Configuration File for keepalived
  2. global_defs {
  3. router_id LVS_DEVEL
  4. }
  5. vrrp_script check_haproxy {
  6. script "killall -0 haproxy"
  7. interval 3
  8. weight -2
  9. fall 10
  10. rise 2
  11. }
  12. vrrp_instance VI_1 {
  13. state MASTER
  14. interface ens160
  15. virtual_router_id 51
  16. priority 250
  17. advert_int 1
  18. authentication {
  19. auth_type PASS
  20. auth_pass 35f18af7190d51c9f7f78f37300a0cbd
  21. }
  22. virtual_ipaddress {
  23. 192.168.4.110
  24. }
  25. track_script {
  26. check_haproxy
  27. }
  28. }

master02(192.168.4.130)

vim /etc/keepalived/keepalived.conf

  1. ! Configuration File for keepalived
  2. global_defs {
  3. router_id LVS_DEVEL
  4. }
  5. vrrp_script check_haproxy {
  6. script "killall -0 haproxy"
  7. interval 3
  8. weight -2
  9. fall 10
  10. rise 2
  11. }
  12. vrrp_instance VI_1 {
  13. state BACKUP
  14. interface ens160
  15. virtual_router_id 51
  16. priority 200
  17. advert_int 1
  18. authentication {
  19. auth_type PASS
  20. auth_pass 35f18af7190d51c9f7f78f37300a0cbd
  21. }
  22. virtual_ipaddress {
  23. 192.168.4.110
  24. }
  25. track_script {
  26. check_haproxy
  27. }
  28. }

5.2 haproxy configuration

The haproxy configuration is identical on master01 and master02. We listen on port 8443 of 192.168.4.110 here because haproxy runs on the same servers as the k8s apiserver, and using 6443 for both would conflict.

192.168.4.129

vim /etc/haproxy/haproxy.cfg

  1. #---------------------------------------------------------------------
  2. # Global settings
  3. #---------------------------------------------------------------------
  4. global
  5. # to have these messages end up in /var/log/haproxy.log you will
  6. # need to:
  7. #
  8. # 1) configure syslog to accept network log events. This is done
  9. # by adding the '-r' option to the SYSLOGD_OPTIONS in
  10. # /etc/sysconfig/syslog
  11. #
  12. # 2) configure local2 events to go to the /var/log/haproxy.log
  13. # file. A line like the following can be added to
  14. # /etc/sysconfig/syslog
  15. #
  16. # local2.* /var/log/haproxy.log
  17. #
  18. #log 127.0.0.1 local2
  19. log 127.0.0.1 local0 info
  20. chroot /var/lib/haproxy
  21. pidfile /var/run/haproxy.pid
  22. maxconn 4000
  23. user haproxy
  24. group haproxy
  25. daemon
  26. # turn on stats unix socket
  27. stats socket /var/lib/haproxy/stats
  28. #---------------------------------------------------------------------
  29. # common defaults that all the 'listen' and 'backend' sections will
  30. # use if not designated in their block
  31. #---------------------------------------------------------------------
  32. defaults
  33. mode http
  34. log global
  35. option httplog
  36. option dontlognull
  37. option http-server-close
  38. option forwardfor except 127.0.0.0/8
  39. option redispatch
  40. retries 3
  41. timeout http-request 10s
  42. timeout queue 1m
  43. timeout connect 10s
  44. timeout client 1m
  45. timeout server 1m
  46. timeout http-keep-alive 10s
  47. timeout check 10s
  48. maxconn 3000
  49. #---------------------------------------------------------------------
  50. # kubernetes apiserver frontend which proxys to the backends
  51. #---------------------------------------------------------------------
  52. frontend kubernetes-apiserver
  53. mode tcp
  54. bind *:8443
  55. option tcplog
  56. default_backend kubernetes-apiserver
  57. #---------------------------------------------------------------------
  58. # round robin balancing between the various backends
  59. #---------------------------------------------------------------------
  60. backend kubernetes-apiserver
  61. mode tcp
  62. balance roundrobin
  63. server master01 192.168.4.129:6443 check
  64. server master02 192.168.4.130:6443 check
  65. server master03 192.168.4.133:6443 check
  66. #---------------------------------------------------------------------
  67. # collection haproxy statistics message
  68. #---------------------------------------------------------------------
  69. listen stats
  70. bind *:1080
  71. stats auth admin:awesomePassword
  72. stats refresh 5s
  73. stats realm HAProxy\ Statistics
  74. stats uri /admin?stats

192.168.4.130

vim /etc/haproxy/haproxy.cfg

  1. #---------------------------------------------------------------------
  2. # Global settings
  3. #---------------------------------------------------------------------
  4. global
  5. # to have these messages end up in /var/log/haproxy.log you will
  6. # need to:
  7. #
  8. # 1) configure syslog to accept network log events. This is done
  9. # by adding the '-r' option to the SYSLOGD_OPTIONS in
  10. # /etc/sysconfig/syslog
  11. #
  12. # 2) configure local2 events to go to the /var/log/haproxy.log
  13. # file. A line like the following can be added to
  14. # /etc/sysconfig/syslog
  15. #
  16. # local2.* /var/log/haproxy.log
  17. #
  18. #log 127.0.0.1 local2
  19. log 127.0.0.1 local0 info
  21. chroot /var/lib/haproxy
  22. pidfile /var/run/haproxy.pid
  23. maxconn 4000
  24. user haproxy
  25. group haproxy
  26. daemon
  27. # turn on stats unix socket
  28. stats socket /var/lib/haproxy/stats
  30. #---------------------------------------------------------------------
  31. # common defaults that all the 'listen' and 'backend' sections will
  32. # use if not designated in their block
  33. #---------------------------------------------------------------------
  34. defaults
  35. mode http
  36. log global
  37. option httplog
  38. option dontlognull
  39. option http-server-close
  40. option forwardfor except 127.0.0.0/8
  41. option redispatch
  42. retries 3
  43. timeout http-request 10s
  44. timeout queue 1m
  45. timeout connect 10s
  46. timeout client 1m
  47. timeout server 1m
  48. timeout http-keep-alive 10s
  49. timeout check 10s
  50. maxconn 3000
  51. #---------------------------------------------------------------------
  52. # kubernetes apiserver frontend which proxys to the backends
  53. #---------------------------------------------------------------------
  54. frontend kubernetes-apiserver
  55. mode tcp
  56. bind *:8443
  57. option tcplog
  58. default_backend kubernetes-apiserver
  59. #---------------------------------------------------------------------
  60. # round robin balancing between the various backends
  61. #---------------------------------------------------------------------
  62. backend kubernetes-apiserver
  63. mode tcp
  64. balance roundrobin
  65. server master01 192.168.4.129:6443 check
  66. server master02 192.168.4.130:6443 check
  67. server master03 192.168.4.133:6443 check
  68. #---------------------------------------------------------------------
  69. # collection haproxy statistics message
  70. #---------------------------------------------------------------------
  71. listen stats
  72. bind *:1080
  73. stats auth admin:awesomePassword
  74. stats refresh 5s
  75. stats realm HAProxy\ Statistics
  76. stats uri /admin?stats

5.3 Set the service start order and dependencies (on master01 and master02)

vim /usr/lib/systemd/system/keepalived.service

  1. [Unit]
  2. Description=LVS and VRRP High Availability Monitor
  3. After=syslog.target network-online.target haproxy.service
  4. Requires=haproxy.service
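
After editing the unit file, reload systemd so the new ordering and dependency take effect (an added step not shown in the original):

    systemctl daemon-reload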

5.4 Start the services

  1. systemctl enable keepalived && systemctl start keepalived \
  2. && systemctl enable haproxy && systemctl start haproxy && systemctl status keepalived && systemctl status haproxy
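
As a quick sanity check (added here as a sketch; replace ens160 with your interface name), the VIP should be bound on the keepalived MASTER node and haproxy should be listening on ports 8443 and 1080:

    ip addr show ens160 | grep 192.168.4.110
    ss -lnt | grep -E ':8443|:1080'
    # once the apiserver is running later on, this should answer instead of refusing the connection:
    # curl -k https://192.168.4.110:8443/healthz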

6. Install docker

6.1 Install required system tools (all servers)

  1. yum install -y yum-utils device-mapper-persistent-data lvm2

6.2 Add the docker-ce repository (all servers)

  1. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  2. yum list docker-ce --showduplicates | sort -r
  3. yum -y install docker-ce-18.06.3.ce-3.el7
  4. usermod -aG docker bumblebee

6.3 Configure daemon.json (all servers)

  1. mkdir -p /etc/docker/ \
  2. && cat > /etc/docker/daemon.json << EOF
  3. {
  4. "registry-mirrors":[
  5. "https://c6ai9izk.mirror.aliyuncs.com"
  6. ],
  7. "max-concurrent-downloads":3,
  8. "data-root":"/data/docker",
  9. "log-driver":"json-file",
  10. "log-opts":{
  11. "max-size":"100m",
  12. "max-file":"1"
  13. },
  14. "max-concurrent-uploads":5,
  15. "storage-driver":"overlay2",
  16. "storage-opts": [
  17. "overlay2.override_kernel_check=true"
  18. ]
  19. }
  20. EOF

6.4 Start and verify the docker service

  1. systemctl enable docker \
  2. && systemctl restart docker \
  3. && systemctl status docker
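
Optionally confirm that docker picked up the daemon.json settings (an added check; make sure the data-root directory /data/docker exists on a disk with enough space):

    docker info 2>/dev/null | grep -E 'Storage Driver|Docker Root Dir'
    docker info 2>/dev/null | grep -A1 'Registry Mirrors'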

7. Deploy kubernetes with kubeadm

7.1 Configure kubernetes.repo (required on every machine)

  1. cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  5. enabled=1
  6. gpgcheck=0
  7. repo_gpgcheck=0
  8. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  9. EOF

7.2 Install required packages (all machines)

  1. yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 ipvsadm ipset
  2. # enable kubelet at boot. Note: do not run systemctl start kubelet at this point, it will fail; kubelet comes up automatically once kubeadm init succeeds
  3. systemctl enable kubelet

7.3 Modify the init configuration

Run kubeadm config print init-defaults > kubeadm-init.yaml to dump the default configuration, then adapt it to your environment.

The fields that need changing are advertiseAddress, controlPlaneEndpoint, imageRepository and serviceSubnet.

advertiseAddress is master01's IP, controlPlaneEndpoint is the VIP plus port 8443, imageRepository is switched to the Aliyun mirror, and serviceSubnet should be an unused IP range obtained from your network team.
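
For reference, generating and editing the file looks like this (kubeadm-init.yaml is simply the file name used throughout this document):

    kubeadm config print init-defaults > kubeadm-init.yaml
    vim kubeadm-init.yaml   # adjust advertiseAddress, controlPlaneEndpoint, imageRepository, serviceSubnet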

  1. [root@master01 ~]# cat kubeadm-init.yaml
  2. apiVersion: kubeadm.k8s.io/v1beta1
  3. bootstrapTokens:
  4. - groups:
  5.   - system:bootstrappers:kubeadm:default-node-token
  6.   token: abcdef.0123456789abcdef
  7.   ttl: 24h0m0s
  8.   usages:
  9.   - signing
  10.   - authentication
  11. kind: InitConfiguration
  12. localAPIEndpoint:
  13.   advertiseAddress: 192.168.4.129
  14.   bindPort: 6443
  15. nodeRegistration:
  16.   criSocket: /var/run/dockershim.sock
  17.   name: master01
  18.   taints:
  19.   - effect: NoSchedule
  20.     key: node-role.kubernetes.io/master
  21. ---
  22. apiServer:
  23.   timeoutForControlPlane: 4m0s
  24. apiVersion: kubeadm.k8s.io/v1beta1
  25. certificatesDir: /etc/kubernetes/pki
  26. clusterName: kubernetes
  27. controlPlaneEndpoint: "192.168.4.110:8443"
  28. controllerManager: {}
  29. dns:
  30.   type: CoreDNS
  31. etcd:
  32.   local:
  33.     dataDir: /var/lib/etcd
  34. imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
  35. kind: ClusterConfiguration
  36. kubernetesVersion: v1.14.2
  37. networking:
  38.   dnsDomain: cluster.local
  39.   podSubnet: ""
  40.   serviceSubnet: "10.209.0.0/16"
  41. scheduler: {}

7.4 Pre-pull the images

  1. [root@master01 ~]# kubeadm config images pull --config kubeadm-init.yaml
  2. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2
  3. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2
  4. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2
  5. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2
  6. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
  7. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
  8. [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

7.5 Initialization

  1. [root@master01 ~]# kubeadm init --config kubeadm-init.yaml
  2. [init] Using Kubernetes version: v1.14.2
  3. [preflight] Running pre-flight checks
  4. [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  5. [preflight] Pulling images required for setting up a Kubernetes cluster
  6. [preflight] This might take a minute or two, depending on the speed of your internet connection
  7. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  8. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  9. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  10. [kubelet-start] Activating the kubelet service
  11. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  12. [certs] Generating "etcd/ca" certificate and key
  13. [certs] Generating "etcd/healthcheck-client" certificate and key
  14. [certs] Generating "apiserver-etcd-client" certificate and key
  15. [certs] Generating "etcd/server" certificate and key
  16. [certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
  17. [certs] Generating "etcd/peer" certificate and key
  18. [certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
  19. [certs] Generating "ca" certificate and key
  20. [certs] Generating "apiserver" certificate and key
  21. [certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.129 192.168.4.110]
  22. [certs] Generating "apiserver-kubelet-client" certificate and key
  23. [certs] Generating "front-proxy-ca" certificate and key
  24. [certs] Generating "front-proxy-client" certificate and key
  25. [certs] Generating "sa" key and public key
  26. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  27. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  28. [kubeconfig] Writing "admin.conf" kubeconfig file
  29. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  30. [kubeconfig] Writing "kubelet.conf" kubeconfig file
  31. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  32. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  33. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  34. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  35. [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  36. [control-plane] Creating static Pod manifest for "kube-apiserver"
  37. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  38. [control-plane] Creating static Pod manifest for "kube-scheduler"
  39. [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  40. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  41. [apiclient] All control plane components are healthy after 17.506253 seconds
  42. [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  43. [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
  44. [upload-certs] Skipping phase. Please see --experimental-upload-certs
  45. [mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  46. [mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  47. [bootstrap-token] Using token: abcdef.0123456789abcdef
  48. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  49. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  50. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  51. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  52. [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  53. [addons] Applied essential addon: CoreDNS
  54. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  55. [addons] Applied essential addon: kube-proxy
  56. Your Kubernetes control-plane has initialized successfully!
  57. To start using your cluster, you need to run the following as a regular user:
  58. mkdir -p $HOME/.kube
  59. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  60. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  61. You should now deploy a pod network to the cluster.
  62. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  63. https://kubernetes.io/docs/concepts/cluster-administration/addons/
  64. You can now join any number of control-plane nodes by copying certificate authorities
  65. and service account keys on each node and then running the following as root:
  66. kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
  67. --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
  68. --experimental-control-plane
  69. Then you can join any number of worker nodes by running the following on each as root:
  70. kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
  71. --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1

kubeadm init performs the following main steps:

  • [init]: start initialization for the specified version
  • [preflight]: run pre-flight checks and pull the required Docker images
  • [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet fails to start before initialization
  • [certs]: generate the certificates Kubernetes uses and store them in /etc/kubernetes/pki
  • [kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components use them to communicate with each other
  • [control-plane]: install the master components from the YAML files in /etc/kubernetes/manifests
  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
  • [wait-control-plane]: wait for the master components deployed as static pods to start
  • [apiclient]: check the health of the master components
  • [upload-config]: store the kubeadm configuration in a ConfigMap
  • [kubelet]: configure the kubelets via a ConfigMap
  • [patchnode]: record CNI information on the Node through annotations
  • [mark-control-plane]: label the current node with the master role and taint it unschedulable, so master nodes do not run ordinary Pods by default
  • [bootstrap-token]: generate the bootstrap token and note it down; it is needed later when adding nodes with kubeadm join
  • [addons]: install the CoreDNS and kube-proxy add-ons

7.6 Prepare the kubeconfig file for kubectl

By default kubectl looks for a file named config under the .kube directory in the current user's home. Here we copy the admin.conf generated in the [kubeconfig] step of the initialization to .kube/config.

  1. [root@master01 ~]# mkdir -p $HOME/.kube
  2. [root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. [root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API Server address, so later kubectl commands can connect to the API Server directly.
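
To confirm kubectl is reaching the API server through the VIP (an added check):

    kubectl cluster-info
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
    # both should point at https://192.168.4.110:8443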

7.7 Check component status

  1. [root@master01 ~]# kubectl get cs
  2. NAME STATUS MESSAGE ERROR
  3. controller-manager Healthy ok
  4. scheduler Healthy ok
  5. etcd-0 Healthy {"health":"true"}
  6. [root@master01 ~]# kubectl get nodes
  7. NAME STATUS ROLES AGE VERSION
  8. master01 NotReady master 4m20s v1.14.2

At this point there is only one node, with the master role and status NotReady; it is NotReady because no network plugin has been installed yet.

7.8 Deploy the other masters (run on master01)

Copy the certificate files from master01 to the master02 and master03 nodes.

  1. # copy the certificates to the master02 node
  2. USER=root
  3. CONTROL_PLANE_IPS="master02"
  4. for host in ${CONTROL_PLANE_IPS}; do
  5. ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
  6. scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
  7. scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
  8. scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
  9. scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
  10. scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
  11. done
  12. # copy the certificates to the master03 node
  13. USER=root
  14. CONTROL_PLANE_IPS="master03"
  15. for host in ${CONTROL_PLANE_IPS}; do
  16. ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
  17. scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
  18. scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
  19. scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
  20. scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
  21. scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
  22. done

Run the following on master02; note the --experimental-control-plane flag.

  1. [root@master02 ~]# kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
  2. > --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
  3. > --experimental-control-plane
  4. [preflight] Running pre-flight checks
  5. [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  6. [preflight] Reading configuration from the cluster...
  7. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  8. [preflight] Running pre-flight checks before initializing the new control plane instance
  9. [preflight] Pulling images required for setting up a Kubernetes cluster
  10. [preflight] This might take a minute or two, depending on the speed of your internet connection
  11. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  12. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  13. [certs] Generating "apiserver-kubelet-client" certificate and key
  14. [certs] Generating "apiserver" certificate and key
  15. [certs] apiserver serving cert is signed for DNS names [master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.130 192.168.4.110]
  16. [certs] Generating "etcd/server" certificate and key
  17. [certs] etcd/server serving cert is signed for DNS names [master02 localhost] and IPs [192.168.4.130 127.0.0.1 ::1]
  18. [certs] Generating "etcd/peer" certificate and key
  19. [certs] etcd/peer serving cert is signed for DNS names [master02 localhost] and IPs [192.168.4.130 127.0.0.1 ::1]
  20. [certs] Generating "apiserver-etcd-client" certificate and key
  21. [certs] Generating "etcd/healthcheck-client" certificate and key
  22. [certs] Generating "front-proxy-client" certificate and key
  23. [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
  24. [certs] Using the existing "sa" key
  25. [kubeconfig] Generating kubeconfig files
  26. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  27. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  28. [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
  29. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  30. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  31. [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  32. [control-plane] Creating static Pod manifest for "kube-apiserver"
  33. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  34. [control-plane] Creating static Pod manifest for "kube-scheduler"
  35. [check-etcd] Checking that the etcd cluster is healthy
  36. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
  37. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  38. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  39. [kubelet-start] Activating the kubelet service
  40. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  41. [etcd] Announced new etcd member joining to the existing etcd cluster
  42. [etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
  43. [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
  44. [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  45. [mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  46. [mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  47. This node has joined the cluster and a new control plane instance was created:
  48. * Certificate signing request was sent to apiserver and approval was received.
  49. * The Kubelet was informed of the new secure connection details.
  50. * Control plane (master) label and taint were applied to the new node.
  51. * The Kubernetes control plane instances scaled up.
  52. * A new etcd member was added to the local/stacked etcd cluster.
  53. To start administering your cluster from this node, you need to run the following as a regular user:
  54. mkdir -p $HOME/.kube
  55. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  56. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  57. Run 'kubectl get nodes' to see this node join the cluster.

Note: the token has a limited lifetime. If the old token has expired, run kubeadm token create --print-join-command to create a new one.
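
If you also need to recompute the discovery hash, the standard commands are (added here for convenience; run them on an existing master):

    # print a fresh worker join command, including a new token
    kubeadm token create --print-join-command
    # recompute the value for --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'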

  1. mkdir -p $HOME/.kube \
  2. && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config \
  3. && chown $(id -u):$(id -g) $HOME/.kube/config

Run the following on master03; note the --experimental-control-plane flag.

  1. [root@master03 ~]# kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
  2. > --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
  3. > --experimental-control-plane
  4. [preflight] Running pre-flight checks
  5. [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  6. [preflight] Reading configuration from the cluster...
  7. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  8. [preflight] Running pre-flight checks before initializing the new control plane instance
  9. [preflight] Pulling images required for setting up a Kubernetes cluster
  10. [preflight] This might take a minute or two, depending on the speed of your internet connection
  11. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  12. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  13. [certs] Generating "front-proxy-client" certificate and key
  14. [certs] Generating "etcd/peer" certificate and key
  15. [certs] etcd/peer serving cert is signed for DNS names [master03 localhost] and IPs [192.168.4.133 127.0.0.1 ::1]
  16. [certs] Generating "etcd/healthcheck-client" certificate and key
  17. [certs] Generating "etcd/server" certificate and key
  18. [certs] etcd/server serving cert is signed for DNS names [master03 localhost] and IPs [192.168.4.133 127.0.0.1 ::1]
  19. [certs] Generating "apiserver-etcd-client" certificate and key
  20. [certs] Generating "apiserver" certificate and key
  21. [certs] apiserver serving cert is signed for DNS names [master03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.133 192.168.4.110]
  22. [certs] Generating "apiserver-kubelet-client" certificate and key
  23. [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
  24. [certs] Using the existing "sa" key
  25. [kubeconfig] Generating kubeconfig files
  26. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  27. [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  28. [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
  29. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  30. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  31. [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  32. [control-plane] Creating static Pod manifest for "kube-apiserver"
  33. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  34. [control-plane] Creating static Pod manifest for "kube-scheduler"
  35. [check-etcd] Checking that the etcd cluster is healthy
  36. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
  37. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  38. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  39. [kubelet-start] Activating the kubelet service
  40. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  41. [etcd] Announced new etcd member joining to the existing etcd cluster
  42. [etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
  43. [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
  44. [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  45. [mark-control-plane] Marking the node master03 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  46. [mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  47. This node has joined the cluster and a new control plane instance was created:
  48. * Certificate signing request was sent to apiserver and approval was received.
  49. * The Kubelet was informed of the new secure connection details.
  50. * Control plane (master) label and taint were applied to the new node.
  51. * The Kubernetes control plane instances scaled up.
  52. * A new etcd member was added to the local/stacked etcd cluster.
  53. To start administering your cluster from this node, you need to run the following as a regular user:
  54. mkdir -p $HOME/.kube
  55. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  56. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  57. Run 'kubectl get nodes' to see this node join the cluster.
  1. [root@master03 ~]# mkdir -p $HOME/.kube \
  2. > && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config \
  3. > && chown $(id -u):$(id -g) $HOME/.kube/config
  4. [root@master03 ~]# kubectl get nodes
  5. NAME STATUS ROLES AGE VERSION
  6. master01 NotReady master 15m v1.14.2
  7. master02 NotReady master 3m40s v1.14.2
  8. master03 NotReady master 2m1s v1.14.2

8. Deploy the worker nodes

Run this on node01 and node03; note that there is no --experimental-control-plane flag.

Note: the token has a limited lifetime. If the old token has expired, run kubeadm token create --print-join-command on a master node to create a new one.

Run the following command on node01 and node03:

  1. kubeadm join 192.168.4.110:8443 --token lwsk91.y2ywpq0y74wt03tb --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1

9. Deploy the calico network plugin

9.1 Download calico.yaml

  1. wget -c https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

9.2 Modify calico.yaml (adjust to your environment)

Change the value under CALICO_IPV4POOL_CIDR; the default is 192.168.0.0/16.

  1. # The default IPv4 pool to create on startup if none exists. Pod IPs will be
  2. # chosen from this range. Changing this value after installation will have
  3. # no effect. This should fall within `--cluster-cidr`.
  4. - name: CALICO_IPV4POOL_CIDR
  5.   value: "10.209.0.0/16"
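
The same change can be scripted (a hedged one-liner; it assumes calico.yaml still contains the stock 192.168.0.0/16 default):

    sed -i 's#192.168.0.0/16#10.209.0.0/16#g' calico.yaml
    grep -A1 CALICO_IPV4POOL_CIDR calico.yaml   # confirm the new value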

9.3 Run kubectl apply -f calico.yaml

  1. [root@master01 ~]# kubectl apply -f calico.yaml
  2. configmap/calico-config created
  3. customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  4. customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  5. customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  6. customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  7. customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  8. customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  9. customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  10. customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  11. customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  12. customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  13. customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  14. customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  15. customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  16. clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  17. clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  18. clusterrole.rbac.authorization.k8s.io/calico-node created
  19. clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  20. daemonset.extensions/calico-node created
  21. serviceaccount/calico-node created
  22. deployment.extensions/calico-kube-controllers created
  23. serviceaccount/calico-kube-controllers created

9.4 Check node status

Before the network plugin was installed the nodes showed NotReady; after calico is deployed they turn Ready, which means the cluster is ready and we can move on to verifying that it works. (node01 and node03 have only just joined and become Ready once their calico images finish pulling.)

  1. [root@master01 ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master01 Ready master 23h v1.14.2
  4. master02 Ready master 22h v1.14.2
  5. master03 Ready master 22h v1.14.2
  6. node01 NotReady <none> 19m v1.14.2
  7. node03 NotReady <none> 5s v1.14.2

10. Enable ipvs mode for kube-proxy [run on one master node]

10.1 Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"

  1. kubectl edit cm kube-proxy -n kube-system
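
A non-interactive alternative (a sketch; it assumes the mode field is still the empty default of a fresh 1.14 install):

    kubectl -n kube-system get cm kube-proxy -o yaml \
      | sed 's/mode: ""/mode: "ipvs"/' \
      | kubectl apply -f -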

10.2 Then restart the kube-proxy pods on all nodes:

  1. [root@master01 ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
  2. pod "kube-proxy-8fpjb" deleted
  3. pod "kube-proxy-dqqxh" deleted
  4. pod "kube-proxy-mxvz2" deleted
  5. pod "kube-proxy-np9x9" deleted
  6. pod "kube-proxy-rtzcn" deleted

10.3 Check the kube-proxy pod status

  1. [root@master01 ~]# kubectl get pod -n kube-system | grep kube-proxy
  2. kube-proxy-4fhpg 1/1 Running 0 81s
  3. kube-proxy-9f2x6 1/1 Running 0 109s
  4. kube-proxy-cxl5m 1/1 Running 0 89s
  5. kube-proxy-lvp9q 1/1 Running 0 78s
  6. kube-proxy-v4mg8 1/1 Running 0 99s

10.4 Verify that ipvs is enabled

The log prints "Using ipvs Proxier", which confirms that ipvs mode is active.

  1. [root@master01 ~]# kubectl logs kube-proxy-4fhpg -n kube-system
  2. I0705 07:53:05.254157 1 server_others.go:176] Using ipvs Proxier.
  3. W0705 07:53:05.255130 1 proxier.go:380] clusterCIDR not specified, unable to distinguish between internal and external traffic
  4. W0705 07:53:05.255181 1 proxier.go:386] IPVS scheduler not specified, use rr by default
  5. I0705 07:53:05.255599 1 server.go:562] Version: v1.14.2
  6. I0705 07:53:05.280930 1 conntrack.go:52] Setting nf_conntrack_max to 131072
  7. I0705 07:53:05.281426 1 config.go:102] Starting endpoints config controller
  8. I0705 07:53:05.281473 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
  9. I0705 07:53:05.281523 1 config.go:202] Starting service config controller
  10. I0705 07:53:05.281548 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
  11. I0705 07:53:05.381724 1 controller_utils.go:1034] Caches are synced for endpoints config controller
  12. I0705 07:53:05.381772 1 controller_utils.go:1034] Caches are synced for service config controller

11. Check the ipvs state

  1. [root@master01 ~]# ipvsadm -L -n
  2. IP Virtual Server version 1.2.1 (size=4096)
  3. Prot LocalAddress:Port Scheduler Flags
  4. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  5. TCP 10.209.0.1:443 rr
  6. -> 192.168.4.129:6443 Masq 1 0 0
  7. -> 192.168.4.130:6443 Masq 1 0 0
  8. -> 192.168.4.133:6443 Masq 1 0 0
  9. TCP 10.209.0.10:53 rr
  10. -> 10.209.59.193:53 Masq 1 0 0
  11. -> 10.209.59.194:53 Masq 1 0 0
  12. TCP 10.209.0.10:9153 rr
  13. -> 10.209.59.193:9153 Masq 1 0 0
  14. -> 10.209.59.194:9153 Masq 1 0 0
  15. UDP 10.209.0.10:53 rr
  16. -> 10.209.59.193:53 Masq 1 0 0
  17. -> 10.209.59.194:53 Masq 1 0 0

12. Run a test container

  1. [root@master01 ~]# kubectl run nginx --image=nginx:1.14 --replicas=2
  2. kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  3. deployment.apps/nginx created

12.1 Check the nginx pods

  1. [root@master01 ~]# kubectl get pods -o wide
  2. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  3. nginx-84b67f57c4-d9k8m 1/1 Running 0 59s 10.209.196.129 node01 <none> <none>
  4. nginx-84b67f57c4-zcrxn 1/1 Running 0 59s 10.209.186.193 node03 <none> <none>

12.2 Test nginx with curl

  1. [root@master01 ~]# curl 10.209.196.129
  2. <!DOCTYPE html>
  3. <html>
  4. <head>
  5. <title>Welcome to nginx!</title>
  6. <style>
  7. body {
  8. width: 35em;
  9. margin: 0 auto;
  10. font-family: Tahoma, Verdana, Arial, sans-serif;
  11. }
  12. </style>
  13. </head>
  14. <body>
  15. <h1>Welcome to nginx!</h1>
  16. <p>If you see this page, the nginx web server is successfully installed and
  17. working. Further configuration is required.</p>
  18. <p>For online documentation and support please refer to
  19. <a href="http://nginx.org/">nginx.org</a>.<br/>
  20. Commercial support is available at
  21. <a href="http://nginx.com/">nginx.com</a>.</p>
  22. <p><em>Thank you for using nginx.</em></p>
  23. </body>
  24. </html>
  25. [root@master01 ~]# curl 10.209.186.193
  26. <!DOCTYPE html>
  27. <html>
  28. <head>
  29. <title>Welcome to nginx!</title>
  30. <style>
  31. body {
  32. width: 35em;
  33. margin: 0 auto;
  34. font-family: Tahoma, Verdana, Arial, sans-serif;
  35. }
  36. </style>
  37. </head>
  38. <body>
  39. <h1>Welcome to nginx!</h1>
  40. <p>If you see this page, the nginx web server is successfully installed and
  41. working. Further configuration is required.</p>
  42. <p>For online documentation and support please refer to
  43. <a href="http://nginx.org/">nginx.org</a>.<br/>
  44. Commercial support is available at
  45. <a href="http://nginx.com/">nginx.com</a>.</p>
  46. <p><em>Thank you for using nginx.</em></p>
  47. </body>
  48. </html>

Seeing "Welcome to nginx" means the pods are running correctly, which also indirectly shows that the cluster works.
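
To also exercise the Service and ipvs path (an optional, added check using standard kubectl commands):

    kubectl expose deployment nginx --port=80 --target-port=80
    NGINX_SVC=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
    curl -s "$NGINX_SVC" | grep -i '<title>'
    ipvsadm -L -n | grep -A2 "$NGINX_SVC"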

13. Test DNS

Inside the container, run nslookup kubernetes.default.

  1. [root@master01 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
  2. kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  3. If you don't see a command prompt, try pressing enter.
  4. [ root@curl-66bdcf564-njcqk:/ ]$ nslookup kubernetes.default
  5. Server: 10.209.0.10
  6. Address 1: 10.209.0.10 kube-dns.kube-system.svc.cluster.local
  7. Name: kubernetes.default
  8. Address 1: 10.209.0.1 kubernetes.default.svc.cluster.local # output like this means DNS is working

At this point the kubernetes cluster deployment is complete.

14. Common errors

14.1 Image pull failures

  1. ct: connection timed out
  2. Warning FailedCreatePodSandBox 3m44s (x17 over 28m) kubelet, node01 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 74.125.204.82:443: connect: connection timed out

Solution

Pull the corresponding image from another registry first, then re-tag it with docker tag.

  1. docker pull mirrorgooglecontainers/pause:3.1
  2. docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
