The principles and architecture diagram are covered in the previous post; this one only records the operational steps. There is a lot to cover, so it will be fairly long.

etcd version: 3.2.11

kube version: 1.8.4

contiv version: 1.1.7

docker version: 17.03.2-ce

OS version: debian stretch

Three etcd nodes (the contiv plugin also needs etcd, so each node runs two etcd instances):

  1. 192.168.5.84 etcd0,contiv0
  2. 192.168.5.85 etcd1,contiv1
  3. 192.168.2.77 etcd2,contiv2

Two LVS nodes. LVS here proxies three services: the apiserver, contiv's netmaster, and the three etcd instances. Contiv cannot be configured with multiple etcd endpoints, so a single VIP is exposed in front of the etcd instances for it:

  1. 192.168.2.56 master
  2. 192.168.2.57 backup

Four k8s nodes (3 masters, 1 node):

  1. 192.168.5.62 master01
  2. 192.168.5.63 master02
  3. 192.168.5.107 master03
  4. 192.168.5.68 node

1. Deploy etcd. These nodes run an older OS release, so systemd is not used.

a. Deploy the etcd cluster used by k8s. The etcd binary is started directly; the startup scripts (one per node) are as follows:

    # cat etcd-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig em2|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name etcd0 --data-dir /var/lib/etcd \
    --initial-advertise-peer-urls http://${localip}:2380 \
    --listen-peer-urls http://${localip}:2380 \
    --listen-client-urls http://${pubip}:2379 \
    --advertise-client-urls http://${pubip}:2379 \
    --initial-cluster-token my-etcd-token \
    --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
    --initial-cluster-state new >> /var/log/etcd.log 2>&1 &

    # cat etcd-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig em2|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name etcd1 --data-dir /var/lib/etcd \
    --initial-advertise-peer-urls http://${localip}:2380 \
    --listen-peer-urls http://${localip}:2380 \
    --listen-client-urls http://${pubip}:2379 \
    --advertise-client-urls http://${pubip}:2379 \
    --initial-cluster-token my-etcd-token \
    --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
    --initial-cluster-state new >> /var/log/etcd.log 2>&1 &

    # cat etcd-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig bond0|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name etcd2 --data-dir /var/lib/etcd \
    --initial-advertise-peer-urls http://${localip}:2380 \
    --listen-peer-urls http://${localip}:2380 \
    --listen-client-urls http://${pubip}:2379 \
    --advertise-client-urls http://${pubip}:2379 \
    --initial-cluster-token my-etcd-token \
    --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
    --initial-cluster-state new >> /var/log/etcd.log 2>&1 &
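The three launch scripts differ only in the member name and the NIC. As a sketch (not part of the original post), the shared `--initial-cluster` string can be built from a single member table, which keeps the per-node scripts from drifting apart; the names and IPs below come from the inventory at the top:

```shell
#!/bin/bash
# Sketch: derive the --initial-cluster flag from one member table instead of
# repeating it in three scripts. Names/IPs are the etcd inventory above.
declare -A MEMBERS=(
  [etcd0]=192.168.5.84
  [etcd1]=192.168.5.85
  [etcd2]=192.168.2.77
)
cluster=""
for name in etcd0 etcd1 etcd2; do
  # append "name=http://ip:2380", comma-separated after the first entry
  cluster+="${cluster:+,}${name}=http://${MEMBERS[$name]}:2380"
done
echo "$cluster"
```

Each node's script would then pass `--initial-cluster "$cluster"` instead of the hard-coded list.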

b. Deploy the etcd cluster used by contiv:

    # cat etcd-2-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig em2|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name contiv0 --data-dir /var/etcd/contiv-data \
    --initial-advertise-peer-urls http://${localip}:6667 \
    --listen-peer-urls http://${localip}:6667 \
    --listen-client-urls http://${pubip}:6666 \
    --advertise-client-urls http://${pubip}:6666 \
    --initial-cluster-token contiv-etcd-token \
    --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
    --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &

    # cat etcd-2-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig em2|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name contiv1 --data-dir /var/etcd/contiv-data \
    --initial-advertise-peer-urls http://${localip}:6667 \
    --listen-peer-urls http://${localip}:6667 \
    --listen-client-urls http://${pubip}:6666 \
    --advertise-client-urls http://${pubip}:6666 \
    --initial-cluster-token contiv-etcd-token \
    --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
    --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &

    # cat etcd-2-start.sh
    #!/bin/bash
    # get the local IP
    localip=`ifconfig bond0|grep -w inet| awk '{print $2}'|awk -F: '{print $2}'`
    pubip=0.0.0.0
    # start the service
    etcd --name contiv2 --data-dir /var/etcd/contiv-data \
    --initial-advertise-peer-urls http://${localip}:6667 \
    --listen-peer-urls http://${localip}:6667 \
    --listen-client-urls http://${pubip}:6666 \
    --advertise-client-urls http://${pubip}:6666 \
    --initial-cluster-token contiv-etcd-token \
    --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
    --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &

c. Start the services by running the scripts directly:

    # bash etcd-start.sh
    # bash etcd-2-start.sh

d. Verify the cluster status:

    # etcdctl member list
    4e2d8913b0f6d79d, started, etcd2, http://192.168.2.77:2380, http://0.0.0.0:2379
    7b72fa2df0544e1b, started, etcd0, http://192.168.5.84:2380, http://0.0.0.0:2379
    930f118a7f33cf1c, started, etcd1, http://192.168.5.85:2380, http://0.0.0.0:2379

    # etcdctl --endpoints=http://192.168.6.17:6666 member list
    21868a2f15be0a01, started, contiv0, http://192.168.5.84:6667, http://0.0.0.0:6666
    63df25ae8bd96b52, started, contiv1, http://192.168.5.85:6667, http://0.0.0.0:6666
    cf59e48c1866f41d, started, contiv2, http://192.168.2.77:6667, http://0.0.0.0:6666

e. Configure LVS to proxy contiv's etcd; the VIP is 192.168.6.17. The proxy configuration for the other two services is included here as well — it only adds two more virtual_server blocks. The apiserver VIP is 192.168.6.16.

    # vim vi_bgp_VI1_yizhuang.inc
    vrrp_instance VII_1 {
        virtual_router_id 102
        interface eth0
        include /etc/keepalived/state_VI1.conf
        preempt_delay 120
        garp_master_delay 0
        garp_master_refresh 5
        lvs_sync_daemon_interface eth0
        authentication {
            auth_type PASS
            auth_pass opsdk
        }
        virtual_ipaddress {
            #k8s-apiserver
            192.168.6.16
            #etcd
            192.168.6.17
        }
    }

A separate state config file distinguishes the master and backup roles; it is the only part that differs between the two nodes, so everything else can be copied over verbatim.

    # vim /etc/keepalived/state_VI1.conf
    #uy-s-07
    state MASTER
    priority 150
    #uy-s-45
    # state BACKUP
    # priority 100

    # vim /etc/keepalived/k8s.conf
    virtual_server 192.168.6.16 6443 {
        lb_algo rr
        lb_kind DR
        persistence_timeout 0
        delay_loop 20
        protocol TCP
        real_server 192.168.5.62 6443 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.5.63 6443 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.5.107 6443 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
    }
    virtual_server 192.168.6.17 6666 {
        lb_algo rr
        lb_kind DR
        persistence_timeout 0
        delay_loop 20
        protocol TCP
        real_server 192.168.5.84 6666 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.5.85 6666 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.2.77 6666 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
    }
    virtual_server 192.168.6.16 9999 {
        lb_algo rr
        lb_kind DR
        persistence_timeout 0
        delay_loop 20
        protocol TCP
        real_server 192.168.5.62 9999 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.5.63 9999 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
        real_server 192.168.5.107 9999 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
            }
        }
    }

Configure the VIP on each of etcd's real servers:

    # vim /etc/network/interfaces
    auto lo:17
    iface lo:17 inet static
        address 192.168.6.17
        netmask 255.255.255.255

    # ifconfig lo:17 192.168.6.17 netmask 255.255.255.255 up

Configure the VIP on each of the apiserver's real servers:

    # vim /etc/network/interfaces
    auto lo:16
    iface lo:16 inet static
        address 192.168.6.16
        netmask 255.255.255.255

    # ifconfig lo:16 192.168.6.16 netmask 255.255.255.255 up

Set the kernel parameters required for LVS DR mode on all real servers:

    # vim /etc/sysctl.conf
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.ip_forward = 1
    net.netfilter.nf_conntrack_max = 2048000

Start keepalived and check the service status:

    # /etc/init.d/keepalived start
    # ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=1048576)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.6.16:6443 rr
      -> 192.168.5.62:6443            Route   10     1          0
      -> 192.168.5.63:6443            Route   10     0          0
      -> 192.168.5.107:6443           Route   10     4          0
    TCP  192.168.6.16:9999 rr
      -> 192.168.5.62:9999            Route   10     0          0
      -> 192.168.5.63:9999            Route   10     0          0
      -> 192.168.5.107:9999           Route   10     0          0
    TCP  192.168.6.17:6666 rr
      -> 192.168.2.77:6666            Route   10     24         14
      -> 192.168.5.84:6666            Route   10     22         13
      -> 192.168.5.85:6666            Route   10     18         14

2. Deploy k8s. The previous post covered the detailed steps, so some content is skipped here.

a. Install kubeadm, kubectl and kubelet. The repository has since moved on to 1.9, so installing this older version requires pinning the version explicitly:

    # aptitude install -y kubeadm=1.8.4-00 kubectl=1.8.4-00 kubelet=1.8.4-00

b. Initialize the first master node with kubeadm. Since the contiv plugin is used, the podSubnet network parameter could actually be left unset here: contiv does not use the controller-manager's subnet-allocating feature (neither does weave).

    # cat kubeadm-config.yml
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: "192.168.5.62"
    etcd:
      endpoints:
      - "http://192.168.5.84:2379"
      - "http://192.168.5.85:2379"
      - "http://192.168.2.77:2379"
    kubernetesVersion: "v1.8.4"
    apiServerCertSANs:
    - uy06-04
    - uy06-05
    - uy08-10
    - uy08-11
    - 192.168.6.16
    - 192.168.6.17
    - 127.0.0.1
    - 192.168.5.62
    - 192.168.5.63
    - 192.168.5.107
    - 192.168.5.108
    - 30.0.0.1
    - 10.244.0.1
    - 10.96.0.1
    - kubernetes
    - kubernetes.default
    - kubernetes.default.svc
    - kubernetes.default.svc.cluster
    - kubernetes.default.svc.cluster.local
    tokenTTL: 0s
    networking:
      podSubnet: 30.0.0.0/10

Run the initialization:

    # kubeadm init --config=kubeadm-config.yml
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [init] Using Kubernetes version: v1.8.4
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks
    [preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight] Starting the kubelet service
    [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [uy06-04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy06-04 uy06-05 uy08-10 uy08-11 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.62 192.168.6.16 192.168.6.17 127.0.0.1 192.168.5.62 192.168.5.63 192.168.5.107 192.168.5.108 30.0.0.1 10.244.0.1 10.96.0.1]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] This often takes around a minute; or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 28.502953 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node uy06-04 as master by adding a label and a taint
    [markmaster] Master uy06-04 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 0c8921.578cf94fe0721e01
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes master has initialized successfully!
    To start using your cluster, you need to run (as a regular user):
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/
    You can now join any number of machines by running the following on each node
    as root:
      kubeadm join --token 0c8921.578cf94fe0721e01 192.168.5.62:6443 --discovery-token-ca-cert-hash sha256:58cf1826d49e44fb6ff1590ddb077dd4e530fe58e13c1502ec07ce41ba6cc39e

c. Verify that the API is reachable with the certificates (be sure to do this on every node — certificate problems cause all sorts of other issues):

    # cd /etc/kubernetes/pki/
    # curl --cacert ca.crt --cert apiserver-kubelet-client.crt --key apiserver-kubelet-client.key https://192.168.5.62:6443

d. Allow the master nodes to take part in scheduling:

    # kubectl taint nodes --all node-role.kubernetes.io/master-

e. Install contiv

Download and unpack the release:

    # curl -L -O https://github.com/contiv/install/releases/download/1.1.7/contiv-1.1.7.tgz
    # tar xvf contiv-1.1.7.tgz

Edit the yaml file:

    # cd contiv-1.1.7/
    # vim install/k8s/k8s1.6/contiv.yaml

Three changes are needed:

  1. Change the CA path and copy the k8s CA file to that path:

         "K8S_CA": "/var/contiv/ca.crt"

  2. Change netmaster's deployment type from ReplicaSet to DaemonSet (this is what makes netmaster highly available). A nodeSelector is used, so all three masters must carry the master label:

         nodeSelector:
           node-role.kubernetes.io/master: ""

  3. Comment out the replicas directive.

Two more things to note:

  • Copy the certificate files under /var/contiv/ to all three master nodes; the netmaster pods mount and use them.
  • On every node except the first, create the /var/run/contiv/ directory manually — netplugin writes two socket files there and cannot create them unless the directory exists.
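The two notes above can be scripted. A minimal sketch follows; the PREFIX indirection is only there so it can be dry-run in a scratch directory (set it empty on a real node), and 192.168.5.62 is assumed to be the first master holding the certs:

```shell
#!/bin/bash
# Sketch of the per-node contiv prep described above.
# PREFIX is for dry-running only; use PREFIX="" on a real node.
set -e
PREFIX=${PREFIX:-/tmp/contiv-prep-demo}
mkdir -p "${PREFIX}/var/run/contiv"   # netplugin writes its two sockets here
mkdir -p "${PREFIX}/var/contiv"       # netmaster mounts the certs from here
# On a real node, pull the certs from the first master (assumed 192.168.5.62):
# scp -r 192.168.5.62:/var/contiv/* "${PREFIX}/var/contiv/"
ls -d "${PREFIX}/var/run/contiv"
```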

Contiv provides an install script; run it to install:

    # ./install/k8s/install.sh -n 192.168.6.16 -w routing -s etcd://192.168.6.17:6666
    Installing Contiv for Kubernetes
    secret "aci.key" created
    Generating local certs for Contiv Proxy
    Setting installation parameters
    Applying contiv installation
    To customize the installation press Ctrl+C and edit ./.contiv.yaml.
    Extracting netctl from netplugin container
    dafec6d9f0036d4743bf4b8a51797ddd19f4402eb6c966c417acf08922ad59bb
    clusterrolebinding "contiv-netplugin" created
    clusterrole "contiv-netplugin" created
    serviceaccount "contiv-netplugin" created
    clusterrolebinding "contiv-netmaster" created
    clusterrole "contiv-netmaster" created
    serviceaccount "contiv-netmaster" created
    configmap "contiv-config" created
    daemonset "contiv-netplugin" created
    daemonset "contiv-netmaster" created
    Creating network default:contivh1
    daemonset "contiv-netplugin" deleted
    clusterrolebinding "contiv-netplugin" configured
    clusterrole "contiv-netplugin" configured
    serviceaccount "contiv-netplugin" unchanged
    clusterrolebinding "contiv-netmaster" configured
    clusterrole "contiv-netmaster" configured
    serviceaccount "contiv-netmaster" unchanged
    configmap "contiv-config" unchanged
    daemonset "contiv-netplugin" created
    daemonset "contiv-netmaster" configured
    Installation is complete
    =========================================================
    Contiv UI is available at https://192.168.6.16:10000
    Please use the first run wizard or configure the setup as follows:
    Configure forwarding mode (optional, default is routing).
    netctl global set --fwd-mode routing
    Configure ACI mode (optional)
    netctl global set --fabric-mode aci --vlan-range <start>-<end>
    Create a default network
    netctl net create -t default --subnet=<CIDR> default-net
    For example, netctl net create -t default --subnet=20.1.1.0/24 -g 20.1.1.1 default-net
    =========================================================

Three flags are used here:

  1. -n: the netmaster address. For high availability, three netmasters are run here, with LVS proxying them behind a VIP.
  2. -w: the forwarding mode.
  3. -s: the external etcd address. When an external etcd is specified, no etcd container is created and nothing needs to be handled manually.

Contiv also ships a UI, listening on port 10000 (the install output above prints the URL), through which the network can be managed. The default credentials are admin/admin.

That said, if you know what you are doing, the CLI is faster and more convenient.

Create a subnet:

    # netctl net create -t default --subnet=30.0.0.0/10 -g 30.0.0.1 default-net
    # netctl network ls
    Tenant   Network      Nw Type  Encap type  Packet tag  Subnet        Gateway    IPv6Subnet  IPv6Gateway  Cfgd Tag
    ------   -------      -------  ----------  ----------  -------       ------     ----------  -----------  ---------
    default  contivh1     infra    vxlan       0           132.1.1.0/24  132.1.1.1
    default  default-net  data     vxlan       0           30.0.0.0/10   30.0.0.1

Once the network is created, the kube-dns pod can obtain an IP address and start running.

f. Deploy the other two master nodes

Copy all the configuration files and certificates over from the first node:

    # scp -r 192.168.5.62:/etc/kubernetes/* /etc/kubernetes/

Generate a fresh set of certificates for the new master node:

    # cat uy06-05.sh
    #!/bin/bash
    #apiserver-kubelet-client
    openssl genrsa -out apiserver-kubelet-client.key 2048
    openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
    openssl x509 -req -set_serial $(date +%s%N) -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -out apiserver-kubelet-client.crt -days 365 -extensions v3_req -extfile apiserver-kubelet-client-openssl.cnf
    #controller-manager
    openssl genrsa -out controller-manager.key 2048
    openssl req -new -key controller-manager.key -out controller-manager.csr -subj "/CN=system:kube-controller-manager"
    openssl x509 -req -set_serial $(date +%s%N) -in controller-manager.csr -CA ca.crt -CAkey ca.key -out controller-manager.crt -days 365 -extensions v3_req -extfile controller-manager-openssl.cnf
    #scheduler
    openssl genrsa -out scheduler.key 2048
    openssl req -new -key scheduler.key -out scheduler.csr -subj "/CN=system:kube-scheduler"
    openssl x509 -req -set_serial $(date +%s%N) -in scheduler.csr -CA ca.crt -CAkey ca.key -out scheduler.crt -days 365 -extensions v3_req -extfile scheduler-openssl.cnf
    #admin
    openssl genrsa -out admin.key 2048
    openssl req -new -key admin.key -out admin.csr -subj "/O=system:masters/CN=kubernetes-admin"
    openssl x509 -req -set_serial $(date +%s%N) -in admin.csr -CA ca.crt -CAkey ca.key -out admin.crt -days 365 -extensions v3_req -extfile admin-openssl.cnf
    #node
    openssl genrsa -out $(hostname).key 2048
    openssl req -new -key $(hostname).key -out $(hostname).csr -subj "/O=system:nodes/CN=system:node:$(hostname)" -config kubelet-openssl.cnf
    openssl x509 -req -set_serial $(date +%s%N) -in $(hostname).csr -CA ca.crt -CAkey ca.key -out $(hostname).crt -days 365 -extensions v3_req -extfile kubelet-openssl.cnf

This generates four sets of certificates; the openssl configuration files they use are in fact identical:

    [ v3_req ]
    # Extensions to add to a certificate request
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = clientAuth
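As a sanity check, the signing recipe plus the v3_req section above can be exercised end to end with a throwaway CA. This is a self-contained sketch; every file name here is a scratch file, not one of the real /etc/kubernetes/pki certificates:

```shell
#!/bin/bash
# Demo of the signing pattern from uy06-05.sh, against a throwaway CA.
set -e
cd "$(mktemp -d)"
# the shared v3_req extension section, exactly as in the post
cat > client-openssl.cnf <<'EOF'
[ v3_req ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
# throwaway CA (the real setup uses the cluster CA from /etc/kubernetes/pki)
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -days 1 -subj "/CN=demo-ca" -out ca.crt
# client cert, same recipe as the admin cert in the script above
openssl genrsa -out admin.key 2048 2>/dev/null
openssl req -new -key admin.key -subj "/O=system:masters/CN=kubernetes-admin" -out admin.csr
openssl x509 -req -set_serial "$(date +%s%N)" -in admin.csr -CA ca.crt -CAkey ca.key \
  -out admin.crt -days 365 -extensions v3_req -extfile client-openssl.cnf 2>/dev/null
# inspect: subject should carry the RBAC group/user, EKU should be clientAuth
openssl x509 -in admin.crt -noout -subject
```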

Replace the old certificates with the new ones. Of these sets, only the apiserver-kubelet-client certificate is referenced by file path; the others are embedded directly in the kubeconfig files as base64-encoded content:

    #!/bin/bash
    VIP=192.168.5.62
    APISERVER_PORT=6443
    HOSTNAME=$(hostname)
    CA_CRT=$(cat ca.crt |base64 -w0)
    CA_KEY=$(cat ca.key |base64 -w0)
    ADMIN_CRT=$(cat admin.crt |base64 -w0)
    ADMIN_KEY=$(cat admin.key |base64 -w0)
    CONTROLLER_CRT=$(cat controller-manager.crt |base64 -w0)
    CONTROLLER_KEY=$(cat controller-manager.key |base64 -w0)
    KUBELET_CRT=$(cat $(hostname).crt |base64 -w0)
    KUBELET_KEY=$(cat $(hostname).key |base64 -w0)
    SCHEDULER_CRT=$(cat scheduler.crt |base64 -w0)
    SCHEDULER_KEY=$(cat scheduler.key |base64 -w0)
    #admin
    sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/ADMIN_CRT/$ADMIN_CRT/g" -e "s/ADMIN_KEY/$ADMIN_KEY/g" admin.temp > admin.conf
    cp -a admin.conf /etc/kubernetes/admin.conf
    #kubelet
    sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/HOSTNAME/$HOSTNAME/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/CA_KEY/$CA_KEY/g" -e "s/KUBELET_CRT/$KUBELET_CRT/g" -e "s/KUBELET_KEY/$KUBELET_KEY/g" kubelet.temp > kubelet.conf
    cp -a kubelet.conf /etc/kubernetes/kubelet.conf
    #controller-manager
    sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/CONTROLLER_CRT/$CONTROLLER_CRT/g" -e "s/CONTROLLER_KEY/$CONTROLLER_KEY/g" controller-manager.temp > controller-manager.conf
    cp -a controller-manager.conf /etc/kubernetes/controller-manager.conf
    #scheduler
    sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/SCHEDULER_CRT/$SCHEDULER_CRT/g" -e "s/SCHEDULER_KEY/$SCHEDULER_KEY/g" scheduler.temp > scheduler.conf
    cp -a scheduler.conf /etc/kubernetes/scheduler.conf
    #manifest kube-apiserver-client
    cp -a apiserver-kubelet-client.key /etc/kubernetes/pki/
    cp -a apiserver-kubelet-client.crt /etc/kubernetes/pki/
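The *.temp files this script consumes are not shown in the post. For reference, admin.temp is presumably a standard kubeconfig with placeholder tokens matching the sed patterns above; the following is a sketch under that assumption (the field layout is the normal kubeconfig format, the placeholder names are taken from the script), with the substitution demonstrated on dummy values:

```shell
#!/bin/bash
# Assumed shape of admin.temp: a kubeconfig with placeholder tokens that the
# replacement script fills in via sed. Dummy values are used here for the demo.
set -e
cd "$(mktemp -d)"
cat > admin.temp <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: CA_CRT
    server: https://VIP:APISERVER_PORT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: ADMIN_CRT
    client-key-data: ADMIN_KEY
EOF
# the same substitution the replacement script performs, with dummy values
sed -e "s/VIP/192.168.5.62/g" -e "s/APISERVER_PORT/6443/g" \
    -e "s/CA_CRT/dummyca/g" -e "s/ADMIN_CRT/dummycrt/g" -e "s/ADMIN_KEY/dummykey/g" \
    admin.temp > admin.conf
grep server: admin.conf
```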

Also, because contiv's netmaster uses a nodeSelector, remember to give the two newly added masters the master role label as well — by default, newly joined nodes carry no role label.

    # kubectl label node uy06-05 node-role.kubernetes.io/master=
    # kubectl label node uy08-10 node-role.kubernetes.io/master=

After replacing the certificates, change every place in the cluster that accesses the apiserver to use the VIP, and set advertise-address to the local address. Remember to restart the kubelet service after changing the local configs.

    # sed -i "s@192.168.5.62@192.168.6.16@g" admin.conf
    # sed -i "s@192.168.5.62@192.168.6.16@g" controller-manager.conf
    # sed -i "s@192.168.5.62@192.168.6.16@g" kubelet.conf
    # sed -i "s@192.168.5.62@192.168.6.16@g" scheduler.conf

    # kubectl edit cm cluster-info -n kube-public
    # kubectl edit cm kube-proxy -n kube-system

    # vim manifests/kube-apiserver.yaml
    --advertise-address=192.168.5.63

    # systemctl restart kubelet

g. Verify: try joining the node to the cluster by pointing kubeadm join at the apiserver VIP.

    # kubeadm join --token 0c8921.578cf94fe0721e01 192.168.6.16:6443 --discovery-token-ca-cert-hash sha256:58cf1826d49e44fb6ff1590ddb077dd4e530fe58e13c1502ec07ce41ba6cc39e
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [preflight] Running pre-flight checks
    [preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [discovery] Trying to connect to API Server "192.168.6.16:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.6.16:6443"
    [discovery] Requesting info from "https://192.168.6.16:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.6.16:6443"
    [discovery] Successfully established connection with API Server "192.168.6.16:6443"
    [bootstrap] Detected server version: v1.8.4
    [bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
    Node join complete:
    * Certificate signing request sent to master and response
      received.
    * Kubelet informed of new secure connection details.
    Run 'kubectl get nodes' on the master to see this machine join.

h. With that, the whole kubernetes cluster is up.

    # kubectl get no
    NAME      STATUS  ROLES   AGE  VERSION
    uy06-04   Ready   master  1d   v1.8.4
    uy06-05   Ready   master  1d   v1.8.4
    uy08-10   Ready   master  1d   v1.8.4
    uy08-11   Ready   <none>  1d   v1.8.4

    # kubectl get po --all-namespaces
    NAMESPACE     NAME                                    READY  STATUS   RESTARTS  AGE
    development   snowflake-f88456558-55jk8               1/1    Running  0         3h
    development   snowflake-f88456558-5lkjr               1/1    Running  0         3h
    development   snowflake-f88456558-mm7hc               1/1    Running  0         3h
    development   snowflake-f88456558-tpbhw               1/1    Running  0         3h
    kube-system   contiv-netmaster-6ctqj                  3/3    Running  0         6h
    kube-system   contiv-netmaster-w4tx9                  3/3    Running  0         3h
    kube-system   contiv-netmaster-wrlgc                  3/3    Running  0         3h
    kube-system   contiv-netplugin-nbhkm                  2/2    Running  0         6h
    kube-system   contiv-netplugin-rf569                  2/2    Running  0         3h
    kube-system   contiv-netplugin-sczzk                  2/2    Running  0         3h
    kube-system   contiv-netplugin-tlf77                  2/2    Running  0         5h
    kube-system   heapster-59ff54b574-jq52w               1/1    Running  0         3h
    kube-system   heapster-59ff54b574-nhl56               1/1    Running  0         3h
    kube-system   heapster-59ff54b574-wchcr               1/1    Running  0         3h
    kube-system   kube-apiserver-uy06-04                  1/1    Running  0         7h
    kube-system   kube-apiserver-uy06-05                  1/1    Running  0         5h
    kube-system   kube-apiserver-uy08-10                  1/1    Running  0         3h
    kube-system   kube-controller-manager-uy06-04         1/1    Running  0         7h
    kube-system   kube-controller-manager-uy06-05         1/1    Running  0         5h
    kube-system   kube-controller-manager-uy08-10         1/1    Running  0         3h
    kube-system   kube-dns-545bc4bfd4-fcr9q               3/3    Running  0         7h
    kube-system   kube-dns-545bc4bfd4-ml52t               3/3    Running  0         3h
    kube-system   kube-dns-545bc4bfd4-p6d7r               3/3    Running  0         3h
    kube-system   kube-dns-545bc4bfd4-t8ttx               3/3    Running  0         3h
    kube-system   kube-proxy-bpdr9                        1/1    Running  0         3h
    kube-system   kube-proxy-cjnt5                        1/1    Running  0         5h
    kube-system   kube-proxy-l4w49                        1/1    Running  0         7h
    kube-system   kube-proxy-wmqgg                        1/1    Running  0         3h
    kube-system   kube-scheduler-uy06-04                  1/1    Running  0         7h
    kube-system   kube-scheduler-uy06-05                  1/1    Running  0         5h
    kube-system   kube-scheduler-uy08-10                  1/1    Running  0         3h
    kube-system   kubernetes-dashboard-5c54687f9c-ssklk   1/1    Running  0         3h
    production    frontend-987698689-7pc56                1/1    Running  0         3h
    production    redis-master-5f68fbf97c-jft59           1/1    Running  0         3h
    production    redis-slave-74855dfc5-2bfwj             1/1    Running  0         3h
    production    redis-slave-74855dfc5-rcrkm             1/1    Running  0         3h
    staging       cattle-5f67c7948b-2j8jf                 1/1    Running  0         2h
    staging       cattle-5f67c7948b-4zcft                 1/1    Running  0         2h
    staging       cattle-5f67c7948b-gk87r                 1/1    Running  0         2h
    staging       cattle-5f67c7948b-gzhc5                 1/1    Running  0         2h

    # kubectl get cs
    NAME                 STATUS   MESSAGE             ERROR
    scheduler            Healthy  ok
    controller-manager   Healthy  ok
    etcd-2               Healthy  {"health": "true"}
    etcd-0               Healthy  {"health": "true"}
    etcd-1               Healthy  {"health": "true"}

    # kubectl cluster-info
    Kubernetes master is running at https://192.168.6.16:6443
    Heapster is running at https://192.168.6.16:6443/api/v1/namespaces/kube-system/services/heapster/proxy
    KubeDNS is running at https://192.168.6.16:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy


Addendum:

By default, kubectl cannot view pod logs (the apiserver's kubelet-client user is not authorized against the kubelet API). To grant access:

    # vim kubelet.rbac.yaml
    # This role allows full access to the kubelet API
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kubelet-api-admin
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes/proxy
      - nodes/log
      - nodes/stats
      - nodes/metrics
      - nodes/spec
      verbs:
      - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: my-apiserver-kubelet-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubelet-api-admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kube-apiserver-kubelet-client

    # kubectl apply -f kubelet.rbac.yaml
