Automated installation of a k8s cluster with kubespray (Ansible)

https://github.com/kubernetes-incubator/kubespray

https://kubernetes.io/docs/setup/pick-right-solution/

kubespray is essentially a collection of Ansible roles; driven by Ansible, it can install a highly available k8s cluster automatically. It currently supports 1.9.

After installation, all k8s components run as containers based on the hyperkube image.
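
A quick way to see this on any node after the playbook finishes (a minimal check, not part of kubespray itself):

  # apiserver, controller-manager, scheduler and kube-proxy all run from the hyperkube image
  docker ps --format '{{.Names}}\t{{.Image}}' | grep hyperkube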

  • CentOS 7 is the recommended OS

  • Install docker (the startup flags kubespray adds to docker by default):

  /usr/bin/dockerd
  --insecure-registry=10.233.0.0/18
  --graph=/var/lib/docker
  --log-opt max-size=50m
  --log-opt max-file=5
  --iptables=false
  --dns 10.233.0.3
  --dns 114.114.114.114
  --dns-search default.svc.cluster.local
  --dns-search svc.cluster.local
  --dns-opt ndots:2
  --dns-opt timeout:2
  --dns-opt attempts:2
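
To confirm which flags the running daemon actually picked up on a node (a quick check; the exact unit layout may differ between kubespray versions):

  # show the docker unit kubespray installed, then the live command line
  systemctl cat docker.service
  ps -ef | grep [d]ockerd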

  - Plan: 3 masters (n1 n2 n3) and 2 worker nodes (n4 n5)
  - hosts
  192.168.2.11 n1.ma.com n1
  192.168.2.12 n2.ma.com n2
  192.168.2.13 n3.ma.com n3
  192.168.2.14 n4.ma.com n4
  192.168.2.15 n5.ma.com n5
  192.168.2.16 n6.ma.com n6
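
Ansible needs passwordless SSH from the control machine to every node; a minimal sketch, assuming root login and that the hosts entries above are already distributed:

  # generate a key once and push it to every node Ansible will manage
  ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
  for h in n1 n2 n3 n4 n5; do ssh-copy-id root@$h; done
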
  - Images referenced by the 1.9 ansible-playbook
  gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
  gcr.io/google_containers/pause-amd64:3.0
  gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
  gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
  gcr.io/google_containers/elasticsearch:v2.4.1
  gcr.io/google_containers/fluentd-elasticsearch:1.22
  gcr.io/google_containers/kibana:v4.6.1
  gcr.io/kubernetes-helm/tiller:v2.7.2
  gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
  gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
  quay.io/l23network/k8s-netchecker-agent:v1.0
  quay.io/l23network/k8s-netchecker-server:v1.0
  quay.io/coreos/etcd:v3.2.4
  quay.io/coreos/flannel:v0.9.1
  quay.io/coreos/flannel-cni:v0.3.0
  quay.io/calico/ctl:v1.6.1
  quay.io/calico/node:v2.6.2
  quay.io/calico/cni:v1.11.0
  quay.io/calico/kube-controllers:v1.0.0
  quay.io/calico/routereflector:v0.4.0
  quay.io/coreos/hyperkube:v1.9.0_coreos.0
  quay.io/ant31/kargo:master
  quay.io/external_storage/local-volume-provisioner-bootstrap:v1.0.0
  quay.io/external_storage/local-volume-provisioner:v1.0.0

  - The 1.9 images mirrored to Docker Hub
  lanny/gcr.io_google_containers_cluster-proportional-autoscaler-amd64:1.1.1
  lanny/gcr.io_google_containers_pause-amd64:3.0
  lanny/gcr.io_google_containers_k8s-dns-kube-dns-amd64:1.14.7
  lanny/gcr.io_google_containers_k8s-dns-dnsmasq-nanny-amd64:1.14.7
  lanny/gcr.io_google_containers_k8s-dns-sidecar-amd64:1.14.7
  lanny/gcr.io_google_containers_elasticsearch:v2.4.1
  lanny/gcr.io_google_containers_fluentd-elasticsearch:1.22
  lanny/gcr.io_google_containers_kibana:v4.6.1
  lanny/gcr.io_kubernetes-helm_tiller:v2.7.2
  lanny/gcr.io_google_containers_kubernetes-dashboard-init-amd64:v1.0.1
  lanny/gcr.io_google_containers_kubernetes-dashboard-amd64:v1.7.1
  lanny/quay.io_l23network_k8s-netchecker-agent:v1.0
  lanny/quay.io_l23network_k8s-netchecker-server:v1.0
  lanny/quay.io_coreos_etcd:v3.2.4
  lanny/quay.io_coreos_flannel:v0.9.1
  lanny/quay.io_coreos_flannel-cni:v0.3.0
  lanny/quay.io_calico_ctl:v1.6.1
  lanny/quay.io_calico_node:v2.6.2
  lanny/quay.io_calico_cni:v1.11.0
  lanny/quay.io_calico_kube-controllers:v1.0.0
  lanny/quay.io_calico_routereflector:v0.4.0
  lanny/quay.io_coreos_hyperkube:v1.9.0_coreos.0
  lanny/quay.io_ant31_kargo:master
  lanny/quay.io_external_storage_local-volume-provisioner-bootstrap:v1.0.0
  lanny/quay.io_external_storage_local-volume-provisioner:v1.0.0
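
For reference, a mirror like this can be produced with a simple pull/tag/push loop from a machine that can reach gcr.io and quay.io (a sketch; it assumes you are logged in to your own Docker Hub account, shown here with the lanny namespace used above):

  # mirror upstream images to Docker Hub by flattening the path into the repo name, e.g.
  # gcr.io/google_containers/pause-amd64:3.0 -> lanny/gcr.io_google_containers_pause-amd64:3.0
  images="
  gcr.io/google_containers/pause-amd64:3.0
  quay.io/coreos/etcd:v3.2.4
  "   # ...extend with the full list above
  for img in $images; do
    mirror="lanny/${img//\//_}"
    docker pull "$img"
    docker tag  "$img" "$mirror"
    docker push "$mirror"
  done
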
  - Configuration files
  kubespray/inventory/group_vars/k8s-cluster.yml holds the basic cluster settings; change the network plugin to flannel here.
  kubespray/inventory/group_vars/all.yml holds the more detailed settings; change the OS setting to centos here.

  - Switch the flannel backend from vxlan to host-gw
  roles/network_plugin/flannel/defaults/main.yml
  You can locate the setting with grep -r 'vxlan' . (a small sed sketch follows below)

  - Change kube_api_pwd
  vi roles/kubespray-defaults/defaults/main.yaml
  kube_api_pwd: xxxx

  - Certificate lifetime
  kubespray/roles/kubernetes/secrets/files/make-ssl.sh
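
As referenced above, a minimal way to flip the flannel backend, assuming the variable in this kubespray version is named flannel_backend_type (which is what the grep turns up):

  # locate the setting, then change vxlan to host-gw
  grep -rn 'vxlan' roles/network_plugin/flannel/
  sed -i 's/flannel_backend_type: "vxlan"/flannel_backend_type: "host-gw"/' roles/network_plugin/flannel/defaults/main.yml
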
  - Point the images at your own registry
  Every occurrence can be found with grep -r 'gcr.io' .
  sed -i 's#gcr\.io\/google_containers\/#lanny/gcr\.io_google_containers_#g' roles/download/defaults/main.yml
  sed -i 's#gcr\.io\/google_containers\/#lanny/gcr\.io_google_containers_#g' roles/dnsmasq/templates/dnsmasq-autoscaler.yml.j2
  sed -i 's#gcr\.io\/google_containers\/#lanny/gcr\.io_google_containers_#g' roles/kubernetes-apps/ansible/defaults/main.yml
  sed -i 's#gcr\.io\/kubernetes-helm\/#lanny/gcr\.io_kubernetes-helm_#g' roles/download/defaults/main.yml
  sed -i 's#quay\.io\/coreos\/#lanny/quay\.io_coreos_#g' roles/download/defaults/main.yml
  sed -i 's#quay\.io\/calico\/#lanny/quay\.io_calico_#g' roles/download/defaults/main.yml
  sed -i 's#quay\.io\/l23network\/#lanny/quay\.io_l23network_#g' roles/download/defaults/main.yml
  sed -i 's#quay\.io\/l23network\/#lanny/quay\.io_l23network_#g' docs/netcheck.md
  sed -i 's#quay\.io\/external_storage\/#lanny/quay\.io_external_storage_#g' roles/kubernetes-apps/local_volume_provisioner/defaults/main.yml
  sed -i 's#quay\.io\/ant31\/kargo#lanny/quay\.io_ant31_kargo_#g' .gitlab-ci.yml

  - Images present after installation (this cluster used calico)
  nginx:1.13
  quay.io/coreos/hyperkube:v1.9.0_coreos.0
  gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
  gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
  quay.io/calico/node:v2.6.2
  gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
  quay.io/calico/cni:v1.11.0
  gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
  quay.io/calico/ctl:v1.6.1
  quay.io/calico/routereflector:v0.4.0
  quay.io/coreos/etcd:v3.2.4
  gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
  gcr.io/google_containers/pause-amd64:3.0

  - Environment needed to generate the kubespray config (python3 + ansible): Ansible v2.4 (or newer), Jinja 2.9 (or newer)
  yum install python34 python34-pip python-pip python-netaddr -y
  cd
  mkdir .pip
  cd .pip
  cat > pip.conf <<EOF
  [global]
  index-url = http://mirrors.aliyun.com/pypi/simple/
  [install]
  trusted-host=mirrors.aliyun.com
  EOF
  yum install gcc libffi-devel python-devel openssl-devel -y
  pip install Jinja2-2.10-py2.py3-none-any.whl # https://pypi.python.org/pypi/Jinja2
  pip install cryptography
  pip install ansible
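
A quick check that the toolchain meets the stated minimums (Ansible >= 2.4, Jinja >= 2.9):

  ansible --version
  python -c 'import jinja2; print(jinja2.__version__)'
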
  - Clone the repo
  git clone https://github.com/kubernetes-incubator/kubespray.git

  - Adjust the configuration
  1. Let docker pull the gcr images (through a proxy)
  vim inventory/group_vars/all.yml
  2 bootstrap_os: centos
  95 http_proxy: "http://192.168.1.88:1080/"
  2. If a VM has <= 1G of memory, relax the memory asserts as shown below; with >= 3G nothing needs to change
  vim roles/kubernetes/preinstall/tasks/verify-settings.yml
  52 - name: Stop if memory is too small for masters
  53   assert:
  54     that: ansible_memtotal_mb <= 1500
  55   ignore_errors: "{{ ignore_assert_errors }}"
  56   when: inventory_hostname in groups['kube-master']
  57
  58 - name: Stop if memory is too small for nodes
  59   assert:
  60     that: ansible_memtotal_mb <= 1024
  61   ignore_errors: "{{ ignore_assert_errors }}"
  3. The swap check
  vim roles/download/tasks/download_container.yml
  75 - name: Stop if swap enabled
  76   assert:
  77     that: ansible_swaptotal_mb == 0
  78   when: kubelet_fail_swap_on|default(false)
  Run on every machine: turn swap off
  swapoff -a
  [root@n1 kubespray]# free -m
                total        used        free      shared  buff/cache   available
  Mem:           2796         297        1861           8         637        2206
  Swap:             0           0           0    # 0 here means swap is off
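
To turn swap off on every node in one go, and keep it off across reboots, an ad-hoc Ansible run like the following can be used once the inventory exists (a sketch; adjust the inventory path and the fstab pattern to your hosts):

  # disable swap now and comment out swap entries in /etc/fstab so it stays off after a reboot
  ansible all -i inventory/inventory.cfg -b -m shell -a "swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab"
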
  - Generate the kubespray inventory and kick off the Ansible k8s install (it takes a while: anywhere from about 20 min up to 1 h)
  cd kubespray
  IPS=(192.168.2.11 192.168.2.12 192.168.2.13 192.168.2.14 192.168.2.15)
  CONFIG_FILE=inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py ${IPS[@]}
  ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa

  - A ready-modified configuration
  git clone https://github.com/lannyMa/kubespray
  run.sh contains the launch command

  - My inventory.cfg; I didn't want the masters and nodes mixed together, so I tweaked it by hand
  [root@n1 kubespray]# cat inventory/inventory.cfg
  [all]
  node1 ansible_host=192.168.2.11 ip=192.168.2.11
  node2 ansible_host=192.168.2.12 ip=192.168.2.12
  node3 ansible_host=192.168.2.13 ip=192.168.2.13
  node4 ansible_host=192.168.2.14 ip=192.168.2.14
  node5 ansible_host=192.168.2.15 ip=192.168.2.15

  [kube-master]
  node1
  node2
  node3

  [kube-node]
  node4
  node5

  [etcd]
  node1
  node2
  node3

  [k8s-cluster:children]
  kube-node
  kube-master

  [calico-rr]
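
Once the playbook finishes, a quick sanity check from any master (a minimal sketch; the exact pod list depends on the add-ons you enabled):

  kubectl get nodes -o wide
  kubectl get cs
  kubectl get pod --all-namespaces -o wide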

Problem encountered: wait for the apiserver to be running

  ansible-playbook -i inventory/inventory.ini cluster.yml -b -v --private-key=~/.ssh/id_rsa
  ...
  RUNNING HANDLER [kubernetes/master : Master | wait for the apiserver to be running] ***
  Thursday 23 March 2017 10:46:16 +0800 (0:00:00.468) 0:08:32.094 ********
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (10 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (10 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (9 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (9 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (8 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (8 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (7 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (7 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (6 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (6 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (5 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (5 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (4 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (4 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (3 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (3 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (2 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (2 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (1 retries left).
  FAILED - RETRYING: HANDLER: kubernetes/master : Master | wait for the apiserver to be running (1 retries left).
  fatal: [node1]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8080/healthz"}
  fatal: [node2]: FAILED! => {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8080/healthz"}
  to retry, use: --limit @/home/dev_dean/kargo/cluster.retry

Solution: turn off swap on all nodes

  swapoff -a
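
If the handler still times out after swap is disabled, it usually helps to look at the apiserver container on the failing master directly (a sketch; the container name filter may differ per version):

  docker ps -a | grep apiserver
  docker logs --tail 50 $(docker ps -q --filter name=apiserver)
  curl -s http://localhost:8080/healthz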

Handy aliases

  alias kk='kubectl get pod --all-namespaces -o wide --show-labels'
  alias ks='kubectl get svc --all-namespaces -o wide'
  alias kss='kubectl get svc --all-namespaces -o wide --show-labels'
  alias kd='kubectl get deploy --all-namespaces -o wide'
  alias wk='watch kubectl get pod --all-namespaces -o wide --show-labels'
  alias kv='kubectl get pv -o wide'
  alias kvc='kubectl get pvc -o wide --all-namespaces --show-labels'
  alias kbb='kubectl run -it --rm --restart=Never busybox --image=busybox sh'
  alias kbbc='kubectl run -it --rm --restart=Never curl --image=appropriate/curl sh'
  alias kd='kubectl get deployment --all-namespaces --show-labels'   # note: this redefines kd from above
  alias kcm='kubectl get cm --all-namespaces -o wide'
  alias kin='kubectl get ingress --all-namespaces -o wide'
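
To keep the aliases across sessions and get tab completion for kubectl as well, they can be appended to the shell profile (a small sketch, assuming bash):

  echo "alias kk='kubectl get pod --all-namespaces -o wide --show-labels'" >> ~/.bashrc
  echo 'source <(kubectl completion bash)' >> ~/.bashrc
  source ~/.bashrc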

Default component startup flags set by kubespray

  ps -ef|egrep "apiserver|controller-manager|scheduler"

  /hyperkube apiserver \
  --advertise-address=192.168.2.11 \
  --etcd-servers=https://192.168.2.11:2379,https://192.168.2.12:2379,https://192.168.2.13:2379 \
  --etcd-quorum-read=true \
  --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/ssl/etcd/ssl/node-node1.pem \
  --etcd-keyfile=/etc/ssl/etcd/ssl/node-node1-key.pem \
  --insecure-bind-address=127.0.0.1 \
  --bind-address=0.0.0.0 \
  --apiserver-count=3 \
  --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ValidatingAdmissionWebhook,ResourceQuota \
  --service-cluster-ip-range=10.233.0.0/18 \
  --service-node-port-range=30000-32767 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --profiling=false \
  --repair-malformed-updates=false \
  --kubelet-client-certificate=/etc/kubernetes/ssl/node-node1.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/node-node1-key.pem \
  --service-account-lookup=true \
  --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
  --proxy-client-cert-file=/etc/kubernetes/ssl/apiserver.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
  --secure-port=6443 \
  --insecure-port=8080 \
  --storage-backend=etcd3 \
  --runtime-config=admissionregistration.k8s.io/v1alpha1 --v=2 \
  --allow-privileged=true \
  --anonymous-auth=False \
  --authorization-mode=Node,RBAC \
  --feature-gates=Initializers=False,PersistentLocalVolumes=False

  /hyperkube controller-manager \
  --kubeconfig=/etc/kubernetes/kube-controller-manager-kubeconfig.yaml \
  --leader-elect=true \
  --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --enable-hostpath-provisioner=false \
  --node-monitor-grace-period=40s \
  --node-monitor-period=5s \
  --pod-eviction-timeout=5m0s \
  --profiling=false \
  --terminated-pod-gc-threshold=12500 \
  --v=2 \
  --use-service-account-credentials=true \
  --feature-gates=Initializers=False,PersistentLocalVolumes=False

  /hyperkube scheduler \
  --leader-elect=true \
  --kubeconfig=/etc/kubernetes/kube-scheduler-kubeconfig.yaml \
  --profiling=false --v=2 \
  --feature-gates=Initializers=False,PersistentLocalVolumes=False

  /usr/local/bin/kubelet \
  --logtostderr=true --v=2 \
  --address=192.168.2.14 \
  --node-ip=192.168.2.14 \
  --hostname-override=node4 \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --cadvisor-port=0 \
  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
  --node-status-update-frequency=10s \
  --docker-disable-shared-pid=True \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --tls-cert-file=/etc/kubernetes/ssl/node-node4.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/node-node4-key.pem \
  --anonymous-auth=false \
  --cgroup-driver=cgroupfs \
  --cgroups-per-qos=True \
  --fail-swap-on=True \
  --enforce-node-allocatable= \
  --cluster-dns=10.233.0.3 \
  --cluster-domain=cluster.local \
  --resolv-conf=/etc/resolv.conf \
  --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml \
  --require-kubeconfig \
  --kube-reserved cpu=100m,memory=256M \
  --node-labels=node-role.kubernetes.io/node=true \
  --feature-gates=Initializers=False,PersistentLocalVolumes=False \
  --network-plugin=cni --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin

  ps -ef|grep kube-proxy

  hyperkube proxy --v=2 \
  --kubeconfig=/etc/kubernetes/kube-proxy-kubeconfig.yaml \
  --bind-address=192.168.2.14 \
  --cluster-cidr=10.233.64.0/18 \
  --proxy-mode=iptables

I have pushed all the required v1.9 images to Docker Hub under the lanny/ namespace; the full image mapping and the sed replacements that point kubespray at them are the same as listed in the sections above.


Reference: https://jicki.me/2017/12/08/kubernetes-kubespray-1.8.4/

Building the containerized etcd cluster

  /usr/bin/docker run \
    --restart=on-failure:5 \
    --env-file=/etc/etcd.env \
    --net=host \
    -v /etc/ssl/certs:/etc/ssl/certs:ro \
    -v /etc/ssl/etcd/ssl:/etc/ssl/etcd/ssl:ro \
    -v /var/lib/etcd:/var/lib/etcd:rw \
    --memory=512M \
    --oom-kill-disable \
    --blkio-weight=1000 \
    --name=etcd1 \
    lanny/quay.io_coreos_etcd:v3.2.4 \
    /usr/local/bin/etcd
  [root@node1 ~]# cat /etc/etcd.env
  ETCD_DATA_DIR=/var/lib/etcd
  ETCD_ADVERTISE_CLIENT_URLS=https://192.168.2.11:2379
  ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.2.11:2380
  ETCD_INITIAL_CLUSTER_STATE=existing
  ETCD_METRICS=basic
  ETCD_LISTEN_CLIENT_URLS=https://192.168.2.11:2379,https://127.0.0.1:2379
  ETCD_ELECTION_TIMEOUT=5000
  ETCD_HEARTBEAT_INTERVAL=250
  ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
  ETCD_LISTEN_PEER_URLS=https://192.168.2.11:2380
  ETCD_NAME=etcd1
  ETCD_PROXY=off
  ETCD_INITIAL_CLUSTER=etcd1=https://192.168.2.11:2380,etcd2=https://192.168.2.12:2380,etcd3=https://192.168.2.13:2380
  ETCD_AUTO_COMPACTION_RETENTION=8
  # TLS settings
  ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
  ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
  ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
  ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
  ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
  ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
  ETCD_PEER_CLIENT_CERT_AUTH=true

  [root@node1 ssl]# docker exec b1159a1c6209 env|grep -i etcd
  ETCD_DATA_DIR=/var/lib/etcd
  ETCD_ADVERTISE_CLIENT_URLS=https://192.168.2.11:2379
  ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.2.11:2380
  ETCD_INITIAL_CLUSTER_STATE=existing
  ETCD_METRICS=basic
  ETCD_LISTEN_CLIENT_URLS=https://192.168.2.11:2379,https://127.0.0.1:2379
  ETCD_ELECTION_TIMEOUT=5000
  ETCD_HEARTBEAT_INTERVAL=250
  ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
  ETCD_LISTEN_PEER_URLS=https://192.168.2.11:2380
  ETCD_NAME=etcd1
  ETCD_PROXY=off
  ETCD_INITIAL_CLUSTER=etcd1=https://192.168.2.11:2380,etcd2=https://192.168.2.12:2380,etcd3=https://192.168.2.13:2380
  ETCD_AUTO_COMPACTION_RETENTION=8
  ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
  ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
  ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
  ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
  ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
  ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
  ETCD_PEER_CLIENT_CERT_AUTH=true
  [root@node1 ssl]# tree /etc/ssl/etcd/ssl
  /etc/ssl/etcd/ssl
  ├── admin-node1-key.pem
  ├── admin-node1.pem
  ├── admin-node2-key.pem
  ├── admin-node2.pem
  ├── admin-node3-key.pem
  ├── admin-node3.pem
  ├── ca-key.pem
  ├── ca.pem
  ├── member-node1-key.pem
  ├── member-node1.pem
  ├── member-node2-key.pem
  ├── member-node2.pem
  ├── member-node3-key.pem
  ├── member-node3.pem
  ├── node-node1-key.pem
  ├── node-node1.pem
  ├── node-node2-key.pem
  ├── node-node2.pem
  ├── node-node3-key.pem
  ├── node-node3.pem
  ├── node-node4-key.pem
  ├── node-node4.pem
  ├── node-node5-key.pem
  └── node-node5.pem

  [root@n2 ~]# tree /etc/ssl/etcd/ssl
  /etc/ssl/etcd/ssl
  ├── admin-node2-key.pem
  ├── admin-node2.pem
  ├── ca-key.pem
  ├── ca.pem
  ├── member-node2-key.pem
  ├── member-node2.pem
  ├── node-node1-key.pem
  ├── node-node1.pem
  ├── node-node2-key.pem
  ├── node-node2.pem
  ├── node-node3-key.pem
  ├── node-node3.pem
  ├── node-node4-key.pem
  ├── node-node4.pem
  ├── node-node5-key.pem
  └── node-node5.pem
  admin-node1.crt
    CN = etcd-admin-node1
    DNS Name=localhost
    DNS Name=node1
    DNS Name=node2
    DNS Name=node3
    IP Address=192.168.2.11
    IP Address=192.168.2.11
    IP Address=192.168.2.12
    IP Address=192.168.2.12
    IP Address=192.168.2.13
    IP Address=192.168.2.13
    IP Address=127.0.0.1
  member-node1.crt
    CN = etcd-member-node1
    DNS Name=localhost
    DNS Name=node1
    DNS Name=node2
    DNS Name=node3
    IP Address=192.168.2.11
    IP Address=192.168.2.11
    IP Address=192.168.2.12
    IP Address=192.168.2.12
    IP Address=192.168.2.13
    IP Address=192.168.2.13
    IP Address=127.0.0.1
  node-node1.crt
    CN = etcd-node-node1
    DNS Name=localhost
    DNS Name=node1
    DNS Name=node2
    DNS Name=node3
    IP Address=192.168.2.11
    IP Address=192.168.2.11
    IP Address=192.168.2.12
    IP Address=192.168.2.12
    IP Address=192.168.2.13
    IP Address=192.168.2.13
    IP Address=127.0.0.1
  [root@node1 bin]# tree
  .
  ├── etcd
  ├── etcdctl
  ├── etcd-scripts
  │   └── make-ssl-etcd.sh
  ├── kubectl
  ├── kubelet
  └── kubernetes-scripts
      ├── kube-gen-token.sh
      └── make-ssl.sh
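
With the TLS material above in place, cluster health can be checked from any etcd node (a sketch using the v3 API; the cert names follow the member-<node> pattern shown in the tree):

  export ETCDCTL_API=3
  etcdctl --endpoints=https://192.168.2.11:2379,https://192.168.2.12:2379,https://192.168.2.13:2379 \
    --cacert=/etc/ssl/etcd/ssl/ca.pem \
    --cert=/etc/ssl/etcd/ssl/member-node1.pem \
    --key=/etc/ssl/etcd/ssl/member-node1-key.pem \
    endpoint health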
