Environment:

  Host      IP address        Components
  ansible   192.168.175.130   ansible
  master    192.168.175.140   docker, kubectl, kubeadm, kubelet
  node1     192.168.175.141   docker, kubectl, kubeadm, kubelet
  node2     192.168.175.142   docker, kubectl, kubeadm, kubelet

Syntax-check and debug commands:

  $ ansible-playbook -v k8s-time-sync.yaml --syntax-check
  $ ansible-playbook -v k8s-*.yaml -C
  $ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
  $ ansible-playbook -v k8s-kernel-cfg.yaml --step

Host inventory file:

/root/ansible/hosts

  [k8s_cluster]
  master ansible_host=192.168.175.140
  node1 ansible_host=192.168.175.141
  node2 ansible_host=192.168.175.142

  [k8s_cluster:vars]
  ansible_port=22
  ansible_user=root
  ansible_password=hello123
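Before pointing the playbooks at this file, it can help to sanity-check that the inventory parses the way you expect. A minimal standalone Python sketch (it uses no Ansible API; the host names and IPs are the ones from the inventory above):

```python
# Parse a simple INI-style Ansible inventory and list each host
# with its connection variables. Illustrative only.
INVENTORY = """\
[k8s_cluster]
master ansible_host=192.168.175.140
node1 ansible_host=192.168.175.141
node2 ansible_host=192.168.175.142
[k8s_cluster:vars]
ansible_port=22
ansible_user=root
"""

def parse_inventory(text):
    """Return {group: {host: {var: value}}} for a simple INI inventory."""
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            groups.setdefault(current, {})
            continue
        host, *pairs = line.split()
        groups[current][host] = dict(p.split("=", 1) for p in pairs)
    return groups

hosts = parse_inventory(INVENTORY)["k8s_cluster"]
for name, conn_vars in sorted(hosts.items()):
    print(name, conn_vars.get("ansible_host", ""))
```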

Network check: k8s-check.yaml

  • Check that each k8s host is reachable over the network;
  • Check that each k8s host's OS version meets the requirements;

  - name: step01_check
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: check network
        shell:
          cmd: "ping -c 3 -m 2 {{ ansible_host }}"
        delegate_to: localhost
      - name: get system version
        shell: cat /etc/system-release
        register: system_release
      - name: check system version
        vars:
          system_version: "{{ system_release.stdout | regex_search('([7-9]\\.[0-9]+)') }}"
          suitable_version: 7.5
        debug:
          msg: "{{ 'The version of the operating system is ' + system_version + ', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"
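The version test in the last task is easier to reason about outside of Jinja2. A standalone Python sketch of the same logic (the release strings are illustrative examples):

```python
# Extract the major.minor version from /etc/system-release text and
# compare it against the required minimum, mirroring the playbook's
# regex_search + float filter chain.
import re

def version_ok(release_text, suitable_version=7.5):
    """Return True if the release string contains a version >= suitable_version."""
    m = re.search(r'([7-9]\.[0-9]+)', release_text)
    return m is not None and float(m.group(1)) >= suitable_version

print(version_ok("CentOS Linux release 7.9.2009 (Core)"))  # True
print(version_ok("CentOS Linux release 7.4.1708 (Core)"))  # False
```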

Debug commands:

  $ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml
  $ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml
  $ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"

Connection setup: k8s-conn-cfg.yaml

  • Add name resolution for the k8s hosts to /etc/hosts on the ansible server;
  • Generate a key pair and configure passwordless login from ansible to each k8s host;

  - name: step02_conn_cfg
    hosts: k8s_cluster
    gather_facts: no
    vars_prompt:
      - name: RSA
        prompt: Generate RSA or not(Yes/No)?
        default: "no"
        private: no
      - name: password
        prompt: input your login password?
        default: "hello123"
    tasks:
      - name: Add DNS of k8s to ansible
        delegate_to: localhost
        lineinfile:
          path: /etc/hosts
          line: "{{ ansible_host }} {{ inventory_hostname }}"
          backup: yes
      - name: Generate RSA
        run_once: true
        delegate_to: localhost
        shell:
          cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
          creates: /root/.ssh/id_rsa
        when: RSA | bool
      - name: Configure password free login
        delegate_to: localhost
        shell: |
          /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
          /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
          /usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ ansible_host }}
          #/usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ inventory_hostname }}
      - name: Test ssh
        shell: hostname

Run:

  $ ansible-playbook k8s-conn-cfg.yaml
  Generate RSA or not(Yes/No)? [no]: yes
  input your login password? [hello123]:

  PLAY [step02_conn_cfg] **********************************************************************************************************

  TASK [Add DNS of k8s to ansible] ************************************************************************************************
  ok: [master -> localhost]
  ok: [node1 -> localhost]
  ok: [node2 -> localhost]

  TASK [Generate RSA] *************************************************************************************************************
  changed: [master -> localhost]

  TASK [Configure password free login] ********************************************************************************************
  changed: [node1 -> localhost]
  changed: [master -> localhost]
  changed: [node2 -> localhost]

  TASK [Test ssh] *****************************************************************************************************************
  changed: [master]
  changed: [node1]
  changed: [node2]

  PLAY RECAP **********************************************************************************************************************
  master : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Configure DNS resolution across the k8s cluster: k8s-hosts-cfg.yaml

  • Set each host's hostname;
  • Add name resolution for every host to each other's /etc/hosts;

  - name: step03_cfg_host
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: set hostname
        hostname:
          name: "{{ inventory_hostname }}"
          use: systemd
      - name: Add dns to each other
        lineinfile:
          path: /etc/hosts
          backup: yes
          line: "{{ item.value.ansible_host }} {{ item.key }}"
        loop: "{{ hostvars | dict2items }}"
        loop_control:
          label: "{{ item.key }} {{ item.value.ansible_host }}"
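What the `dict2items` loop renders can be sketched outside Ansible. In effect `hostvars` is a mapping of host name to variables; the sample data below mirrors the inventory, and the two comprehensions mirror the `loop` and `line` expressions:

```python
# Sample stand-in for Ansible's hostvars (host -> vars dict).
hostvars = {
    "master": {"ansible_host": "192.168.175.140"},
    "node1": {"ansible_host": "192.168.175.141"},
    "node2": {"ansible_host": "192.168.175.142"},
}

# Equivalent of: loop: "{{ hostvars | dict2items }}"
items = [{"key": k, "value": v} for k, v in hostvars.items()]

# Equivalent of: line: "{{ item.value.ansible_host }} {{ item.key }}"
lines = [f'{it["value"]["ansible_host"]} {it["key"]}' for it in items]
print(lines)
```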

Run:

  $ ansible-playbook k8s-hosts-cfg.yaml

  PLAY [step03_cfg_host] **********************************************************************************************************

  TASK [set hostname] *************************************************************************************************************
  ok: [master]
  ok: [node1]
  ok: [node2]

  TASK [Add dns to each other] ****************************************************************************************************
  ok: [node2] => (item=node1 192.168.175.141)
  ok: [master] => (item=node1 192.168.175.141)
  ok: [node1] => (item=node1 192.168.175.141)
  ok: [node2] => (item=node2 192.168.175.142)
  ok: [master] => (item=node2 192.168.175.142)
  ok: [node1] => (item=node2 192.168.175.142)
  ok: [node2] => (item=master 192.168.175.140)
  ok: [master] => (item=master 192.168.175.140)
  ok: [node1] => (item=master 192.168.175.140)

  PLAY RECAP **********************************************************************************************************************
  master : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Configure the yum repositories: k8s-yum-cfg.yaml

  - name: step04_yum_cfg
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Create back-up directory
        file:
          path: /etc/yum.repos.d/org/
          state: directory
      - name: Back-up old Yum files
        shell:
          cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
          removes: /etc/yum.repos.d/org/
      - name: Add new Yum files
        copy:
          src: ./files_yum/
          dest: /etc/yum.repos.d/
      - name: Check yum.repos.d
        shell:
          cmd: ls /etc/yum.repos.d/*

Time synchronization: k8s-time-sync.yaml

  - name: step05_time_sync
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Start chronyd.service
        systemd:
          name: chronyd.service
          state: started
          enabled: yes
      - name: Modify time zone & clock
        shell: |
          cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
          clock -w
          hwclock -w
      - name: Check time now
        command: date

Disable the iptables, firewalld and NetworkManager services: k8s-net-service.yaml

  - name: step06_net_service
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Stop some services for net
        systemd:
          name: "{{ item }}"
          state: stopped
          enabled: no
        loop:
          - firewalld
          - iptables
          - NetworkManager

Run:

  $ ansible-playbook -v k8s-net-service.yaml
  ... ...
  failed: [master] (item=iptables) => {
      "ansible_loop_var": "item",
      "changed": false,
      "item": "iptables"
  }
  MSG:
  Could not find the requested service iptables: host
  ... ...
  PLAY RECAP **********************************************************************************************************************
  master : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
  node1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
  node2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

The iptables item fails because no iptables service unit exists on these hosts.

Disable SELinux and swap: k8s-SE-swap-disable.yaml

  - name: step07_SE_swap_disable
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: SElinux disabled
        lineinfile:
          path: /etc/selinux/config
          line: SELINUX=disabled
          regexp: ^SELINUX=
          state: present
          backup: yes
      - name: Swap disabled
        lineinfile:
          path: /etc/fstab
          line: '#\1'
          regexp: '(^/dev/mapper/centos-swap.*$)'
          backrefs: yes
          state: present
          backup: yes
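The `backrefs` task amounts to a single anchored regex substitution that comments out the swap line. A Python sketch of what happens to `/etc/fstab` (the sample content is illustrative):

```python
# Comment out the centos-swap entry the same way the lineinfile
# task does: capture the whole line and prefix it with '#'.
import re

fstab = (
    "/dev/mapper/centos-root /    xfs  defaults 0 0\n"
    "/dev/mapper/centos-swap swap swap defaults 0 0\n"
)

patched = re.sub(r'(^/dev/mapper/centos-swap.*$)', r'#\1', fstab, flags=re.M)
print(patched)
```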

Kernel configuration: k8s-kernel-cfg.yaml

Note: `sysctl -p` with no argument only reads /etc/sysctl.conf, and the net.bridge.* keys only exist once br_netfilter is loaded, so the module is loaded first and the file is passed explicitly.

  - name: step08_kernel_cfg
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Create /etc/sysctl.d/kubernetes.conf
        copy:
          content: ''
          dest: /etc/sysctl.d/kubernetes.conf
          force: yes
      - name: Cfg bridge and ip_forward
        lineinfile:
          path: /etc/sysctl.d/kubernetes.conf
          line: "{{ item }}"
          state: present
        loop:
          - 'net.bridge.bridge-nf-call-ip6tables = 1'
          - 'net.bridge.bridge-nf-call-iptables = 1'
          - 'net.ipv4.ip_forward = 1'
      - name: Load cfg
        shell:
          cmd: |
            modprobe br_netfilter
            sysctl -p /etc/sysctl.d/kubernetes.conf
          removes: /etc/sysctl.d/kubernetes.conf
      - name: Check cfg
        shell:
          cmd: '[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3'
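Why the check accepts `-ge 2`: once br_netfilter is loaded, `lsmod` shows it both on its own line and in the "Used by" column of the bridge module, so a successful load produces at least two matching lines. A sketch against illustrative `lsmod` output:

```python
# Count the lines that mention br_netfilter in sample lsmod output,
# mirroring: [ $(lsmod | grep br_netfilter | wc -l) -ge 2 ]
SAMPLE_LSMOD = """\
Module                  Size  Used by
br_netfilter           22256  0
bridge                151336  1 br_netfilter
xfs                   985152  2
"""

matches = [line for line in SAMPLE_LSMOD.splitlines() if "br_netfilter" in line]
print(len(matches) >= 2)  # True when the module loaded successfully
```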

Run:

  $ ansible-playbook -v k8s-kernel-cfg.yaml --step

  TASK [Check cfg] ****************************************************************************************************************
  changed: [master] => {
      "changed": true,
      "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
      "delta": "0:00:00.011574",
      "end": "2022-02-27 04:26:01.332896",
      "rc": 0,
      "start": "2022-02-27 04:26:01.321322"
  }
  changed: [node2] => {
      "changed": true,
      "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
      "delta": "0:00:00.016331",
      "end": "2022-02-27 04:26:01.351208",
      "rc": 0,
      "start": "2022-02-27 04:26:01.334877"
  }
  changed: [node1] => {
      "changed": true,
      "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
      "delta": "0:00:00.016923",
      "end": "2022-02-27 04:26:01.355983",
      "rc": 0,
      "start": "2022-02-27 04:26:01.339060"
  }

  PLAY RECAP **********************************************************************************************************************
  master : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node1 : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  node2 : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Configure IPVS: k8s-ipvs-cfg.yaml

  - name: step09_ipvs_cfg
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Install ipset and ipvsadm
        yum:
          name: "{{ item }}"
          state: present
        loop:
          - ipset
          - ipvsadm
      - name: Load modules
        shell: |
          modprobe -- ip_vs
          modprobe -- ip_vs_rr
          modprobe -- ip_vs_wrr
          modprobe -- ip_vs_sh
          modprobe -- nf_conntrack_ipv4
      - name: Check cfg
        shell:
          cmd: '[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3'

Install Docker: k8s-docker-install.yaml

  - name: step10_docker_install
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Install docker-ce
        yum:
          name: docker-ce-18.06.3.ce-3.el7
          state: present
      - name: Cfg docker
        copy:
          src: ./files_docker/daemon.json
          dest: /etc/docker/
      - name: Start docker
        systemd:
          name: docker.service
          state: started
          enabled: yes
      - name: Check docker version
        shell:
          cmd: docker --version

Install the k8s components [kubeadm/kubelet/kubectl]: k8s-install-kubepkgs.yaml

  - name: step11_k8s_install_kubepkgs
    hosts: k8s_cluster
    gather_facts: no
    tasks:
      - name: Install k8s components
        yum:
          name: "{{ item }}"
          state: present
        loop:
          - kubeadm-1.17.4-0
          - kubelet-1.17.4-0
          - kubectl-1.17.4-0
      - name: Cfg k8s
        copy:
          src: ./files_k8s/kubelet
          dest: /etc/sysconfig/
          force: no
          backup: yes
      - name: Start kubelet
        systemd:
          name: kubelet.service
          state: started
          enabled: yes

Pull the cluster images: k8s-apps-images.yaml

  - name: step12_apps_images
    hosts: k8s_cluster
    gather_facts: no
    vars:
      apps:
        - kube-apiserver:v1.17.4
        - kube-controller-manager:v1.17.4
        - kube-scheduler:v1.17.4
        - kube-proxy:v1.17.4
        - pause:3.1
        - etcd:3.4.3-0
        - coredns:1.6.5
    vars_prompt:
      - name: cfg_python
        prompt: Do you need to install docker pkg for python(Yes/No)?
        default: "no"
        private: no
    tasks:
      - block:
          - name: Install python-pip
            yum:
              name: python-pip
              state: present
          - name: Install docker pkg for python
            shell:
              cmd: |
                pip install docker==4.4.4
                pip install websocket-client==0.32.0
              creates: /usr/lib/python2.7/site-packages/docker/
        when: cfg_python | bool
      - name: Pull images
        community.docker.docker_image:
          name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
          source: pull
        loop: "{{ apps }}"
      - name: Tag images
        community.docker.docker_image:
          name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
          repository: "k8s.gcr.io/{{ item }}"
          force_tag: yes
          source: local
        loop: "{{ apps }}"
      - name: Remove images for ali
        community.docker.docker_image:
          name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
          state: absent
        loop: "{{ apps }}"
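The pull/tag/remove loop boils down to a source-to-target name mapping: each image is pulled from the Aliyun mirror, re-tagged under k8s.gcr.io so kubeadm finds it, then the mirror tag is removed. A sketch that prints the retag pairs (image names come from the `apps` list above; nothing here talks to Docker):

```python
# Build the (mirror name, k8s.gcr.io name) pairs that the Pull/Tag/Remove
# tasks operate on, one pair per image in the apps list.
apps = [
    "kube-apiserver:v1.17.4", "kube-controller-manager:v1.17.4",
    "kube-scheduler:v1.17.4", "kube-proxy:v1.17.4",
    "pause:3.1", "etcd:3.4.3-0", "coredns:1.6.5",
]
MIRROR = "registry.cn-hangzhou.aliyuncs.com/google_containers"

retags = [(f"{MIRROR}/{app}", f"k8s.gcr.io/{app}") for app in apps]
for src, dst in retags:
    print(f"{src} -> {dst}")
```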

Run:

  $ ansible-playbook k8s-apps-images.yaml
  Do you need to install docker pkg for python(Yes/No)? [no]:

  PLAY [step12_apps_images] *******************************************************************************************************

  TASK [Install python-pip] *******************************************************************************************************
  skipping: [node1]
  skipping: [master]
  skipping: [node2]

  TASK [Install docker pkg for python] ********************************************************************************************
  skipping: [master]
  skipping: [node1]
  skipping: [node2]

  TASK [Pull images] **************************************************************************************************************
  changed: [node1] => (item=kube-apiserver:v1.17.4)
  changed: [node2] => (item=kube-apiserver:v1.17.4)
  changed: [master] => (item=kube-apiserver:v1.17.4)
  changed: [node1] => (item=kube-controller-manager:v1.17.4)
  changed: [master] => (item=kube-controller-manager:v1.17.4)
  changed: [node1] => (item=kube-scheduler:v1.17.4)
  changed: [master] => (item=kube-scheduler:v1.17.4)
  changed: [node1] => (item=kube-proxy:v1.17.4)
  changed: [node2] => (item=kube-controller-manager:v1.17.4)
  changed: [master] => (item=kube-proxy:v1.17.4)
  changed: [node1] => (item=pause:3.1)
  changed: [master] => (item=pause:3.1)
  changed: [node2] => (item=kube-scheduler:v1.17.4)
  changed: [node1] => (item=etcd:3.4.3-0)
  changed: [master] => (item=etcd:3.4.3-0)
  changed: [node2] => (item=kube-proxy:v1.17.4)
  changed: [node1] => (item=coredns:1.6.5)
  changed: [master] => (item=coredns:1.6.5)
  changed: [node2] => (item=pause:3.1)
  changed: [node2] => (item=etcd:3.4.3-0)
  changed: [node2] => (item=coredns:1.6.5)

  TASK [Tag images] ***************************************************************************************************************
  ok: [node1] => (item=kube-apiserver:v1.17.4)
  ok: [master] => (item=kube-apiserver:v1.17.4)
  ok: [node2] => (item=kube-apiserver:v1.17.4)
  ok: [node1] => (item=kube-controller-manager:v1.17.4)
  ok: [master] => (item=kube-controller-manager:v1.17.4)
  ok: [node2] => (item=kube-controller-manager:v1.17.4)
  ok: [master] => (item=kube-scheduler:v1.17.4)
  ok: [node1] => (item=kube-scheduler:v1.17.4)
  ok: [node2] => (item=kube-scheduler:v1.17.4)
  ok: [master] => (item=kube-proxy:v1.17.4)
  ok: [node1] => (item=kube-proxy:v1.17.4)
  ok: [node2] => (item=kube-proxy:v1.17.4)
  ok: [master] => (item=pause:3.1)
  ok: [node1] => (item=pause:3.1)
  ok: [node2] => (item=pause:3.1)
  ok: [master] => (item=etcd:3.4.3-0)
  ok: [node1] => (item=etcd:3.4.3-0)
  ok: [node2] => (item=etcd:3.4.3-0)
  ok: [master] => (item=coredns:1.6.5)
  ok: [node1] => (item=coredns:1.6.5)
  ok: [node2] => (item=coredns:1.6.5)

  TASK [Remove images for ali] ****************************************************************************************************
  changed: [master] => (item=kube-apiserver:v1.17.4)
  changed: [node2] => (item=kube-apiserver:v1.17.4)
  changed: [node1] => (item=kube-apiserver:v1.17.4)
  changed: [master] => (item=kube-controller-manager:v1.17.4)
  changed: [node1] => (item=kube-controller-manager:v1.17.4)
  changed: [node2] => (item=kube-controller-manager:v1.17.4)
  changed: [node1] => (item=kube-scheduler:v1.17.4)
  changed: [master] => (item=kube-scheduler:v1.17.4)
  changed: [node2] => (item=kube-scheduler:v1.17.4)
  changed: [master] => (item=kube-proxy:v1.17.4)
  changed: [node1] => (item=kube-proxy:v1.17.4)
  changed: [node2] => (item=kube-proxy:v1.17.4)
  changed: [node1] => (item=pause:3.1)
  changed: [master] => (item=pause:3.1)
  changed: [node2] => (item=pause:3.1)
  changed: [master] => (item=etcd:3.4.3-0)
  changed: [node1] => (item=etcd:3.4.3-0)
  changed: [node2] => (item=etcd:3.4.3-0)
  changed: [master] => (item=coredns:1.6.5)
  changed: [node1] => (item=coredns:1.6.5)
  changed: [node2] => (item=coredns:1.6.5)

  PLAY RECAP **********************************************************************************************************************
  master : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
  node1 : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
  node2 : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

Initialize the k8s cluster: k8s-cluster-init.yaml

  - name: step13_cluster_init
    hosts: master
    gather_facts: no
    tasks:
      - block:
          - name: Kubeadm init
            shell:
              cmd: >
                kubeadm init
                --apiserver-advertise-address={{ ansible_host }}
                --kubernetes-version=v1.17.4
                --service-cidr=10.96.0.0/12
                --pod-network-cidr=10.244.0.0/16
                --image-repository registry.aliyuncs.com/google_containers
          - name: Create /root/.kube
            file:
              path: /root/.kube/
              state: directory
              owner: root
              group: root
          - name: Copy /root/.kube/config
            copy:
              src: /etc/kubernetes/admin.conf
              dest: /root/.kube/config
              remote_src: yes
              backup: yes
              owner: root
              group: root
          - name: Copy kube-flannel
            copy:
              src: ./files_k8s/kube-flannel.yml
              dest: /root/
              backup: yes
          - name: Apply kube-flannel
            shell:
              cmd: kubectl apply -f /root/kube-flannel.yml
          - name: Get token
            shell:
              cmd: kubeadm token create --print-join-command
            register: join_token
          - name: debug join_token
            debug:
              var: join_token.stdout
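`join_token.stdout` holds a complete `kubeadm join` command line for the worker nodes. If you want to reuse the token and CA hash separately, two regexes are enough; the sample string below is a made-up example of the usual output format, not real credentials:

```python
# Extract the bootstrap token and discovery CA hash from the output of
# `kubeadm token create --print-join-command`. Sample string is fabricated.
import re

stdout = (
    "kubeadm join 192.168.175.140:6443 --token abcdef.0123456789abcdef "
    "--discovery-token-ca-cert-hash sha256:"
    "1111111111111111111111111111111111111111111111111111111111111111"
)

token = re.search(r'--token (\S+)', stdout).group(1)
ca_hash = re.search(r'--discovery-token-ca-cert-hash (\S+)', stdout).group(1)
print(token)
print(ca_hash)
```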
