K8S Node Deployment

  • 1. Deploy kubelet

(1) Prepare the binaries
  [root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
  [root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
  [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.120:/opt/kubernetes/bin/
  [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.130:/opt/kubernetes/bin/
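As an optional sanity check (assuming the binaries were copied with execute permission intact), you can confirm that the files landed on each node and report the expected version:

  [root@linux-node2 ~]# ls -l /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy
  [root@linux-node2 ~]# /opt/kubernetes/bin/kubelet --version
  [root@linux-node2 ~]# /opt/kubernetes/bin/kube-proxy --version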
(2) Create the role binding

When kubelet starts, it sends a TLS bootstrap request to kube-apiserver, so the bootstrap token must be bound to the corresponding role; otherwise the kubelet has no permission to create that request.

  [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  clusterrolebinding "kubelet-bootstrap" created
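To double-check the binding, and to verify that the user name matches the one registered in the token file configured on kube-apiserver (the /opt/kubernetes/ssl/bootstrap-token.csv path below is an assumption; adjust it to whatever --token-auth-file points at in your apiserver unit):

  [root@linux-node1 ~]# kubectl describe clusterrolebinding kubelet-bootstrap
  [root@linux-node1 ~]# grep kubelet-bootstrap /opt/kubernetes/ssl/bootstrap-token.csv   # path is an assumption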
(3) Create the kubelet bootstrapping kubeconfig file and set the cluster parameters
  [root@linux-node1 ~]# cd /usr/local/src/ssl
  [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.110:6443 \
  --kubeconfig=bootstrap.kubeconfig
  Cluster "kubernetes" set.
(4) Set the client credentials
  [root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
  --token=ad6d5bb607a186796d8861557df0d17f \
  --kubeconfig=bootstrap.kubeconfig
  User "kubelet-bootstrap" set.
(5) Set the context parameters
  [root@linux-node1 ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
  Context "default" created.
(6) Select the default context
  [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  Switched to context "default".
  [root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
  [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.120:/opt/kubernetes/cfg
  [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.130:/opt/kubernetes/cfg
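If you want to confirm what ended up in the file before the nodes start using it, kubectl config view prints the merged result (the embedded certificate data is redacted unless you add --raw):

  [root@linux-node1 ssl]# kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig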
  • 2. Deploy kubelet: 1. Set up CNI support

(1) Configure CNI
  [root@linux-node2 ~]# mkdir -p /etc/cni/net.d
  [root@linux-node2 ~]# vim /etc/cni/net.d/-default.conf
  {
      "name": "flannel",
      "type": "flannel",
      "delegate": {
          "bridge": "docker0",
          "isDefaultGateway": true,
          "mtu":
      }
  }
  [root@linux-node3 ~]# mkdir -p /etc/cni/net.d
  [root@linux-node2 ~]# scp /etc/cni/net.d/-default.conf 192.168.56.130:/etc/cni/net.d/-default.conf
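The kubelet below is started with --cni-bin-dir=/opt/kubernetes/bin/cni, so the standard CNI plugin binaries (flannel, bridge, loopback, etc.) are assumed to have been unpacked into that directory already; a quick check on both nodes:

  [root@linux-node2 ~]# ls /opt/kubernetes/bin/cni/   # expect the flannel and bridge plugin binaries here
  [root@linux-node3 ~]# ls /opt/kubernetes/bin/cni/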
(2) Create the kubelet data directory
  [root@linux-node2 ~]# mkdir /var/lib/kubelet
  [root@linux-node3 ~]# mkdir /var/lib/kubelet
(3) Create the kubelet service configuration
  [root@linux-node2 ~]# vim /usr/lib/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service

  [Service]
  WorkingDirectory=/var/lib/kubelet
  ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.56.120 \
  --hostname-override=192.168.56.120 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=true \
  --v= \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=

  [root@linux-node3 ~]# vim /usr/lib/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service

  [Service]
  WorkingDirectory=/var/lib/kubelet
  ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.56.130 \
  --hostname-override=192.168.56.130 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=true \
  --v= \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=
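The two unit files differ only in the node IP, so instead of retyping the file on linux-node3 you could copy it over and rewrite the address (a convenience sketch, assuming root SSH access between the nodes):

  [root@linux-node2 ~]# scp /usr/lib/systemd/system/kubelet.service 192.168.56.130:/usr/lib/systemd/system/kubelet.service
  [root@linux-node3 ~]# sed -i 's/192.168.56.120/192.168.56.130/g' /usr/lib/systemd/system/kubelet.service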
(4) Start kubelet
  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kubelet
  [root@linux-node2 ~]# systemctl start kubelet
  [root@linux-node2 ~]# systemctl status kubelet

  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kubelet
  [root@linux-node3 ~]# systemctl start kubelet
  [root@linux-node3 ~]# systemctl status kubelet

When checking the kubelet status, the following error keeps appearing: Failed to get system container stats for "/system.slice/kubelet.service": failed to... The kubelet startup parameters need to be adjusted.

Solution:
In the [Service] section of /usr/lib/systemd/system/kubelet.service, add: Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
Then modify ExecStart by appending $KUBELET_MY_ARGS at the end.
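A minimal sketch of the adjusted unit (only the Environment line and the trailing $KUBELET_MY_ARGS are new; the existing ExecStart flags stay exactly as shown above), followed by the reload and restart that apply it:

  [Service]
  WorkingDirectory=/var/lib/kubelet
  Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
  ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.56.120 \
  ...                                  (the rest of the original flags, unchanged) \
  --log-dir=/opt/kubernetes/log \
  $KUBELET_MY_ARGS
  Restart=on-failure

  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl restart kubelet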

  [root@linux-node2 system]# systemctl status kubelet
  kubelet.service - Kubernetes Kubelet
  Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
  Active: active (running) since -- :: CST; 16h ago
  Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Main PID: (kubelet)
  CGroup: /system.slice/kubelet.service
          └─ /opt/kubernetes/bin/kubelet --address=192.168.56.120 --hostname-override=192.168.56.120 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experiment...

  6 :: linux-node2.example.com kubelet[]: E0601 ::09.355765 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::19.363906 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::29.385439 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::39.393790 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::49.401081 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::59.407863 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::09.415552 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::19.425998 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::29.443804 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  6 :: linux-node2.example.com kubelet[]: E0601 ::39.450814 summary.go:] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
  Hint: Some lines were ellipsized, use -l to show in full.
(5) View the CSR requests. Note: run this on linux-node1.
  [root@linux-node1 ssl]# kubectl get csr
  NAME                                                   AGE       REQUESTOR           CONDITION
  node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   1m        kubelet-bootstrap   Pending
  node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   1m        kubelet-bootstrap   Pending
(6) Approve the kubelet TLS certificate requests
  [root@linux-node1 ssl]# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
  certificatesigningrequest.certificates.k8s.io "node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U" approved
  certificatesigningrequest.certificates.k8s.io "node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA" approved

  [root@linux-node1 ssl]# kubectl get csr
  NAME                                                   AGE       REQUESTOR           CONDITION
  node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   2m        kubelet-bootstrap   Approved,Issued
  node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   2m        kubelet-bootstrap   Approved,Issued

  After the approvals, the nodes show up in Ready state:
  [root@linux-node1 ssl]# kubectl get node
  NAME             STATUS    ROLES     AGE       VERSION
  192.168.56.120   Ready     <none>    50m       v1.10.1
  192.168.56.130   Ready     <none>    46m       v1.10.1
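Once a CSR is approved, the kubelet on each node writes its signed client certificate into the directory given by --cert-dir and generates the kubelet.kubeconfig referenced in the unit file; a quick look on a node confirms this (exact certificate file names can vary slightly by kubelet version):

  [root@linux-node2 ~]# ls /opt/kubernetes/ssl/kubelet*
  [root@linux-node2 ~]# ls -l /opt/kubernetes/cfg/kubelet.kubeconfig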
  • 3. Deploy the Kubernetes Proxy

(1) Configure kube-proxy to use LVS
  [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
  [root@linux-node3 ~]# yum install -y ipvsadm ipset conntrack
(2) Create the kube-proxy certificate signing request
  [root@linux-node1 ~]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# vim kube-proxy-csr.json
  {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
          "algo": "rsa",
          "size":
      },
      "names": [
          {
              "C": "CN",
              "ST": "BeiJing",
              "L": "BeiJing",
              "O": "k8s",
              "OU": "System"
          }
      ]
  }
(3) Generate the certificate
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
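cfssljson -bare kube-proxy writes kube-proxy.pem, kube-proxy-key.pem and kube-proxy.csr into the current directory; listing them, and optionally inspecting the certificate subject with cfssl certinfo, confirms the CN before distributing:

  [root@linux-node1 ssl]# ls -l kube-proxy*.pem kube-proxy.csr
  [root@linux-node1 ssl]# cfssl certinfo -cert kube-proxy.pem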
(4) Distribute the certificates to all Node machines
  [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
  [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.120:/opt/kubernetes/ssl/
  [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.130:/opt/kubernetes/ssl/
(5) Create the kube-proxy kubeconfig file
  [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.110:6443 \
  --kubeconfig=kube-proxy.kubeconfig
  Cluster "kubernetes" set.

  [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
  User "kube-proxy" set.

  [root@linux-node1 ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  Context "default" created.

  [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  Switched to context "default".
(6) Distribute the kubeconfig file
  [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
  [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.120:/opt/kubernetes/cfg/
  [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.130:/opt/kubernetes/cfg/
(7) Create the kube-proxy service configuration
  [root@linux-node1 ssl]# mkdir /var/lib/kube-proxy
  [root@linux-node2 ssl]# mkdir /var/lib/kube-proxy
  [root@linux-node3 ssl]# mkdir /var/lib/kube-proxy

  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Kube-Proxy Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target

  [Service]
  WorkingDirectory=/var/lib/kube-proxy
  ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=192.168.56.120 \
  --hostname-override=192.168.56.120 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=true \
  --v= \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

  Restart=on-failure
  RestartSec=
  LimitNOFILE=

  [Install]
  WantedBy=multi-user.target

  [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.120:/usr/lib/systemd/system/kube-proxy.service
  kube-proxy.service % .4KB/s :
  [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.130:/usr/lib/systemd/system/kube-proxy.service
  kube-proxy.service % .9KB/s :
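Note that the unit file above hardcodes 192.168.56.120; on linux-node3 the --bind-address and --hostname-override presumably need to be that node's own IP. A quick adjustment (assuming the file was copied over unchanged):

  [root@linux-node3 ~]# sed -i 's/192.168.56.120/192.168.56.130/g' /usr/lib/systemd/system/kube-proxy.service
  [root@linux-node3 ~]# grep -E 'bind-address|hostname-override' /usr/lib/systemd/system/kube-proxy.service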
(8) Start the Kubernetes Proxy
  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kube-proxy
  [root@linux-node2 ~]# systemctl start kube-proxy
  [root@linux-node2 ~]# systemctl status kube-proxy

  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kube-proxy
  [root@linux-node3 ~]# systemctl start kube-proxy
  [root@linux-node3 ~]# systemctl status kube-proxy

  Check the LVS state: an IPVS virtual server has been created that forwards requests for 10.1.0.1:443 to 192.168.56.110:6443, and 6443 is the kube-apiserver port (see the check after this listing).
  [root@linux-node2 ~]# ipvsadm -Ln
  IP Virtual Server version 1.2. (size=)
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  10.1.0.1: rr persistent
    -> 192.168.56.110: Masq

  [root@linux-node3 ~]# ipvsadm -Ln
  IP Virtual Server version 1.2. (size=)
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  10.1.0.1: rr persistent
    -> 192.168.56.110: Masq
  If you have installed the kubelet and kube-proxy services on both test machines, you can check the node status with:

  [root@linux-node1 ssl]# kubectl get node
  NAME             STATUS    ROLES     AGE       VERSION
  192.168.56.120   Ready     <none>    22m       v1.10.1
  192.168.56.130   Ready     <none>    3m        v1.10.1
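The 10.1.0.1:443 virtual server corresponds to the cluster IP of the built-in kubernetes Service, which you can confirm from the master (the service cluster IP range is assumed here to start at 10.1.0.0, consistent with --cluster-dns=10.1.0.2 above):

  [root@linux-node1 ssl]# kubectl get svc kubernetes
  [root@linux-node1 ssl]# kubectl describe svc kubernetes | grep -E 'IP|Port'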

At this point the K8S cluster itself is fully deployed. Since K8S does not provide a Pod network on its own, a third-party network plugin is required before Pods can be created; the next article covers deploying Flannel to provide networking for K8S.

(9) Problem encountered: kubelet would not start, and kubectl get node reported: No resources found

  [root@linux-node1 ssl]# kubectl get node
  No resources found.

  [root@linux-node3 ~]# systemctl status kubelet
  kubelet.service - Kubernetes Kubelet
  Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
  Active: activating (auto-restart) (Result: exit-code) since Wed -- :: EDT; 1s ago
  Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: ExecStart=/opt/kubernetes/bin/kubelet --address=192.168.56.130 --hostname-override=192.168.56.130 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --cert-dir=/opt/kubernetes/ssl --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/kubernetes/bin/cni --cluster-dns=10.1.0.2 --cluster-domain=cluster.local. --hairpin-mode hairpin-veth --allow-privileged=true --fail-swap-on=false --logtostderr=true --v= --logtostderr=false --log-dir=/opt/kubernetes/log (code=exited, status=)
  Main PID: (code=exited, status=)

  May :: linux-node3.example.com systemd[]: Unit kubelet.service entered failed state.
  May :: linux-node3.example.com systemd[]: kubelet.service failed.
  [root@linux-node3 ~]# tailf /var/log/messages
  ......
  May :: linux-node3 kubelet: F0530 ::24.134612 server.go:] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

  The log says that the cgroup driver used by kubelet ("cgroupfs") does not match the cgroup driver used by docker ("systemd"). Check docker.service:

  [Unit]
  Description=Docker Application Container Engine
  Documentation=http://docs.docker.com
  After=network.target
  Wants=docker-storage-setup.service
  Requires=docker-cleanup.timer

  [Service]
  Type=notify
  NotifyAccess=all
  KillMode=process
  EnvironmentFile=-/etc/sysconfig/docker
  EnvironmentFile=-/etc/sysconfig/docker-storage
  EnvironmentFile=-/etc/sysconfig/docker-network
  Environment=GOTRACEBACK=crash
  Environment=DOCKER_HTTP_HOST_COMPAT=
  Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
  ExecStart=/usr/bin/dockerd-current \
  --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
  --default-runtime=docker-runc \
  --exec-opt native.cgroupdriver=systemd \   ### change "systemd" here to "cgroupfs"
  --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
  $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $INSECURE_REGISTRY
  ExecReload=/bin/kill -s HUP $MAINPID
  LimitNOFILE=
  LimitNPROC=
  LimitCORE=infinity
  TimeoutStartSec=
  Restart=on-abnormal
  MountFlags=slave

  [Install]
  WantedBy=multi-user.target
  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl restart docker.service
  [root@linux-node3 ~]# systemctl restart kubelet
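A quick way to confirm that the two sides now agree is to read docker's reported driver and then check that kubelet stays running (kubelet's own driver could also be pinned explicitly with the --cgroup-driver flag, which is not used in the unit file above):

  [root@linux-node3 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'
  [root@linux-node3 ~]# systemctl status kubelet | head -n 5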
