Environment
  CentOS 7

Kubernetes can be installed in three ways: yum, binaries, or kubeadm. This article demonstrates kubeadm.

I. Preparation
1. Software versions

Software     Version
Kubernetes   v1.15.3
CentOS       CentOS Linux release 7.6.1810 (Core)
Docker       docker-ce-19.03.1-3.el7.x86_64
flannel      0.11.0

2. Cluster topology

IP               Role     Hostname
192.168.118.106  master   node106 (k8s-master)
192.168.118.107  node01   node107 (k8s-node01)
192.168.118.108  node02   node108 (k8s-node02)

Node and network plan: (topology diagram omitted)

3. System settings
3.1 Configure hostnames - /etc/hosts
Add the following entries to /etc/hosts on every node:

  192.168.118.106 node106 k8s-master
  192.168.118.107 node107 k8s-node01
  192.168.118.108 node108 k8s-node02
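
The entries above only cover name resolution; if the machines do not already carry these hostnames, a sketch of setting them (run the matching command on each node) is:

  [root@node106 ~]# hostnamectl set-hostname node106
  [root@node107 ~]# hostnamectl set-hostname node107
  [root@node108 ~]# hostnamectl set-hostname node108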

3.2 Disable the firewall

  [root@node106 ~]# yum install -y net-tools
  # stop the firewall
  [root@node106 ~]# systemctl stop firewalld
  # keep it from starting on boot
  [root@node106 ~]# systemctl disable firewalld

3.3 File permissions - disable SELinux
This allows containers to access the host filesystem.

  [root@node106 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
  [root@node106 ~]# setenforce 0

3.4 Disable swap
Kubernetes aims to pack instances as close to 100% utilization as possible, and every deployment is supposed to be pinned by CPU/memory limits, so when the scheduler places a pod on a machine that pod should never touch swap.
The designers disabled swap because it slows things down, so turning swap off is primarily a performance decision. If you need to save resources (for example, when running a large number of containers), you can instead pass the kubelet flag --fail-swap-on=false.

  [root@node106 ~]# swapoff -a
  [root@node106 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
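
If you do want to keep swap enabled, a minimal sketch (assuming the /etc/sysconfig/kubelet drop-in used by the kubeadm RPM packages on CentOS 7) looks like this:

  # /etc/sysconfig/kubelet - extra flags picked up by the kubeadm-installed kubelet unit
  KUBELET_EXTRA_ARGS=--fail-swap-on=false

  # and tell kubeadm to skip the swap preflight check
  [root@node106 ~]# kubeadm init --ignore-preflight-errors=Swap ...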

3.5 Configure forwarding parameters
On RHEL/CentOS 7 traffic can be routed incorrectly because iptables is bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration.
Make sure the br_netfilter module is loaded before this step. Check with lsmod | grep br_netfilter; load it explicitly with modprobe br_netfilter.
(1) First check whether the br_netfilter module is loaded

  [root@node106 ~]# lsmod | grep br_netfilter
  br_netfilter
  bridge                 br_netfilter

(2) If it is not loaded, load it

  [root@node106 ~]# modprobe br_netfilter
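
modprobe does not persist across reboots. One way to load the module at boot (a sketch, using systemd's modules-load.d mechanism on CentOS 7) is:

  [root@node106 ~]# cat <<EOF > /etc/modules-load.d/br_netfilter.conf
  br_netfilter
  EOF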

(3) Configure net.bridge.bridge-nf-call-iptables

  [root@node106 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
  > net.bridge.bridge-nf-call-ip6tables = 1
  > net.bridge.bridge-nf-call-iptables = 1
  > EOF
  [root@node106 ~]# sysctl --system
  * Applying /usr/lib/sysctl.d/00-system.conf ...
  * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
  * Applying /usr/lib/sysctl.d/50-default.conf ...
  (default kernel.*, net.ipv4.* and fs.* settings omitted)
  * Applying /etc/sysctl.d/99-sysctl.conf ...
  * Applying /etc/sysctl.d/k8s.conf ...
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  * Applying /etc/sysctl.conf ...

4. Install Docker
(1) Configure the Docker yum repository.

  [root@node106 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
  [root@node106 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Disable the docker-ce-edge repository (development builds, unstable)

  [root@node106 ~]# yum-config-manager --disable docker-ce-edge
  [root@node106 ~]# yum makecache fast

(2) Check the Docker versions available in the official repository

  [root@node106 yum.repos.d]# yum list docker-ce.x86_64 --showduplicates |sort -r
  Loaded plugins: fastestmirror
  Loading mirror speeds from cached hostfile
   * base: mirrors.aliyun.com
   * extras: mirrors.aliyun.com
   * updates: mirrors.aliyun.com
  Available Packages
  docker-ce.x86_64    3:19.03.1-3.el7    docker-ce-stable
  (older 17.03.x - 18.09.x builds omitted)

(3) Install Docker

  [root@node106 ~]# yum install docker-ce-19.03.1-3.el7 -y

(4) Configure a registry mirror (accelerator) for faster image pulls

  [root@node106 ~]# mkdir -p /etc/docker
  [root@node106 ~]# tee /etc/docker/daemon.json <<-'EOF'
  {
    "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"]
  }
  EOF
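
kubeadm will later warn that Docker uses the "cgroupfs" cgroup driver while "systemd" is recommended. If you want to address that warning up front, one option (a sketch; it extends the same daemon.json before Docker is started) is to add the exec-opts key:

  [root@node106 ~]# tee /etc/docker/daemon.json <<-'EOF'
  {
    "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF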

(5) Start Docker

  [root@node106 ~]# systemctl daemon-reload
  [root@node106 ~]# systemctl enable docker
  [root@node106 ~]# systemctl start docker

Verify:

  [root@node106 ~]# docker -v
  Docker version 19.03.1, build 74b1e89

5. Install the Kubernetes components
5.1 Configure the Aliyun Kubernetes yum repository.

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

# Refresh the repository cache

  [root@node106 ~]# yum makecache fast -y

# List the available kubectl / kubelet / kubeadm packages

  [root@node106 ~]# yum list kubectl kubelet kubeadm
  Loaded plugins: fastestmirror
  Loading mirror speeds from cached hostfile
   * base: mirrors.aliyun.com
   * extras: mirrors.aliyun.com
   * updates: mirrors.aliyun.com
  Available Packages
  kubeadm.x86_64    1.15.3-0    kubernetes
  kubectl.x86_64    1.15.3-0    kubernetes
  kubelet.x86_64    1.15.3-0    kubernetes

# Install

  [root@node106 ~]# yum install -y kubectl kubelet kubeadm

Enable the kubelet service:

  [root@node106 ~]# systemctl enable --now kubelet
  Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

6. Load the IPVS kernel modules

ipvs (IP Virtual Server) implements transport-layer load balancing - the layer-4 switching we usually talk about - as part of the Linux kernel. It runs on a host and acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based requests to the real servers and makes their services appear as a virtual service on a single IP address. Pod load balancing is implemented by kube-proxy, which has two modes: the default iptables mode and ipvs mode; ipvs simply performs better than iptables.
(1) Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs rules.

  # Check whether they are already loaded
  [root@node106 ~]# cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
  ip_vs_sh
  ip_vs_wrr
  ip_vs_rr
  ip_vs
  nf_conntrack_ipv4

  # If not, load them with:
  modprobe ip_vs
  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  modprobe nf_conntrack_ipv4

(2) Add the modules to /etc/rc.local so they are loaded on boot

  cat <<EOF >> /etc/rc.local
  modprobe ip_vs
  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  modprobe nf_conntrack_ipv4
  EOF

(3) ipvs also requires ipset

  [root@node106 ~]# yum install ipset ipvsadm -y
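
Loading the modules alone does not switch kube-proxy to ipvs. Once the cluster is up, one way to enable ipvs mode (a sketch; kube-proxy silently falls back to iptables if the ipvs prerequisites are missing) is to edit its ConfigMap and recreate its pods:

  # change mode: "" to mode: "ipvs" in the kube-proxy configuration
  [root@node106 ~]# kubectl -n kube-system edit configmap kube-proxy

  # recreate the kube-proxy pods so they pick up the new mode
  [root@node106 ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy

  # verify the ipvs rules
  [root@node106 ~]# ipvsadm -Ln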

References:

k8s集群中ipvs负载详解
如何在kubernetes中启用ipvs
kubernetes的ipvs模式和iptables模式

II. Install the master node
1. Initialize the master node
kubeadm init --kubernetes-version=v1.15.3

1) Problems hit during initialization
First attempt at kubeadm init:

  [root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
  [init] Using Kubernetes version: v1.15.3
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Analysis:
Warning 1: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
Warning 2: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09 - only a version warning.
Warning 3: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Fix: [root@node106 ~]# systemctl enable kubelet.service
Error 1: [ERROR NumCPU]: give the virtual machine more than one CPU core.

Second attempt:

  [root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
  [init] Using Kubernetes version: v1.15.3
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  [root@node106 ~]#

Analysis:
Error 1: [ERROR ImagePull] - pulling the images fails because they are hosted on Google's servers (k8s.gcr.io). You can pull them manually with docker using the versions from the error messages, or list the required images with kubeadm config images list.

  [root@node106 ~]# kubeadm config images list
  W0906 version.go: could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  W0906 version.go: falling back to the local client version: v1.15.3
  k8s.gcr.io/kube-apiserver:v1.15.3
  k8s.gcr.io/kube-controller-manager:v1.15.3
  k8s.gcr.io/kube-scheduler:v1.15.3
  k8s.gcr.io/kube-proxy:v1.15.3
  k8s.gcr.io/pause:3.1
  k8s.gcr.io/etcd:3.3.10
  k8s.gcr.io/coredns:1.3.1
  [root@node106 ~]#

(2) Prepare the images
mirrorgooglecontainers mirrors all of the latest k8s images on Docker Hub, so pull from there and then re-tag them. (coredns is published separately under coredns/coredns, which is why it is pulled and tagged on its own.)
# Pull the images

  [root@node106 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x && docker pull coredns/coredns:1.3.1

# Re-tag the images with the k8s.gcr.io names

  [root@node106 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x && docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# Remove the now-unneeded mirror images

  [root@node106 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x && docker rmi coredns/coredns:1.3.1

Result:

  [root@node106 ~]# docker images
  REPOSITORY                           TAG       IMAGE ID       SIZE
  k8s.gcr.io/kube-proxy                v1.15.3   232b5c793146   82.4MB
  k8s.gcr.io/kube-apiserver            v1.15.3   5eb2d3fc7a44   207MB
  k8s.gcr.io/kube-controller-manager   v1.15.3   e77c31de5547   159MB
  k8s.gcr.io/kube-scheduler            v1.15.3   703f9c69a5d5   81.1MB
  k8s.gcr.io/coredns                   1.3.1     eb516548c180   40.3MB
  k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   258MB
  k8s.gcr.io/pause                     3.1       da86e6ba6ca1   742kB
  [root@node106 ~]#

(3) Initialize

The flannel network plugin will be installed later, so add --pod-network-cidr=10.244.0.0/16. 10.244.0.0/16 is the pod CIDR that flannel's default manifest uses; the value depends on which network plugin you intend to install.

  [root@node106 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.15.3
  [init] Using Kubernetes version: v1.15.3
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [node106 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.106]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 20.007081 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node node106 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node node106 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: unqj7v.wr7yvcj8i7wan93g
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

  Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
      --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337

Follow-up steps:

  [root@node106 ~]# mkdir -p $HOME/.kube
  [root@node106 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  [root@node106 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl reads its configuration from $HOME/.kube/config by default; skip this step and kubectl will report errors.
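
As an alternative for the root user, you can point kubectl straight at the admin kubeconfig instead of copying it (a sketch):

  [root@node106 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
  # make it permanent
  [root@node106 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile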

2. Configure the pod network

Flannel is an overlay-network tool that the CoreOS team designed for Kubernetes. Its goal is to give every CoreOS host running Kubernetes a complete subnet.
Flannel provides a virtual network for containers by assigning a subnet to each host. It is built on Linux TUN/TAP, encapsulates IP packets in UDP to create the overlay network, and relies on etcd to keep track of how the network is allocated.

# Download the flannel manifest

  [root@node106 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  [root@node106 ~]# ll
  total
  -rw-------. root root Aug  anaconda-ks.cfg
  -rw-r--r--  root root Sep  kube-flannel.yml

# Apply kube-flannel.yml

  [root@node106 ~]# kubectl apply -f kube-flannel.yml
  podsecuritypolicy.policy/psp.flannel.unprivileged created
  clusterrole.rbac.authorization.k8s.io/flannel created
  clusterrolebinding.rbac.authorization.k8s.io/flannel created
  serviceaccount/flannel created
  configmap/kube-flannel-cfg created
  daemonset.apps/kube-flannel-ds-amd64 created
  daemonset.apps/kube-flannel-ds-arm64 created
  daemonset.apps/kube-flannel-ds-arm created
  daemonset.apps/kube-flannel-ds-ppc64le created
  daemonset.apps/kube-flannel-ds-s390x created

# Check the master status

  [root@node106 ~]# kubectl get pods --all-namespaces
  NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
  kube-system   coredns-5c98db65d4-dwjfs          1/1     Running   0          3h57m
  kube-system   coredns-5c98db65d4-xxdr2          1/1     Running   0          3h57m
  kube-system   etcd-node106                      1/1     Running   0          3h56m
  kube-system   kube-apiserver-node106            1/1     Running   0          3h56m
  kube-system   kube-controller-manager-node106   1/1     Running   0          3h56m
  kube-system   kube-flannel-ds-amd64-srdxz       1/1     Running   0          2m32s
  kube-system   kube-proxy-8mxmm                  1/1     Running   0          3h57m
  kube-system   kube-scheduler-node106            1/1     Running   0          3h56m

If a pod is not in the Running state, something has gone wrong. Troubleshoot it like this:
Check the pod description:

  [root@node106 ~]# kubectl describe pod kube-scheduler-node106 -n kube-system

Check the logs:

  [root@node106 ~]# kubectl logs kube-scheduler-node106 -n kube-system
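
If describe and logs are not enough (for example when the kubelet never manages to start a static pod), checking the kubelet and the container runtime on the affected node is another option (a sketch):

  # follow the kubelet log on the node
  [root@node106 ~]# journalctl -u kubelet -f
  # check whether the container was created at all
  [root@node106 ~]# docker ps -a | grep kube-scheduler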

Reference: Flannel安装部署

III. Install the worker nodes

1. Download the required images
The node107 and node108 nodes only need the kube-proxy and pause images.

  [root@node107 ~]# docker images
  REPOSITORY              TAG       IMAGE ID       SIZE
  k8s.gcr.io/kube-proxy   v1.15.3   232b5c793146   82.4MB
  k8s.gcr.io/pause        3.1       da86e6ba6ca1   742kB

2. Join the nodes
The kubeadm join command printed at the end of a successful kubeadm init on the master is what adds worker nodes.
Run it on node107 and node108:

  [root@node107 ~]# kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
  >     --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
  [preflight] Reading configuration from the cluster...
  [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Activating the kubelet service
  [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  This node has joined the cluster:
  * Certificate signing request was sent to apiserver and a response was received.
  * The Kubelet was informed of the new secure connection details.

  Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Tip: if kubeadm join reports that the token has expired, run kubeadm token create on the master to generate a new one. If you have forgotten the token, list the existing ones with kubeadm token list.
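
A convenient shortcut (a sketch) is to have kubeadm print a complete, ready-to-paste join command; the token and hash below are placeholders:

  [root@node106 ~]# kubeadm token create --print-join-command
  kubeadm join 192.168.118.106:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>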

IV. Verify the cluster
1. Node status

  [root@node106 ~]# kubectl get nodes
  NAME      STATUS   ROLES    AGE     VERSION
  node106   Ready    master   4h53m   v1.15.3
  node107   Ready    <none>   101s    v1.15.3
  node108   Ready    <none>   82s     v1.15.3

2. Component status

  [root@node106 ~]# kubectl get cs
  NAME                 STATUS    MESSAGE             ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-0               Healthy   {"health":"true"}

3. Service accounts

  [root@node106 ~]# kubectl get serviceaccount
  NAME      SECRETS   AGE
  default   1         5h1m

4. Cluster info

  [root@node106 ~]# kubectl cluster-info
  Kubernetes master is running at https://192.168.118.106:6443
  KubeDNS is running at https://192.168.118.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

  To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Verify DNS

  [root@node106 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
  kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  If you don't see a command prompt, try pressing enter.
  [ root@curl-6bf6db5c4f-dn65h:/ ]$ nslookup kubernetes.default
  Server:    10.96.0.10
  Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

  Name:      kubernetes.default
  Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

V. Verify with an example application
Create an nginx service to check that the cluster actually works.

(1) Create and run a deployment

  [root@node106 ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80
  kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  deployment.apps/nginx created
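
kubectl run for Deployments is deprecated, as the warning above notes. An equivalent using non-deprecated commands (a sketch; note that kubectl create deployment labels the pods app=nginx rather than run=load-balancer-example, so a later expose would select on that label instead) would be:

  [root@node106 ~]# kubectl create deployment nginx --image=nginx
  [root@node106 ~]# kubectl scale deployment nginx --replicas=2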

(2) Expose the service as a NodePort

  [root@node106 ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
  service/example-service exposed

  # Inspect the service details
  [root@node106 ~]# kubectl describe service example-service
  Name:                     example-service
  Namespace:                default
  Labels:                   run=load-balancer-example
  Annotations:              <none>
  Selector:                 run=load-balancer-example
  Type:                     NodePort
  IP:                       10.108.73.249
  Port:                     <unset>  80/TCP
  TargetPort:               80/TCP
  NodePort:                 <unset>  <node-port>/TCP
  Endpoints:                10.244.1.4:80,10.244.2.2:80
  Session Affinity:         None
  External Traffic Policy:  Cluster
  Events:                   <none>

  # Check the service status
  [root@node106 ~]# kubectl get service
  NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
  example-service   NodePort    10.108.73.249   <none>        80:<node-port>/TCP   91s
  kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP              44h
  [root@node106 ~]#

  # Check the pods
  # The desired configuration and current state are stored in etcd; kubectl get pod makes the API Server read them from etcd.
  [root@node106 ~]# kubectl get pods
  NAME                     READY   STATUS    RESTARTS   AGE
  curl-6bf6db5c4f-dn65h    1/1     Running   0          39h
  nginx-5c47ff5dd6-hjxq8   1/1     Running   0          3m10s
  nginx-5c47ff5dd6-qj9k2   1/1     Running   0          3m10s

<node-port> above is the randomly assigned port in the 30000-32767 range; note it, because it is what you use to reach the service from outside the cluster.

(3) Access the service IP

  [root@node106 ~]# curl 10.108.73.249:80
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
      body {
          width: 35em;
          margin: 0 auto;
          font-family: Tahoma, Verdana, Arial, sans-serif;
      }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>

  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>

  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>

Accessing the endpoints returns the same page as the service IP. These addresses are only reachable from containers and nodes inside the Kubernetes cluster. Endpoints map to the service: the service load-balances across its backend endpoints, and kube-proxy implements this with iptables rules.

  [root@node106 ~]# curl 10.244.1.4:80
  [root@node106 ~]# curl 10.244.2.2:80

Accessing a node IP on the NodePort returns the same page as the cluster IP, and it works from outside the cluster.

  [root@node106 ~]# curl 192.168.118.107:<node-port>
  [root@node106 ~]# curl 192.168.118.108:<node-port>
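
To see the service-to-endpoint mapping described above, you can list the endpoints object and peek at the NAT rules kube-proxy programs (a sketch; the chain name applies to the default iptables mode):

  [root@node106 ~]# kubectl get endpoints example-service
  [root@node106 ~]# iptables -t nat -L KUBE-SERVICES -n | grep example-service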

The whole deployment flow looks like this:
① kubectl sends the deployment request to the API Server.
② The API Server notifies the Controller Manager to create a deployment resource.
③ The Scheduler performs scheduling and assigns the two replica pods to node01 and node02.
④ The kubelet on node01 and node02 creates and runs the pods on its own node.
flannel assigns an IP address to every pod.
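
You can confirm this placement and the flannel-assigned pod IPs with (a sketch):

  [root@node106 ~]# kubectl get pods -o wide
  # the NODE column should show node107/node108 and the IP column should show 10.244.x.x pod addresses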

References:
yum安装Kubernetes
二进制安装Kubernetes
kubeadm安装Kubernetes
手把手教你在CentOS上搭建Kubernetes集群
Installing kubeadm (official documentation)
Kubernetes最新版本安装过程和注意事项
kubeadm安装kubernetes1.13集群
