Introduction:

Kubernetes, the container orchestration platform open-sourced by Google, has become enormously popular, and standing up a complete Kubernetes cluster is the first hurdle anyone trying it out has to clear. Up to and including Kubernetes 1.5 installation was relatively painless; the official docs even covered installing Kubernetes on CentOS 7 straight from a yum repository. From 1.6 onwards the process became much more involved, with certificates and various kinds of authentication to set up, which is unfriendly to newcomers.

docker : the container runtime that Kubernetes relies on
kubelet: the core Kubernetes node agent; one instance runs on every node and manages the lifecycle of pods and the node itself
kubectl: the Kubernetes command-line client; in this setup it is only configured on the master (it can run anywhere a kubeconfig is available)
kubeadm: the tool used to bootstrap, i.e. initialize, a Kubernetes cluster

Cluster layout:

Configure /etc/hosts

[root@master /]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
18.16.202.163 master
18.16.202.227 slaver1
18.16.202.95 slaver2

Configure proxy access to the internet:

Add the following to /etc/profile:

export http_proxy="http://18.16.202.169:8118"
export https_proxy="https://18.16.202.169:8118" printf -v no_ip_proxy '%s,' 18.16.202.{1..255}; export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"

Note that wildcard (*) patterns cannot be used in no_proxy; Linux does not support them there. The same configuration for a different network, with the proxy address factored into a variable:

ip_host="192.168.3.7:8118"
export http_proxy="http://${ip_host}"
export https_proxy="https://${ip_host}" printf -v no_ip_proxy '%s,' 192.168.236.{1..255}; export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"

To disable the proxy temporarily, unset the variables:

unset https_proxy
unset http_proxy

To re-enable the proxy, re-source the profile:

source /etc/profile
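To confirm the proxy variables are active in the current shell, a quick check (the test URL here is just an example):

env | grep -i _proxy
curl -I https://k8s.gcr.io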

Prerequisites on all nodes:

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Let iptables see bridged traffic:

Create /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
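An optional sanity check that the module is loaded and the settings are in effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward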

Disable SELinux

setenforce 0

Then edit /etc/selinux/config and set SELINUX to disabled so the change persists across reboots:

sed -i -E 's/^SELINUX=(enforcing|permissive)$/SELINUX=disabled/' /etc/selinux/config

# /etc/selinux/config should now contain: SELINUX=disabled

Disable swap

Starting with Kubernetes 1.8, swap must be disabled on each node; with the default configuration the kubelet will refuse to start otherwise. There are two options: (1) pass the kubelet startup flag --fail-swap-on=false to lift the restriction, or (2) disable swap on the system. Option (1) is sketched below for reference; this guide uses option (2).
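A minimal sketch of option (1), assuming the kubeadm RPM packaging where the kubelet systemd unit reads /etc/sysconfig/kubelet; apply it only after kubelet is installed, and note that kubeadm init will still need --ignore-preflight-errors=Swap:

# overwrite the env file shipped by the kubeadm RPM (assumption: default layout)
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' > /etc/sysconfig/kubelet
systemctl restart kubelet

This guide, however, proceeds with option (2):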

swapoff -a

Then edit /etc/fstab and comment out the swap mount so it stays off after a reboot; use free -m to confirm swap is disabled.

# comment out the swap partition
[root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab
# the swap line in /etc/fstab is now commented out: #/dev/mapper/centos-swap swap swap defaults 0 0
[root@localhost /]# free -m
total used free shared buff/cache available
Mem: 962 154 446 6 361 612
Swap: 0 0 0

Additionally, set vm.swappiness to 0 so the kernel avoids swapping:

echo "vm.swappiness = 0">> /etc/sysctl.conf

Prerequisites for enabling IPVS in kube-proxy

IPVS is part of the mainline kernel, so enabling IPVS mode in kube-proxy only requires loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes (master, slaver1 and slaver2 in this setup):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules are loaded correctly.

Next, make sure the ipset package is installed on every node.

yum install -y ipset

To make it easier to inspect the IPVS proxy rules, also install the management tool ipvsadm.

yum install -y ipvsadm

If these prerequisites are not met, kube-proxy falls back to iptables mode even when IPVS mode is enabled in its configuration.

Install Docker

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast

List the available Docker versions:

[root@localhost /]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror
Available Packages
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
* extras: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable

Install Docker:

# to install the latest version instead: sudo yum -y install docker-ce
sudo yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl enable docker.service
systemctl restart docker

Docker CE 18.09 is installed here.

Enable Docker to start on boot:

[root@master /]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Set the Docker cgroup driver to systemd

On distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes nodes more stable under resource pressure, so change the cgroup driver to systemd on every node.

Create or edit /etc/docker/daemon.json:

{
  "registry-mirrors": ["https://tqvgn53t.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd

Install kubeadm and kubelet

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable:

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

Install:

yum makecache fast
yum install -y kubelet kubeadm kubectl

Pull the Docker images required by kubeadm init

Before initializing the cluster, the images Kubernetes needs can be pre-pulled on each node with kubeadm config images pull:

[root@localhost /]# kubeadm config images list
W0725 10:52:57.395062 8776 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0725 10:52:57.395395 8776 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@localhost /]# kubeadm config images pull
W0725 10:55:12.586377 8781 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: proxyconnect tcp: net/http: TLS handshake timeout
W0725 10:55:12.586550 8781 version.go:99] falling back to the local client version: v1.15.1

This is clearly a network problem: the images hosted on k8s.gcr.io cannot be reached.

A mirror found online works instead; create a shell script with the following content and run it:

MY_REGISTRY=gcr.azk8s.cn/google-containers

## pull the images from the mirror
docker pull ${MY_REGISTRY}/kube-apiserver:v1.15.1
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.15.1
docker pull ${MY_REGISTRY}/kube-scheduler:v1.15.1
docker pull ${MY_REGISTRY}/kube-proxy:v1.15.1
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/etcd:3.3.10
docker pull ${MY_REGISTRY}/coredns:1.3.1

## retag the images with the names kubeadm expects
docker tag ${MY_REGISTRY}/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag ${MY_REGISTRY}/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag ${MY_REGISTRY}/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

## remove the now-redundant mirror-tagged images
docker images | grep ${MY_REGISTRY} | awk '{print "docker rmi " $1":"$2}' | sh -x
echo "end"

All of the steps above can be done on one node and the resulting images copied to the other nodes, as sketched below.
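If the other nodes cannot reach the mirror either, one way to replicate the images is to export them where they already exist and load them on the rest (a sketch; the hostnames and path are examples from this setup):

# on the node that already has the images
docker save -o /tmp/k8s-images.tar $(docker images --format '{{.Repository}}:{{.Tag}}' 'k8s.gcr.io/*')
scp /tmp/k8s-images.tar slaver1:/tmp/
scp /tmp/k8s-images.tar slaver2:/tmp/

# on each of the other nodes
docker load -i /tmp/k8s-images.tar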

Cluster setup

kubeadm init configuration

kubeadm config print init-defaults prints the default configuration used for cluster initialization:

[root@localhost /]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

As the defaults show, imageRepository can be used to customize where the images needed during cluster initialization are pulled from; a sketch follows.
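For example, instead of re-tagging images by hand, the ClusterConfiguration could point imageRepository at a mirror directly (a sketch; the registry address below is an example and should be verified to host all required images before relying on it):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
# example mirror address (assumption) - confirm it serves all control-plane images
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16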

Based on the defaults, the configuration file used to initialize this cluster is:

kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 18.16.202.163
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16

A cluster initialized with the default kubeadm configuration taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from being scheduled to run workloads. Because this test environment has only a couple of nodes, the taint is changed here to node-role.kubernetes.io/master:PreferNoSchedule. A way to check or adjust the taint later is sketched below.
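Once the cluster is up, the taint can be inspected or changed without re-running kubeadm (a sketch; the node name master matches this setup):

# show the taints currently set on the master node
kubectl describe node master | grep -A2 Taints

# example: remove the taint entirely so the master accepts workloads
kubectl taint nodes master node-role.kubernetes.io/master:PreferNoSchedule-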

Initialize the cluster with kubeadm

Initialize the cluster by running the following command on the master:

Because the virtual machine used here has only one CPU, the flag --ignore-preflight-errors=NumCPU is added; if you have enough CPUs, leave it out.

[root@master /]# kubeadm init --config /home/kubeadm.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 18.16.202.163]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 46.528199 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: jrts59.18pe12atfafgcxca
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
    --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426

The above is the complete output of the initialization, and it shows the key steps a manual Kubernetes installation would have to perform. The important parts are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [certs] generates the various certificates

  • [kubeconfig] generates the kubeconfig files

  • [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the manifests in /etc/kubernetes/manifests

  • [bootstrap-token] generates the bootstrap token; record it, since it is needed later when adding nodes with kubeadm join (see the note after this list on regenerating an expired token)

  • the following commands set up kubectl access for a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • finally, it prints the command for joining nodes to the cluster: kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426
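The bootstrap token is only valid for 24 hours by default (ttl: 24h0m0s in the defaults above). If it has expired by the time you add a node, a fresh join command can be generated on the master:

kubeadm token create --print-join-command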

If anything goes wrong during initialization, reset and start over with:

kubeadm reset

Check the cluster status and confirm that all components are Healthy:

[root@master /]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}

Join the slaver nodes to the cluster

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
--discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426

Check the nodes

On the master:

[root@master /]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready master 5h7m v1.15.1 18.16.202.163 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8
slaver1 Ready <none> 4h38m v1.15.1 18.16.202.227 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8
slaver2 Ready <none> 4h35m v1.15.1 18.16.202.95 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8

Restart kubelet:

# reload any modified unit files
systemctl daemon-reload
# start kubelet (use restart if it is already running)
systemctl start kubelet.service
# enable kubelet to start on boot
systemctl enable kubelet.service

View cluster info:

[root@master /]# kubectl cluster-info
Kubernetes master is running at https://18.16.202.163:6443
KubeDNS is running at https://18.16.202.163:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check that the pods are running:

[root@master /]#  kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-gts57 1/1 Running 0 5h9m 10.244.2.2 slaver2 <none> <none>
kube-system coredns-5c98db65d4-qhwrw 1/1 Running 0 5h9m 10.244.1.2 slaver1 <none> <none>
kube-system etcd-master 1/1 Running 2 5h9m 18.16.202.163 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 2 5h8m 18.16.202.163 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 5 5h9m 18.16.202.163 master <none> <none>
kube-system kube-flannel-ds-amd64-2lwl8 1/1 Running 0 41m 18.16.202.227 slaver1 <none> <none>
kube-system kube-flannel-ds-amd64-9bjck 1/1 Running 0 41m 18.16.202.95 slaver2 <none> <none>
kube-system kube-flannel-ds-amd64-gxxqg 1/1 Running 0 41m 18.16.202.163 master <none> <none>
kube-system kube-proxy-6gxw9 1/1 Running 0 4h39m 18.16.202.227 slaver1 <none> <none>
kube-system kube-proxy-rx8vv 1/1 Running 0 4h37m 18.16.202.95 slaver2 <none> <none>
kube-system kube-proxy-skw5b 1/1 Running 3 5h9m 18.16.202.163 master <none> <none>
kube-system kube-scheduler-master 1/1 Running 6 5h8m 18.16.202.163 master <none> <none>

Install a pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

If a node has more than one network interface, refer to flannel issue 39701: currently the --iface argument must be added in kube-flannel.yml to name the host's internal-network interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

# If a node has more than one network interface, see flannel issue 39701
# (https://github.com/kubernetes/kubernetes/issues/39701).
# Currently --iface has to be added in kube-flannel.yml to name the host's
# internal interface; otherwise DNS resolution or container-to-container
# traffic may fail. Download kube-flannel.yml locally and add
# --iface=<iface-name> to the flanneld startup arguments:
      containers:
      - name: kube-flannel
        image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        - --iface=eth0
⚠️ The value of --iface (ens33 here) must match your node's actual interface name; multiple --iface flags can be specified, as shown above. A quick way to find the interface name is sketched below.
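To find which interface carries the node's internal IP, list the IPv4 addresses per interface and look for the line containing the node's address (e.g. 18.16.202.163 on this master):

ip -o -4 addr show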

Test whether cluster DNS works

[root@master /]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

Inside the pod, run nslookup kubernetes.default to confirm name resolution works:

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

The container's current network configuration:

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ ifconfig
eth0 Link encap:Ethernet HWaddr D6:20:96:C7:DA:5A
inet addr:10.244.2.3 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2285 (2.2 KiB) TX bytes:889 (889.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ exit
Session ended, resume using 'kubectl attach curl-6bf6db5c4f-vhsqc -c curl -i -t' command when the pod is running

Check the nodes:

# nodes only show Ready after the network plugin has been installed and configured

[root@master /]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 6h45m v1.15.1
slaver1 Ready <none> 6h15m v1.15.1
slaver2 Ready <none> 6h12m v1.15.1

Remove a node from the cluster

To remove a node (node2 in this example) from the cluster, run the following commands:

On the master:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
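kubeadm reset does not clean up iptables or IPVS rules on the node; per the kubeadm documentation they can be cleared manually if needed (run with care):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# if kube-proxy was running in IPVS mode
ipvsadm -C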

Enable IPVS mode in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

[root@master /]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited

(cm is the short name for configmaps.)

After the change, the kube-proxy configuration looks like this:

apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy

The mode: "ipvs" line is the only change from the default configuration.

Restart the kube-proxy pods on every node (deleting them lets the DaemonSet recreate them with the new configuration):

[root@master /]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-6gxw9" deleted
pod "kube-proxy-rx8vv" deleted
pod "kube-proxy-skw5b" deleted

Verify:

[root@master /]# kubectl get pod -n kube-system -o wide | grep kube-proxy
kube-proxy-8cwj4 1/1 Running 0 2m35s 18.16.202.163 master <none> <none>
kube-proxy-j9zpz 1/1 Running 0 2m48s 18.16.202.227 slaver1 <none> <none>
kube-proxy-vfgjv 1/1 Running 0 2m38s 18.16.202.95 slaver2 <none> <none>

[root@master /]# kubectl logs kube-proxy-8cwj4 -n kube-system
I0729 07:05:35.580934 1 server_others.go:170] Using ipvs Proxier.
W0729 07:05:35.585891 1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0729 07:05:35.588572 1 server.go:534] Version: v1.15.1
I0729 07:05:35.642475 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0729 07:05:35.653344 1 config.go:96] Starting endpoints config controller
I0729 07:05:35.654584 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0729 07:05:35.654629 1 config.go:187] Starting service config controller
I0729 07:05:35.654649 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0729 07:05:35.755738 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0729 07:05:35.755806 1 controller_utils.go:1036] Caches are synced for service config controller

The log line "Using ipvs Proxier" confirms that IPVS mode is active; the rules themselves can be inspected as sketched below.
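Since ipvsadm was installed earlier, the IPVS virtual servers and their backends can also be listed directly on any node:

ipvsadm -Ln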

