Background

As of this writing, Kubernetes has reached the much-discussed v1.14 release. This post records the installation and deployment steps, along with the troubleshooting done along the way.

There are generally two ways to deploy k8s: kubeadm (officially GA and usable in production) and binary installation (rather tedious).

Here we use kubeadm for a test deployment.

Test environment

System      Hostname    IP
CentOS 7.6  k8s-master  138.138.82.14
CentOS 7.6  k8s-node1   138.138.82.15
CentOS 7.6  k8s-node2   138.138.82.16

Network plugin: calico

Steps

1. Environment preparation (run on all hosts)

Disable firewalld:

systemctl stop firewalld && systemctl disable firewalld
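Disabling firewalld is the simplest route for a throwaway test cluster. If you would rather keep it running, here is a sketch of opening the master-side ports kubeadm needs instead (port list assumed from the Kubernetes documentation, not part of the original setup; workers additionally need 10250/tcp and 30000-32767/tcp):

firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, controller-manager, scheduler
firewall-cmd --reload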

Disable SELinux:

setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap:

swapoff -a && sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab

Switch to the Aliyun yum mirror:

wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Update /etc/hosts: on every host, add the IPs and hostnames of all k8s nodes to this file; otherwise initialization will produce warnings or even errors. See the example below.
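For the three hosts in the table above, that means appending:

cat >> /etc/hosts <<EOF
138.138.82.14 k8s-master
138.138.82.15 k8s-node1
138.138.82.16 k8s-node2
EOF

One more tweak that is not in the original notes but that kubeadm's preflight checks commonly require on CentOS 7 (an assumption about your kernel defaults; skip it if the checks pass):

modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system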

2. Install the Docker engine (run on all hosts)

Add the Aliyun Docker repo (the target file must end in .repo, or yum will ignore it):

wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker:

yum install docker-ce -y

Enable and start Docker:

systemctl enable docker && systemctl start docker

Tune two Docker settings: point the registry mirror at Aliyun to speed up image pulls, and switch the cgroup driver from the default cgroupfs to systemd, which Kubernetes recommends (otherwise kubeadm init emits a warning). Note that JSON does not allow comments, so the file must contain only the bare keys:

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker

Verify Docker's cgroup driver:

[root@k8s-master ~]# docker info |grep Cgroup
Cgroup Driver: systemd

3. Install the Kubernetes bootstrap tools (run on all hosts)

Use the Aliyun Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the tools (at the time of writing this pulls the latest version, 1.14.1):

yum install -y kubelet kubeadm kubectl

Enable kubelet:

systemctl enable kubelet && systemctl start kubelet

It is normal for kubelet to fail to start at this point; it keeps restarting until kubeadm init (or join) writes its configuration, after which it comes up on its own.

4. Pre-pull the required images (run on the master node)

List the images, and their versions, that cluster initialization requires:

[root@k8s-master ~]# kubeadm config images list
……
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

These essential images live on k8s.gcr.io, which is blocked in mainland China, so they must be pulled in advance through a mirror before the cluster can be initialized.

Pull script:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

# Pull each image from the Aliyun mirror, retag it with the
# k8s.gcr.io name kubeadm expects, then drop the mirror tag.
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
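Save it as, say, pull-images.sh (the name is arbitrary), run it, then confirm the retagged images are present:

chmod +x pull-images.sh && ./pull-images.sh
docker images | grep k8s.gcr.io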

5. Initialize the cluster (run on the master node)

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16

Note: a network plugin will be installed right after initialization. Calico was chosen here, and its default IP pool is 192.168.0.0/16, hence --pod-network-cidr=192.168.0.0/16.
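As an aside, kubeadm also accepts an --image-repository flag that pulls the control-plane images straight from a mirror, which can replace the pre-pull script in step 4. A sketch, not what was used in this walkthrough:

kubeadm init --kubernetes-version=v1.14.1 \
    --pod-network-cidr=192.168.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers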

Sample output of the initialization:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 138.138.82.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002739 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 57iu95.6narx7y8peauts76
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
    --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4

The output above confirms the initialization succeeded and spells out the follow-up steps plus the command for joining nodes to the cluster; just follow along.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
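When working as root there is a simpler alternative to the copy-and-chown above (a standard kubeadm convention, not shown in this version's output):

export KUBECONFIG=/etc/kubernetes/admin.conf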

Check the running pods:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-6mgks              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
coredns-fb8b8dccf-cbtlx              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
etcd-k8s-master                      1/1     Running   0          8m22s   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          8m19s   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          8m30s   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          9m7s    138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          8m6s    138.138.82.14   k8s-master   <none>           <none>

At this point everything is up except coredns, which is not yet Ready. That is expected: no network plugin is installed. Once calico is installed in the next step, coredns transitions to Running.

6. Install calico (run on the master node)

Calico documentation: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

kubectl apply -f \
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
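The manifest's default IPv4 pool is 192.168.0.0/16, which matches the --pod-network-cidr used above. If you initialized with a different CIDR, download and edit the manifest before applying it — a sketch (the sed target assumes the v3.5 manifest's default pool value; 10.244.0.0/16 is an example):

curl -o calico.yaml https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml   # only if your pod CIDR differs
kubectl apply -f calico.yaml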

After applying the official yaml, wait a moment and all pods reach Running, each with an IP assigned:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
calico-node-r5mlj                    1/1     Running   0          72s   138.138.82.14   k8s-master   <none>           <none>
coredns-fb8b8dccf-6mgks              1/1     Running   0          15m   192.168.0.7     k8s-master   <none>           <none>
coredns-fb8b8dccf-cbtlx              1/1     Running   0          15m   192.168.0.6     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          14m   138.138.82.14   k8s-master   <none>           <none>

Check the node status:

[root@k8s-master ~]# kubectl get node -owide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    master   22m   v1.14.1   138.138.82.14   <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5

With that, the cluster is initialized and the master node is ready; next, join the worker nodes to the cluster.

7. Join the cluster (run on the worker nodes)

First pre-pull the images each joining node needs, using this script:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

# Workers only need kube-proxy and pause; the name must match the
# kubeadm image list above (kube-proxy, not kube-proxy-amd64).
images=(kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

Then copy the join command from the master's init output and run it on each worker:

[root@k8s-node1 ~]# kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
> --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
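If the token from the init output has expired (bootstrap tokens last 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command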

8. Check the status of all nodes on the master

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 26m v1.14.1
k8s-node1 Ready <none> 84s v1.14.1
k8s-node2 Ready <none> 74s v1.14.1
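Worker nodes showing <none> under ROLES is normal for kubeadm. The column is driven by a node-role.kubernetes.io/* label, so if you prefer a labeled output it can be set by hand (purely cosmetic):

kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=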

With that, a minimal cluster is fully deployed.

Next, deploy additional add-ons.

Next post: calicoctl, the calico command-line client.

The end.
