Background

As of this writing, the much-discussed Kubernetes has reached version 1.14. This post records the installation and deployment steps, along with troubleshooting notes from the process.

There are generally two ways to deploy k8s: kubeadm (officially GA and usable in production) and binary installation (rather tedious).

Here we use kubeadm for a test deployment.

Test environment

System       Hostname     IP
CentOS 7.6   k8s-master   138.138.82.14
CentOS 7.6   k8s-node1    138.138.82.15
CentOS 7.6   k8s-node2    138.138.82.16

Network plugin: calico

Steps

1. Environment preparation (on all hosts)

Disable firewalld:

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux:

setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap:

swapoff -a && sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab

Use the Aliyun yum mirror:

wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Update /etc/hosts: on every host, add the IPs and hostnames of all k8s nodes to this file; otherwise warnings or even errors will appear during initialization.
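
For the test environment above, that means appending something like the following on every host (IPs and hostnames taken from the table in the previous section):

cat >> /etc/hosts <<EOF
138.138.82.14 k8s-master
138.138.82.15 k8s-node1
138.138.82.16 k8s-node2
EOF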

2. Install the Docker engine (on all hosts)

Add the Aliyun docker repo (the target file needs a .repo extension for yum to pick it up):

wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install docker:

yum install docker-ce -y

Start docker:

systemctl enable docker && systemctl start docker

Adjust some docker parameters: point the registry mirror at Aliyun, and switch the cgroup driver to systemd (the default is cgroupfs, but k8s officially recommends systemd; otherwise init prints a warning). Note that daemon.json must be valid JSON, so it cannot carry inline comments:

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker

Verify docker's Cgroup Driver:

[root@k8s-master ~]# docker info |grep Cgroup
Cgroup Driver: systemd

3. Install the Kubernetes bootstrap tools (on all hosts)

Use the Aliyun kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the tools (the latest version at the time of writing is 1.14.1):

yum install -y kubelet kubeadm kubectl
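
If the mirror has moved past 1.14 by the time you run this, the versions can be pinned explicitly; a sketch, assuming these exact RPM versions are still available in the repo:

yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1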

Enable and start kubelet (it is normal for the start to fail at this point; it will succeed once the cluster is initialized later):

systemctl enable kubelet && systemctl start kubelet
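
To see why kubelet keeps failing in the meantime, the usual systemd tools apply; it restarts in a loop until kubeadm writes /var/lib/kubelet/config.yaml during init:

systemctl status kubelet
journalctl -u kubelet -f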

4. Pre-pull the required images (on the master node)

List the images (and their versions) required for cluster initialization:

[root@k8s-master ~]# kubeadm config images list
……
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Since these key images are all hosted on k8s.gcr.io and blocked, they must be pulled separately in advance before the cluster can be initialized.

Download script:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done
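
Save the script under any name, for example pull-images.sh (the name is arbitrary), then run it and verify the re-tagged images are in place:

chmod +x pull-images.sh && ./pull-images.sh
docker images | grep k8s.gcr.io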

5. Initialize the cluster (on the master node)

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16

Note: a network plugin will be installed after initialization. Calico is the choice here, so --pod-network-cidr is set to 192.168.0.0/16, calico's default pod CIDR.
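
As an aside, kubeadm (1.13+) also accepts an --image-repository flag, so in principle init can pull straight from a mirror instead of relying on the pre-pull script from step 4; a sketch that was not used in this deployment:

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16 \
    --image-repository registry.aliyuncs.com/google_containers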

Sample init output:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 138.138.82.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002739 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 57iu95.6narx7y8peauts76
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
    --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4

The output above shows that initialization succeeded, and lists the necessary follow-up steps plus the command for joining nodes to the cluster; just follow along.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
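
With the kubeconfig in place, a quick sanity check that kubectl can reach the API server (standard kubectl commands; output omitted):

[root@k8s-master ~]# kubectl cluster-info
[root@k8s-master ~]# kubectl version --short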

Check the pods that are already running:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-6mgks              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
coredns-fb8b8dccf-cbtlx              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
etcd-k8s-master                      1/1     Running   0          8m22s   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          8m19s   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          8m30s   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          9m7s    138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          8m6s    138.138.82.14   k8s-master   <none>           <none>

At this point everything is up except coredns, which is not yet ready. That is expected, because no network plugin is installed; once calico is installed in the next step it will go to Running.

6. Install calico (on the master node)

Calico docs: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

kubectl apply -f \
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

After applying the official yaml, wait a bit and all pods reach Running, with pod IPs assigned:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
calico-node-r5mlj                    1/1     Running   0          72s   138.138.82.14   k8s-master   <none>           <none>
coredns-fb8b8dccf-6mgks              1/1     Running   0          15m   192.168.0.7     k8s-master   <none>           <none>
coredns-fb8b8dccf-cbtlx              1/1     Running   0          15m   192.168.0.6     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          14m   138.138.82.14   k8s-master   <none>           <none>

Check node status:

[root@k8s-master ~]# kubectl get node -owide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    master   22m   v1.14.1   138.138.82.14   <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5

At this point the cluster is initialized and the master node is ready; next, join the worker nodes to the cluster.

7. Join the cluster (on the non-master nodes)

First pull the required images on each node that will join the cluster, using the following script:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done

Then take the join command from the master's init output and run it on each worker node:

[root@k8s-node1 ~]# kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
> --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
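
Note that bootstrap tokens expire after 24 hours by default; if the token from the init output no longer works when adding a node later, generate a fresh join command on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command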

8. Check the status of all nodes (on the master node)

[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 26m v1.14.1
k8s-node1 Ready <none> 84s v1.14.1
k8s-node2 Ready <none> 74s v1.14.1

At this point, a minimal cluster is up and running.
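
As an optional smoke test (not part of the original steps; the deployment name and image are arbitrary examples), run a throwaway workload and confirm it gets scheduled onto a worker:

kubectl create deployment nginx --image=nginx
kubectl get pod -owide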

Next up: deploying other add-ons.

Next post: calicoctl, the calico client tool

End.
