I previously tested installing and configuring a Kubernetes cluster from binaries in an offline environment. During that work I heard that kubeadm makes cluster setup much easier, so I decided to try it out. There were still a few pitfalls along the way, but operationally it is simpler than the binary approach, since you do not have to hand-craft so many configuration files. On the other hand, it teaches you less about how Kubernetes actually works, and because kubeadm runs most of the components the cluster depends on as containers on the master node, it seems to consume noticeably more VM resources than the binary installation.

0. Introduction to kubeadm and Preparations

kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their application on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tool with a larger scope.

kubeadm is a tool written in Go whose source code lives in the main Kubernetes repository. It helps you deploy a Kubernetes cluster quickly, but at the time of writing it is positioned mainly for test environments; think twice before using it in production.

The environment used in this article:

  • Hypervisor: VirtualBox
  • OS: CentOS 7.3 minimal install
  • NICs: two network cards per VM, one Host-Only and one NAT.
  • Network plan:
    • Master: 192.168.0.101
    • Nodes: 192.168.0.102-104
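Since the kubeadm preflight checks later warn that the hostname cannot be resolved, it helps to map the planned addresses to hostnames on every machine. A minimal sketch; devops-102 through devops-104 are hypothetical names, patterned after the devops-101 master that appears in the kubeadm init output:

```shell
# Hosts entries for the planned nodes. devops-102..104 are assumed names
# modeled on the devops-101 master; adjust them to your actual hostnames.
entries='192.168.0.101 devops-101
192.168.0.102 devops-102
192.168.0.103 devops-103
192.168.0.104 devops-104'
echo "$entries"
# on each node, append them as root:  echo "$entries" >> /etc/hosts
```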

0.1 Disable SELinux

  $ setenforce 0
  $ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

0.2 Disable the Firewall

  $ systemctl stop firewalld
  $ systemctl disable firewalld

0.3 Disable Swap

  $ swapoff -a
  $ sed -i 's/.*swap.*/#&/' /etc/fstab
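The sed command above comments out every fstab line that mentions swap. If you want to see what it does before touching the real /etc/fstab, the substitution can be tried on a throwaway copy first (the sample entries below are made up for illustration):

```shell
# Demonstrate the fstab edit from above on a temporary sample file, so the
# substitution can be verified before applying it to the real /etc/fstab.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
sed -i 's/.*swap.*/#&/' "$tmp"
cat "$tmp"   # the swap line is now prefixed with '#'; the root line is untouched
rm -f "$tmp"
```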

0.4 Configure Forwarding Parameters

  $ cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  $ sysctl --system
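These two settings make traffic crossing Linux bridges visible to iptables, which kube-proxy and the network add-on rely on. A quick way to confirm the config file was written correctly is a small check function (a sketch of my own, not part of any official tooling); it reads a config file rather than the live kernel, so it can be demonstrated anywhere:

```shell
# Sketch: verify that both bridge-netfilter keys are present and set to 1
# in a sysctl config file. On a real node, point it at /etc/sysctl.d/k8s.conf.
check_k8s_conf() {
    local conf=$1 key
    for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
        grep -q "^$key = 1$" "$conf" || { echo "missing: $key"; return 1; }
    done
    echo "ok"
}
# demo on a temporary copy of the file written above
tmp=$(mktemp)
printf 'net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' > "$tmp"
check_k8s_conf "$tmp"   # prints: ok
rm -f "$tmp"
```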

0.5 Configure a Domestic (China) Yum Mirror

  $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

0.6 Install Some Essential Tools

  $ yum install -y epel-release
  $ yum install -y net-tools wget vim ntpdate

1. Install the Software Required by kubeadm (Run on All Nodes)

1.1 Install Docker

  $ yum install -y docker
  $ systemctl enable docker && systemctl start docker
  $ # enable the docker system service; if you skip this, kubeadm init will emit a warning later
  $ systemctl enable docker.service

If you want to install the latest version of Docker from binaries instead, see my earlier article on installing Docker offline on Red Hat 7.3.

1.2 Install kubeadm, kubectl, and kubelet

  $ yum install -y kubelet kubeadm kubectl kubernetes-cni
  $ systemctl enable kubelet && systemctl start kubelet

After this step the kubelet still cannot run properly; it stays in the state described below.

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

2. Set Up the Master Node

Because the Google image registry cannot be reached from mainland China, the workaround is to pull the images from another registry and retag them. Running the following shell script does the job.

  #!/bin/bash
  images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
  etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
  k8s-dns-dnsmasq-nanny-amd64:1.14.9)
  for imageName in "${images[@]}"; do
      docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
      docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
      #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
  done
  # kubeadm also looks for this image under the name k8s.gcr.io/pause:3.1,
  # so tag it by name rather than by a hard-coded image ID
  docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
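The retagging in the script is just a name rewrite: strip the mirror prefix and put k8s.gcr.io in front. A small helper (purely illustrative, not part of the original script) makes that mapping explicit:

```shell
# Compute the k8s.gcr.io name that kubeadm expects from a mirror-hosted
# image name by swapping the registry prefix.
MIRROR=registry.cn-hangzhou.aliyuncs.com/k8sth
gcr_name() {
    echo "k8s.gcr.io/${1#"$MIRROR"/}"
}
gcr_name "$MIRROR/etcd-amd64:3.2.18"   # prints k8s.gcr.io/etcd-amd64:3.2.18
```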

Next, initialize the master node. Because my VM has two NICs, I need to specify the address the API server should advertise.

  [root@devops-101 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.101
  [init] using Kubernetes version: v1.11.0
  [preflight] running pre-flight checks
  I0724 08:36:35.636931    3409 kernel_validator.go:81] Validating kernel version
  I0724 08:36:35.637052    3409 kernel_validator.go:96] Validating kernel config
          [WARNING Hostname]: hostname "devops-101" could not be reached
          [WARNING Hostname]: hostname "devops-101" lookup devops-101 on 172.20.10.1:53: no such host
          [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
  [preflight/images] Pulling images required for setting up a Kubernetes cluster
  [preflight/images] This might take a minute or two, depending on the speed of your internet connection
  [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [preflight] Activating the kubelet service
  [certificates] Generated ca certificate and key.
  [certificates] Generated apiserver certificate and key.
  [certificates] apiserver serving cert is signed for DNS names [devops-101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.101]
  [certificates] Generated apiserver-kubelet-client certificate and key.
  [certificates] Generated sa key and public key.
  [certificates] Generated front-proxy-ca certificate and key.
  [certificates] Generated front-proxy-client certificate and key.
  [certificates] Generated etcd/ca certificate and key.
  [certificates] Generated etcd/server certificate and key.
  [certificates] etcd/server serving cert is signed for DNS names [devops-101 localhost] and IPs [127.0.0.1 ::1]
  [certificates] Generated etcd/peer certificate and key.
  [certificates] etcd/peer serving cert is signed for DNS names [devops-101 localhost] and IPs [192.168.0.101 127.0.0.1 ::1]
  [certificates] Generated etcd/healthcheck-client certificate and key.
  [certificates] Generated apiserver-etcd-client certificate and key.
  [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
  [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
  [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
  [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
  [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
  [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
  [init] this might take a minute or longer if the control plane images have to be pulled
  [apiclient] All control plane components are healthy after 46.002877 seconds
  [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
  [markmaster] Marking the node devops-101 as master by adding the label "node-role.kubernetes.io/master=''"
  [markmaster] Marking the node devops-101 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devops-101" as an annotation
  [bootstraptoken] using token: wkj0bo.pzibll6rd9gyi5z8
  [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes master has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of machines by running the following on each node
  as root:

    kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05

Seeing the output above means the master node has been initialized successfully. To manage the cluster as a regular user, follow the printed instructions; to manage it as root, run the commands below.

  [root@devops-101 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
  [root@devops-101 ~]# kubectl get nodes
  NAME         STATUS     ROLES     AGE       VERSION
  devops-101   NotReady   master    7m        v1.11.1
  [root@devops-101 ~]# kubectl get pods --all-namespaces
  NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
  kube-system   coredns-78fcdf6894-8sd6g             0/1       Pending   0          7m
  kube-system   coredns-78fcdf6894-lgvd9             0/1       Pending   0          7m
  kube-system   etcd-devops-101                      1/1       Running   0          6m
  kube-system   kube-apiserver-devops-101            1/1       Running   0          6m
  kube-system   kube-controller-manager-devops-101   1/1       Running   0          6m
  kube-system   kube-proxy-bhmj8                     1/1       Running   0          7m
  kube-system   kube-scheduler-devops-101            1/1       Running   0          6m

As you can see, the node is not Ready yet and the two DNS pods are not running normally either; the network add-on still needs to be installed.

3. Network Configuration on the Master Node

I chose Flannel as the network solution here.

kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Adjust the system setting and create the flannel network.

  [root@devops-101 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
  net.bridge.bridge-nf-call-iptables = 1
  [root@devops-101 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
  clusterrole.rbac.authorization.k8s.io/flannel created
  clusterrolebinding.rbac.authorization.k8s.io/flannel created
  serviceaccount/flannel created
  configmap/kube-flannel-cfg created
  daemonset.extensions/kube-flannel-ds created

By default flannel uses the host's first network card. If you have more than one NIC, you need to point flannel at the right one explicitly by modifying the following part of kube-flannel.yml:

  containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.10.0-amd64
    command:
    - /opt/bin/flanneld
    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=enp0s3   # point flannel at the internal NIC
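Instead of editing the manifest by hand, the extra argument can be inserted with sed. A sketch, demonstrated on a trimmed sample of the args list so the edit can be checked safely; run the same sed against your downloaded kube-flannel.yml, with enp0s3 replaced by your internal NIC:

```shell
# Insert "- --iface=enp0s3" right after the --kube-subnet-mgr argument,
# reusing the matched line's indentation. Shown on a sample file here.
f=$(mktemp)
cat > "$f" <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
sed -i 's/^\( *\)- --kube-subnet-mgr$/&\n\1- --iface=enp0s3/' "$f"
cat "$f"
rm -f "$f"
```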

The master does not become Ready immediately after this succeeds; wait a few minutes and everything should report as healthy.

  [root@devops-101 ~]# kubectl get pods --all-namespaces
  NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
  kube-system   coredns-78fcdf6894-8sd6g             1/1       Running   0          14m
  kube-system   coredns-78fcdf6894-lgvd9             1/1       Running   0          14m
  kube-system   etcd-devops-101                      1/1       Running   0          13m
  kube-system   kube-apiserver-devops-101            1/1       Running   0          13m
  kube-system   kube-controller-manager-devops-101   1/1       Running   0          13m
  kube-system   kube-flannel-ds-6zljr                1/1       Running   0          48s
  kube-system   kube-proxy-bhmj8                     1/1       Running   0          14m
  kube-system   kube-scheduler-devops-101            1/1       Running   0          13m
  [root@devops-101 ~]# kubectl get nodes
  NAME         STATUS    ROLES     AGE       VERSION
  devops-101   Ready     master    14m       v1.11.1

4. Join Worker Nodes

Before a node can join the cluster, complete the preparation steps from sections 0 and 1 on it, then pull the required images.

  $ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0
  $ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1
  $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
  $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
  $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1

Finally, join the cluster using the command printed by the master. Note that the bootstrap token from kubeadm init is only valid for 24 hours; if it has expired, generate a fresh join command on the master with kubeadm token create --print-join-command.

  $ kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05
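If the CA cert hash from the original kubeadm init output has been lost, it can be recomputed from the master's CA certificate using the openssl pipeline from the kubeadm documentation. It is demonstrated here against a throwaway self-signed certificate so the pipeline itself can be tried anywhere; on the master, point it at /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the --discovery-token-ca-cert-hash value: hash of the CA's
# public key in DER form. A throwaway cert stands in for the real CA here.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
    -days 1 -subj "/CN=demo-ca" 2>/dev/null
openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```

Prefix the printed value with sha256: when passing it to kubeadm join.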

Starting a node also takes a little while; check the status on the master afterwards.

  [root@devops-101 ~]# kubectl get nodes
  NAME         STATUS    ROLES     AGE       VERSION
  devops-101   Ready     master    1h        v1.11.1
  devops-102   Ready     <none>    11m       v1.11.1

I have collected the commands used during the installation into a few scripts on my GitHub, which you are welcome to download and use.

X. Pitfalls

pause:3.1

During the installation I found that kubeadm looks for the image pause:3.1, so it needs to be retagged.

  $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1

Clocks out of sync between the two servers.

Error message:

  [discovery] Failed to request cluster info, will try again: [Get https://192.168.0.101:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]

The fix is to synchronize both servers against a time server.

  $ ntpdate ntp1.aliyun.com

