With kubeadm, a standalone Kubernetes cluster can be brought up with just a few commands, making it a quick way to get hands-on with k8s.
In the kubeadm approach, Docker and the kubelet service are installed manually: Docker is the container engine, and kubelet is the core component that starts Pods. Once every node has kubelet and Docker installed, the environment for running containers and Pods is ready. On top of that, the kubeadm tool automatically configures and starts the kubelet service, then runs all of the Master components, plus the remaining kube-proxy component on the Nodes, as Pods hosted on k8s itself.

Server Planning

Three machines: one master and two Nodes:

  • k8s-master: 10.3.1.20
  • k8s-node01: 10.3.1.21
  • k8s-node02: 10.3.1.25
  • OS: Ubuntu 16.04
  • Docker: 17.03.2-ce

Pre-installation Preparation

1. Passwordless SSH login from the master node to each Node.
2. Time synchronization across all nodes.
3. Swap must be turned off on every node (swapoff -a); otherwise kubelet fails to start.
4. Add every node's hostname and IP to /etc/hosts for name resolution (a minimal sketch of these steps follows below).
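A minimal sketch of these preparation steps, assuming the hostnames and IPs from the plan above (adjust to your environment; ntpdate is only one of several time-sync options):

# 1. master -> node passwordless SSH (run on the master)
ssh-keygen -t rsa
ssh-copy-id root@10.3.1.21
ssh-copy-id root@10.3.1.25
# 2. one-shot time sync on Ubuntu 16.04
apt-get install -y ntpdate && ntpdate ntp.ubuntu.com
# 3. turn swap off now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# 4. name resolution, on every node
cat >>/etc/hosts <<EOF
10.3.1.20 k8s-master
10.3.1.21 k8s-node01
10.3.1.25 k8s-node02
EOF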

Installing Docker

Install the Docker daemon on all k8s nodes:

apt-get update
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
apt-get update
apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial
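A quick sanity check after the install; kubeadm detects Docker's cgroup driver and configures kubelet to match, so this is informational:

docker version --format '{{.Server.Version}}'      # expect 17.03.2-ce
docker info 2>/dev/null | grep -i 'cgroup driver'  # kubelet must use the same driver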

After installing Docker, set the default FORWARD policy to ACCEPT:

#Docker sets the default FORWARD policy to DROP
iptables -P FORWARD ACCEPT
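Note that iptables -P does not survive a reboot. One way to persist the policy on Ubuntu 16.04 is the iptables-persistent package (a sketch; other mechanisms work too):

apt-get install -y iptables-persistent
netfilter-persistent save   # saves the current rules so they are restored at boot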

Installing the kubeadm Tool

  • kubeadm needs to be installed on all nodes:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubeadm #this automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat
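This installs whatever version is newest in the repository. To reproduce this article's v1.12.0 setup, the packages can be pinned instead (a sketch; confirm the exact version strings with apt-cache madison first):

apt-cache madison kubeadm               # list the versions the repo offers
apt-get install -y kubeadm=1.12.0-00 kubelet=1.12.0-00 kubectl=1.12.0-00
apt-mark hold kubeadm kubelet kubectl   # keep apt from upgrading them later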

After installation, set the kubelet service to start on boot:

systemctl enable kubelet

kubelet must be enabled at boot so that the k8s cluster components come back up automatically after a system restart.
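At this point kubelet has no cluster configuration yet, so it restarts in a crash loop every few seconds while waiting for kubeadm to tell it what to do; that is expected. Its state can be inspected with:

systemctl status kubelet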

Deploying the Cluster

With the groundwork above in place, the k8s cluster can now be deployed with kubeadm init.

On k8s-master

Make sure swap is disabled before this step.

kubeadm init -h shows the help text:

root@k8s-master:~# kubeadm init -h
##lists the available init flags; two of them are used here:
--pod-network-cidr string : set a custom Pod network CIDR
--ignore-preflight-errors strings : ignore certain preflight errors

Start Initializing the Cluster

root@k8s-master:~# kubeadm init  --pod-network-cidr 192.168.0.0/16 --ignore-preflight-errors=all

The 192.168.0.0/16 CIDR matches the default Pod network of the Calico manifest applied later. The command prints output like this:

#initialize kubernetes; kubeadm installs the latest stable version by default
[init] using Kubernetes version: v1.12.0
#preflight checks whether the host is fit to run k8s; since all errors are ignored here, this passes quickly
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
#write the kubelet configuration files and start the kubelet service
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
#automatically generate the certificates used by the cluster
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.3.1.20 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
#generate kubeconfig files and write them to disk
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
#write static Pod manifests; kubelet creates each component's Pod from these files
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
#kubelet starts the Pods from the manifest directory
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.003530 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
#give the Master a label and a taint
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
#some basic cluster settings
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
#create a bootstrap token, used when nodes join
[bootstraptoken] using token: mwfr7m.57rmd56ghjyu0716
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
#the master has initialized successfully
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
#run the following three commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.3.1.20:6443 --token mwfr7m.57rmd56ghjyu0716 --discovery-token-ca-cert-hash sha256:8fbd33519b0203e9aa03cc882cb5489b5e6ad455f97581b1abf8ceb1dca8f622

#record the kubeadm join line above; it is needed when the nodes join and is a hassle to dig up later
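If the join line was not recorded, it can be regenerated on the master at any time (tokens expire after 24 hours by default, so a fresh one may be needed anyway):

kubeadm token list                            # show existing bootstrap tokens
kubeadm token create --print-join-command    # print a new "kubeadm join ..." line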

Initialization is done and the Master node is deployed. The init step needs some time to pull images; they can also be downloaded in advance with:

root@k8s-master:~# kubeadm  config images pull 

[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.12.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.12.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.12.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.12.0
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.2
root@k8s-master:~# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver            v1.12.0   ab60b017e34f   16 hours ago   194 MB
k8s.gcr.io/kube-controller-manager   v1.12.0   07e068033cf2   16 hours ago   164 MB
k8s.gcr.io/kube-scheduler            v1.12.0   5a1527e735da   16 hours ago   58.3 MB
k8s.gcr.io/kube-proxy                v1.12.0   9c3a9d3f09a0   16 hours ago   96.6 MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   7 days ago     220 MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   4 weeks ago    39.2 MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   9 months ago   742 kB
  • Run the commands from the init output:

    root@k8s-master:~# mkdir -p $HOME/.kube
    root@k8s-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    root@k8s-master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
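With the kubeconfig in place, kubectl can now talk to the cluster; a quick check:

root@k8s-master:~# kubectl cluster-info
root@k8s-master:~# kubectl get componentstatuses   # scheduler, controller-manager and etcd should report Healthy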

On the Nodes

Join each Node to the cluster with kubeadm join:

#make sure swap is disabled
#paste the kubeadm join line recorded on the master to join the cluster
root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token mwfr7m.57rmd56ghjyu0716 --discovery-token-ca-cert-hash sha256:8fbd33519b0203e9aa03cc882cb5489b5e6ad455f97581b1abf8ceb1dca8f622

This prints output like the following:

#joining also runs some preflight checks
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.3.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
[discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
[discovery] Successfully established connection with API Server "10.3.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Once the join finishes, the node is part of the cluster.

Finally, check from the master node:

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m39s   v1.12.0
k8s-node01   NotReady   <none>   4m31s   v1.12.0
k8s-node02   NotReady   <none>   97s     v1.12.0

Remove the master's taint so that Pods can also be scheduled onto the master:

root@k8s-master:~# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted
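That is convenient on a small test cluster; to restore the default behavior later, the taint can be re-added (a sketch):

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule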

Installing a CNI Plugin

The Nodes are all "NotReady" because a CNI network plugin still needs to be installed:

root@k8s-master:~#  kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap/calico-config configured
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers unchanged
root@k8s-master:~#

A few minutes later, every Node is Ready:

#all nodes are now running normally
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   1h    v1.12.0
k8s-node01   Ready    <none>   1h    v1.12.0
k8s-node02   Ready    <none>   1h    v1.12.0

At this point all components are up and running:

root@k8s-master:~# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-etcd-d7v6w                          1/1     Running   0          3m21s
calico-kube-controllers-75fb4f8996-dg7hd   1/1     Running   0          3m20s
calico-node-794nd                          2/2     Running   0          3m19s
calico-node-b852z                          2/2     Running   0          3m19s
calico-node-z7f4n                          2/2     Running   0          3m19s
coredns-576cbf47c7-7svmm                   1/1     Running   0          15h
coredns-576cbf47c7-kzbv2                   1/1     Running   0          15h
etcd-k8s-master                            1/1     Running   0          15h
kube-apiserver-k8s-master                  1/1     Running   0          15h
kube-controller-manager-k8s-master         1/1     Running   0          15h
kube-proxy-7n5z9                           1/1     Running   0          15h
kube-proxy-rwq9g                           1/1     Running   0          15h
kube-proxy-v7qnx                           1/1     Running   0          15h
kube-scheduler-k8s-master                  1/1     Running   0          15h
root@k8s-master:~#

Testing the Cluster

  • Configure kubectl command completion.
    Completion is provided by the "bash-completion" package, which Ubuntu installs by default.

    For the current shell only:
    source <(kubectl completion bash)
    Permanently:
    echo "source <(kubectl completion bash)" >> ~/.bashrc
  • Start a pod to verify that the cluster runs properly.
#run a deployment
kubectl run -h
Usage:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool]
[--overrides=inline-json] [--command] -- [COMMAND] [args...] [options]

Start an nginx deployment:

kubectl run nginx --image=nginx:1.10 --port=80
deployment.apps/nginx created
#check it
root@k8s-master:~# kubectl get pod -w -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP               NODE         NOMINATED NODE
nginx-787b58fd95-p9jwl   0/1     ContainerCreating   0          59s   <none>           k8s-node02   <none>
nginx-787b58fd95-p9jwl   1/1     Running             0          70s   192.168.58.193   k8s-node02   <none>
  • Verify that nginx responds:

    root@k8s-master:~# curl  -I 192.168.58.193
    HTTP/1.1 200 OK
    Server: nginx/1.10.3
    Date: Sat, 29 Sep 2018 02:42:06 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 31 Jan 2017 15:01:11 GMT
    Connection: keep-alive
    ETag: "5890a6b7-264"
    Accept-Ranges: bytes
  • Expose nginx on a port so it can be reached from outside the cluster:
    kubectl expose -h
    Usage:
    kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name]
    [--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]
    root@k8s-master:~# kubectl expose deployment nginx --port=801 --target-port=80 --type=NodePort --name nginx-svc
    service/nginx-svc exposed
    root@k8s-master:~#
  • Check the service:
    root@k8s-master:~# kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP         16h
    nginx-svc    NodePort    10.100.84.207   <none>        801:30864/TCP   25s
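The NodePort (30864 here) is allocated from the 30000-32767 range; it can also be read programmatically, which is handy in scripts (a sketch):

root@k8s-master:~# kubectl get svc nginx-svc -o jsonpath='{.spec.ports[0].nodePort}'
30864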

The nginx service can now be reached on port 30864 of any Node:

root@k8s-node01:~# curl 10.3.1.21:30864

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>

If some Node's port cannot be reached, set the default FORWARD policy to ACCEPT on that node:

 iptables -P FORWARD ACCEPT

This completes the kubeadm cluster setup. For more detail, refer to the official documentation.
