Deploying a Highly Available Kubernetes 1.9.1 Cluster with kubeadm + CoreDNS + kube-router (IPVS)
I have already written two posts on deploying Kubernetes, and the overall process is essentially the same, so this post focuses only on deploying CoreDNS and kube-router.
kube version: 1.9.1
docker version: 17.03.2-ce
OS version: debian stretch
As before, the cluster has three master nodes and one worker node.
1. Prepare the images; pull them yourself (you will likely need a proxy to reach gcr.io).
# docker images| grep 1.9.1
gcr.io/google_containers/kube-apiserver-amd64 v1.9.1
gcr.io/google_containers/kube-controller-manager-amd64 v1.9.1
gcr.io/google_containers/kube-scheduler-amd64 v1.9.1
gcr.io/google_containers/kube-proxy-amd64 v1.9.1
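If gcr.io is not directly reachable, a common workaround is to pull the images from a mirror repository on Docker Hub and retag them locally. The sketch below assumes the mirrorgooglecontainers mirror; substitute whichever mirror you actually use. Other images such as pause and CoreDNS may also need to be pre-pulled if automatic pulls fail.
# for c in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do docker pull mirrorgooglecontainers/${c}-amd64:v1.9.1 && docker tag mirrorgooglecontainers/${c}-amd64:v1.9.1 gcr.io/google_containers/${c}-amd64:v1.9.1; done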
2. Install the new versions of kubeadm, kubectl, and kubelet.
# aptitude install -y kubeadm kubectl kubelet
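If the Kubernetes apt repository has not been configured yet, add it first. This is a sketch assuming the standard upstream apt source; point it at a local mirror if you have one.
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# aptitude update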
3. Deploy the first master node. Prepare the kubeadm configuration file; the official documentation for this configuration is incomplete, arguably unusable as-is, so the file below took some searching and testing to get right.
# cat kubeadm-config-191.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "192.168.5.62"
etcd:
  endpoints:
  - "http://192.168.5.84:2379"
  - "http://192.168.5.85:2379"
  - "http://192.168.2.77:2379"
kubernetesVersion: "v1.9.1"
apiServerCertSANs:
- uy06-04
- uy06-05
- uy08-10
- uy08-11
- 192.168.6.16
- 192.168.6.17
- 127.0.0.1
- 192.168.5.62
- 192.168.5.63
- 192.168.5.107
- 192.168.5.108
- 30.0.0.1
- 10.96.0.1
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster
- kubernetes.default.svc.cluster.local
tokenTTL: 0s
networking:
  podSubnet: 30.0.0.0/10
apiServerExtraArgs:
  enable-swagger-ui: "true"
  insecure-bind-address: 0.0.0.0
  insecure-port: "8088"
  endpoint-reconciler-type: "lease"
controllerManagerExtraArgs:
  address: 0.0.0.0
schedulerExtraArgs:
  address: 0.0.0.0
featureGates:
  CoreDNS: true
kubeProxy:
  config:
    featureGates: "SupportIPVSProxyMode=true"
    mode: "ipvs"
Note that what is enabled here is kube-proxy's IPVS mode; what kubeadm deploys at this point is still kube-proxy, not kube-router.
If you plan to use kube-router as the network plugin, you can ignore the kube-proxy configuration entirely, since kube-proxy will be deleted later anyway. kube-router not only replaces kube-proxy as the service (svc) proxy, it also acts as the network plugin itself.
4. Run the initialization with kubeadm.
# kubeadm init --config=kubeadm-config-191.yml --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy06-04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy06-04 uy06-05 uy08-10 uy08-11 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.62 192.168.6.16 192.168.6.17 127.0.0.1 192.168.5.62 192.168.5.63 192.168.5.107 192.168.5.108 30.0.0.1 10.96.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 90.501851 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node uy06-04 as master by adding a label and a taint
[markmaster] Master uy06-04 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: b1bd11.9ecfaaad5274f9d1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token b1bd11.9ecfaaad5274f9d1 192.168.5.62:6443 --discovery-token-ca-cert-hash sha256:09438d4384c393880a5ac18e2d3d06b547dae7242061c18c03f0fbb1bad76ade
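Since everything here is run as root, the quickest way to get a working kubectl on the master itself is to point it straight at the admin kubeconfig:
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl get no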
Verify kube-proxy's mode:
# kubectl exec -it kube-proxy-hr48q -n kube-system -- sh
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jan11 ? 00:04:12 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 29012 0 0 19:24 ? 00:00:00 sh
root 29043 29012 0 19:24 ? 00:00:00 ps -ef
# cat /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: 30.0.0.0/10
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
featureGates: SupportIPVSProxyMode=true
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ipvs    <- here
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpTimeoutMilliseconds: 250ms
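For IPVS mode to actually take effect, the ip_vs kernel modules must be loadable on every node; otherwise kube-proxy falls back to iptables mode. A quick check and, if needed, a manual load (module names are the usual ones on a Debian stretch kernel):
# lsmod | grep ip_vs
# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4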
5. Allow the master node to take part in scheduling (remove the master taint).
# kubectl taint nodes --all node-role.kubernetes.io/master-
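To confirm the taint is gone (the node name below is simply this cluster's first master):
# kubectl describe node uy06-04 | grep Taints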
6. Deploy kube-router.
a. Download the YAML file.
# curl -L -O https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml
b. Change the busybox image pull policy to imagePullPolicy: IfNotPresent, because the automatic image pull kept timing out here, which prevented the pod from starting (see the snippet after step c).
c. Apply the YAML file.
# kubectl apply -f kubeadm-kuberouter-all-features.yaml
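For reference, the change from step b roughly looks like the snippet below; the initContainer name and image come from the kube-router manifest at the time of writing and may differ in newer versions.
      initContainers:
      - name: install-cni
        image: busybox
        imagePullPolicy: IfNotPresent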
7. At this point, the core components should all be running.
# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-65dcdb4cf-mlr9j 1/1 Running 0 22h
kube-system kube-apiserver-uy06-04 1/1 Running 0 22h
kube-system kube-controller-manager-uy06-04 1/1 Running 0 22h
kube-system kube-proxy-hr48q 1/1 Running 0 22h
kube-system kube-router-9lh8x 1/1 Running 0 22h
kube-system kube-scheduler-uy06-04 1/1 Running 0 22h
8. Delete kube-proxy and clean up the iptables rules it created.
# kubectl delete ds kube-proxy -n kube-system
# docker run --privileged --net=host gcr.io/google_containers/kube-proxy-amd64:v1.7.3 kube-proxy --cleanup-iptables
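The cleanup has to be run on every node that previously ran kube-proxy, not just this master. A quick sanity check afterwards: kube-proxy's NAT chains are prefixed KUBE-SVC/KUBE-SEP, so the following should print nothing once the rules are gone.
# iptables -t nat -S | grep -E 'KUBE-(SVC|SEP)'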
9. For deploying the other two master nodes, joining the worker node to the cluster through the apiserver VIP, and the rest of the configuration, please refer to the two earlier posts. A rough outline is sketched below.
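For orientation only, a very rough sketch of what bringing up an additional master involves with this setup (host names are this cluster's, api.advertiseAddress must be changed to each host's own IP in its copy of the config, and the earlier posts remain the authoritative steps):
# scp -r /etc/kubernetes/pki uy06-05:/etc/kubernetes/
# scp kubeadm-config-191.yml uy06-05:~/
Then, on uy06-05, edit api.advertiseAddress and run:
# kubeadm init --config=kubeadm-config-191.yml --ignore-preflight-errors=all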
10. This is what the cluster looks like once the deployment is complete.
# kubectl get no
NAME STATUS ROLES AGE VERSION
uy02-07 Ready <none> 1d v1.9.1
uy05-13 Ready master 2d v1.9.1
uy08-07 Ready <none> 1d v1.9.1
uy08-08 Ready <none> 1d v1.9.1
# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default frontend-66d686db4b-jkbdk 1/1 Running 0 1d
default redis-master-5fd44c4c6-gf4zm 1/1 Running 0 1d
default redis-slave-74fc6595b4-kp8sl 1/1 Running 0 1d
default redis-slave-74fc6595b4-shtx6 1/1 Running 0 1d
default snowflake-5c98868c55-8crlt 1/1 Running 0 3h
default snowflake-5c98868c55-9psss 1/1 Running 0 1d
default snowflake-5c98868c55-ccsfc 1/1 Running 0 1d
default snowflake-5c98868c55-p2tjh 1/1 Running 0 1d
kube-system coredns-65dcdb4cf-bv95f 1/1 Running 0 2d
kube-system coredns-65dcdb4cf-cv48z 1/1 Running 0 1h
kube-system coredns-65dcdb4cf-grxkw 1/1 Running 0 1d
kube-system coredns-65dcdb4cf-n5kkm 1/1 Running 0 1d
kube-system heapster-7bddb97655-5hbsp 1/1 Running 0 1d
kube-system heapster-7bddb97655-8dqgd 1/1 Running 0 1h
kube-system heapster-7bddb97655-fd4mb 1/1 Running 0 1d
kube-system heapster-7bddb97655-gznsm 1/1 Running 0 1d
kube-system kube-apiserver-uy05-13 1/1 Running 0 1d
kube-system kube-apiserver-uy08-07 1/1 Running 0 1d
kube-system kube-apiserver-uy08-08 1/1 Running 0 1d
kube-system kube-controller-manager-uy05-13 1/1 Running 0 23h
kube-system kube-controller-manager-uy08-07 1/1 Running 0 23h
kube-system kube-controller-manager-uy08-08 1/1 Running 0 23h
kube-system kube-router-57mws 1/1 Running 0 1d
kube-system kube-router-j6rks 1/1 Running 0 2d
kube-system kube-router-mfwqv 1/1 Running 0 1d
kube-system kube-router-txp8p 1/1 Running 0 1d
kube-system kube-scheduler-uy05-13 1/1 Running 0 23h
kube-system kube-scheduler-uy08-07 1/1 Running 0 23h
kube-system kube-scheduler-uy08-08 1/1 Running 1 23h
kube-system kubernetes-dashboard-79cb6d66b9-74cf4 1/1 Running 0 3h
kubernator kubernator-659cf655b6-9prx2 1/1 Running 0 1d
monitoring alertmanager-main-0 2/2 Running 0 1h
monitoring alertmanager-main-1 2/2 Running 0 1h
monitoring alertmanager-main-2 2/2 Running 0 1h
monitoring grafana-6b67b479d5-zj66c 2/2 Running 0 1h
monitoring kube-state-metrics-6f7b5c94f-v42tj 2/2 Running 0 1h
monitoring node-exporter-5b8p2 1/1 Running 0 1h
monitoring node-exporter-m85xx 1/1 Running 0 1h
monitoring node-exporter-pg2qz 1/1 Running 0 1h
monitoring node-exporter-x9lb6 1/1 Running 0 1h
monitoring prometheus-k8s-0 2/2 Running 0 1h
monitoring prometheus-k8s-1 2/2 Running 0 1h
monitoring prometheus-operator-8697c7fff9-dpn9r 1/1 Running 0 1h
# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
# kubectl cluster-info
Kubernetes master is running at https://192.168.6.15:6443
Heapster is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Additional notes:
Install ipvsadm to inspect the IPVS (LVS) rules.
# aptitude install -y ipvsadm
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.5.42:30001 rr
-> 20.0.2.17:9090 Masq 1 0 497
TCP 192.168.5.42:30211 rr
-> 20.0.2.12:80 Masq 1 0 497
TCP 192.168.5.42:30900 rr
-> 20.0.2.20:9090 Masq 1 0 250
-> 20.0.4.15:9090 Masq 1 0 250
TCP 192.168.5.42:30902 rr
-> 20.0.2.19:3000 Masq 1 0 500
TCP 192.168.5.42:30903 rr
-> 20.0.0.19:9093 Masq 1 0 166
-> 20.0.2.21:9093 Masq 1 0 166
-> 20.0.4.14:9093 Masq 1 0 166
TCP 192.168.5.42:31001 rr
-> 20.0.0.8:80 Masq 1 0 497
TCP 10.96.0.1:443 rr persistent 10800
-> 192.168.5.42:6443 Masq 1 2 0
-> 192.168.5.104:6443 Masq 1 0 0
-> 192.168.5.105:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 20.0.0.2:53 Masq 1 0 0
-> 20.0.1.2:53 Masq 1 0 0
-> 20.0.2.2:53 Masq 1 0 0
-> 20.0.4.9:53 Masq 1 0 0
TCP 10.97.245.128:6379 rr
-> 20.0.0.7:6379 Masq 1 0 0
TCP 10.98.159.23:80 rr
-> 20.0.2.17:9090 Masq 1 0 0
TCP 10.101.179.96:8080 rr
-> 20.0.4.12:8080 Masq 1 0 0
TCP 10.101.209.232:80 rr
-> 20.0.2.12:80 Masq 1 0 0
TCP 10.101.255.18:9090 rr
-> 20.0.2.20:9090 Masq 1 0 0
-> 20.0.4.15:9090 Masq 1 0 0
TCP 10.104.53.117:8080 rr
-> 20.0.4.13:8080 Masq 1 0 0
TCP 10.105.5.201:3000 rr
-> 20.0.2.19:3000 Masq 1 0 0
TCP 10.105.21.201:80 rr
-> 20.0.0.4:8082 Masq 1 0 0
-> 20.0.1.3:8082 Masq 1 0 0
-> 20.0.2.3:8082 Masq 1 0 0
-> 20.0.4.10:8082 Masq 1 0 0
TCP 10.105.113.2:6379 rr
-> 20.0.1.8:6379 Masq 1 0 0
-> 20.0.2.8:6379 Masq 1 0 0
TCP 10.105.159.162:9093 rr
-> 20.0.0.19:9093 Masq 1 0 0
-> 20.0.2.21:9093 Masq 1 0 0
-> 20.0.4.14:9093 Masq 1 0 0
TCP 10.110.48.172:80 rr
-> 20.0.0.8:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 20.0.0.2:53 Masq 1 0 14
-> 20.0.1.2:53 Masq 1 0 15
-> 20.0.2.2:53 Masq 1 0 15
-> 20.0.4.9:53 Masq 1 0 15
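As a final check that CoreDNS and the IPVS service proxy work together, resolve a service name from inside a throwaway pod (a sketch; busybox is just a convenient image for this):
# kubectl run -it --rm busybox --image=busybox --restart=Never -- nslookup kubernetes.default
It should resolve to the kubernetes service ClusterIP, 10.96.0.1 in this cluster.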