Kubernetes Cluster Setup
| Host IP | Hostname | Cluster roles |
| --- | --- | --- |
| 192.168.1.200 | master | deploy, etcd1, lb1, master1 |
| 192.168.1.201 | master2 | lb2, master2 |
| 192.168.1.202 | node1 | etcd2, node1 |
| 192.168.1.203 | node2 | etcd3, node2 |
| 192.168.1.250 | vip | apiserver VIP (ex-lb) |
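The host plan in the table can be kept in one place and reused by the later SSH and inventory steps. A minimal sketch — the `cluster_plan` and `node_ips` helper names are my own, not part of kubeasz:

```shell
# Hypothetical helpers: hold the host plan as "ip hostname" records
# so later loops (ssh-copy-id, inventory generation) can reuse it.
cluster_plan() {
  cat <<'EOF'
192.168.1.200 master
192.168.1.201 master2
192.168.1.202 node1
192.168.1.203 node2
EOF
}

# All node IPs, one per line.
node_ips() { cluster_plan | awk '{print $1}'; }

node_ips
```

Appending the `cluster_plan` output to /etc/hosts on every machine also gives consistent hostname resolution across the cluster.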
1: On every node, install the EPEL repository and Python (required by Ansible)
yum install -y epel-release
yum install -y python
2: Flush firewall rules and put SELinux into permissive mode
iptables -F
setenforce 0
3: Generate an SSH key pair on the deploy node
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cfoSPSgeEkAkgY08UIVWK2t2eNJIrKph5wkRkZX7AKs root@master
The key's randomart image is:
+---[RSA 2048]----+
|BOB=+ |
|oB=o . |
| oB + . . |
| +.O . * |
|o.B B o S o |
|Eo.+ + o o . |
|oo . . . . |
|o.+ . . |
|. o |
+----[SHA256]-----+
4: Copy the public key to every node
[root@master ~]# for ip in 200 201 202 203; do ssh-copy-id 192.168.1.$ip; done
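The key-distribution loop can first be run as a dry run that prints each command instead of executing it — useful for checking the IP list before touching any machine. A sketch (drop the `echo` to push the keys for real):

```shell
# Dry-run sketch of the key-distribution loop: print each ssh-copy-id
# command instead of running it; remove 'echo' to execute for real.
distribute_keys() {
  for ip in 200 201 202 203; do
    echo ssh-copy-id "192.168.1.$ip"
  done
}

distribute_keys
```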
5: Verify passwordless login to each node
[root@master ~]# ssh 192.168.1.200
Last login: Wed Dec :: from 192.168.1.2
[root@master ~]# exit
logout
Connection to 192.168.1.200 closed.
[root@master ~]# ssh 192.168.1.201
Last login: Wed Dec :: from 192.168.1.2
[root@master2 ~]# exit
logout
Connection to 192.168.1.201 closed.
[root@master ~]# ssh 192.168.1.202
Last login: Wed Dec :: from 192.168.1.200
[root@node1 ~]# exit
logout
Connection to 192.168.1.202 closed.
[root@master ~]# ssh 192.168.1.203
Last login: Wed Dec :: from 192.168.1.2
[root@node2 ~]# exit
logout
Connection to 192.168.1.203 closed.
6: Download the kubeasz deployment files with easzup (-D downloads the binaries and images)
[root@master ~]# chmod +x easzup
[root@master ~]# ./easzup -D
[root@master ~]# ls /etc/ansible/
01.prepare.yml 03.docker.yml 06.network.yml 22.upgrade.yml 90.setup.yml bin down pics tools
02.etcd.yml 04.kube-master.yml 07.cluster-addon.yml 23.backup.yml 99.clean.yml dockerfiles example README.md
03.containerd.yml 05.kube-node.yml 11.harbor.yml 24.restore.yml ansible.cfg docs manifests roles
7: Configure the cluster parameters in the hosts inventory
[root@master ~ ]# cd /etc/ansible
[root@master ansible]# cp example/hosts.multi-node hosts
[root@master ansible]# vim hosts
[etcd]          ## etcd node IPs
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master]   ## master node IPs
192.168.1.200
192.168.1.201

[kube-node]     ## worker node IPs
192.168.1.202
192.168.1.203

[ex-lb]         ## load-balancer node IPs and the apiserver VIP
192.168.1.200 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=
192.168.1.201 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=
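The inventory can also be generated non-interactively, which helps when rebuilding the cluster. A sketch using a here-document — the output file name `hosts.generated` and the port value 8443 (kubeasz's usual EX_APISERVER_PORT default) are assumptions, since the transcript above does not show the port:

```shell
# Sketch: write the kubeasz inventory from a here-doc instead of vim.
# 'hosts.generated' and port 8443 are assumed values, not from the transcript.
cat > hosts.generated <<'EOF'
[etcd]
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master]
192.168.1.200
192.168.1.201

[kube-node]
192.168.1.202
192.168.1.203

[ex-lb]
192.168.1.200 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
192.168.1.201 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
EOF
```

Review the file, then copy it to /etc/ansible/hosts.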
[root@master ansible]# ansible all -m ping
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user
configurable on deprecation. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.1.201 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.202 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.203 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Part 2: Deploying the cluster
[On the deploy node] Manual step-by-step installation
1: Install the CA certificates and prepare the environment
[root@master ansible]# ansible-playbook 01.prepare.yml
2: Install etcd (ansible-playbook 02.etcd.yml), then verify endpoint health
[root@master ansible]# for ip in 200 202 203; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.1.200:2379 is healthy: successfully committed proposal: took = 5.658163ms
https://192.168.1.202:2379 is healthy: successfully committed proposal: took = 6.384588ms
https://192.168.1.203:2379 is healthy: successfully committed proposal: took = 7.386942ms
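The etcd loop's output can also be checked mechanically, failing unless every endpoint reports healthy. A sketch — the `etcd_all_healthy` helper is my own, not part of etcdctl:

```shell
# Hypothetical check: count "is healthy" lines on stdin and compare
# against the expected number of etcd endpoints.
etcd_all_healthy() {
  expected="$1"
  healthy="$(grep -c 'is healthy' || true)"
  [ "$healthy" -eq "$expected" ]
}

# Example with the three healthy endpoints shown above:
printf '%s\n' \
  'https://192.168.1.200:2379 is healthy: successfully committed proposal' \
  'https://192.168.1.202:2379 is healthy: successfully committed proposal' \
  'https://192.168.1.203:2379 is healthy: successfully committed proposal' \
  | etcd_all_healthy 3 && echo "etcd cluster healthy"
```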
3: Install Docker
[root@master ansible]# ansible-playbook 03.docker.yml
4: Install the master components (ansible-playbook 04.kube-master.yml) and check their status
[root@master ansible]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
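The `kubectl get componentstatus` output lends itself to the same treatment: fail if any row's STATUS column is not Healthy. A sketch with a hypothetical helper name:

```shell
# Hypothetical check: read 'kubectl get componentstatus' output on stdin,
# skip the header, and fail if any STATUS column value is not "Healthy".
cs_all_healthy() {
  ! tail -n +2 | awk '{print $2}' | grep -qv '^Healthy$'
}

# Example with a trimmed copy of the table above:
printf 'NAME STATUS MESSAGE\nscheduler Healthy ok\netcd-0 Healthy ok\n' \
  | cs_all_healthy && echo "all components healthy"
```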
5: Install the worker (node) components
[root@master ansible]# ansible-playbook 05.kube-node.yml
[root@master ansible]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.202 Ready node 12s v1.15.0
192.168.1.203 Ready node 12s v1.15.0
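A node-count check is handy in scripts that wait for the cluster to settle. A sketch that counts rows whose STATUS begins with Ready (masters report `Ready,SchedulingDisabled`); the `ready_nodes` helper name is mine:

```shell
# Hypothetical helper: count Ready nodes in 'kubectl get nodes' output.
# Matches both "Ready" and "Ready,SchedulingDisabled"; header is skipped.
ready_nodes() {
  tail -n +2 | awk '$2 ~ /^Ready/ {c++} END {print c+0}'
}

# Example with a trimmed copy of the table above:
printf '%s\n' \
  'NAME STATUS ROLES AGE VERSION' \
  '192.168.1.200 Ready,SchedulingDisabled master 4m45s v1.15.0' \
  '192.168.1.202 Ready node 12s v1.15.0' \
  | ready_nodes
```

Note that `NotReady` does not match the `^Ready` pattern, so unhealthy nodes are excluded.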
[root@master ansible]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-7bk5w 1/1 Running 0 61s
kube-flannel-ds-amd64-blcxx 1/1 Running 0 61s
kube-flannel-ds-amd64-c4sfx 1/1 Running 0 61s
kube-flannel-ds-amd64-f8pnz 1/1 Running 0 61s
[root@master ansible]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.68.191.0 <none> /TCP 13m
kube-dns ClusterIP 10.68.0.2 <none> /UDP,/TCP,/TCP 15m
kubernetes-dashboard NodePort 10.68.115.45 <none> :/TCP 13m
metrics-server ClusterIP 10.68.116.163 <none> /TCP 15m
traefik-ingress-service NodePort 10.68.106.241 <none> :/TCP,:/TCP 12m
[Automatic installation]
Run all of the manual installation steps above in one go:
[root@master ansible]# ansible-playbook 90.setup.yml
[root@master ansible]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.1.200 58m % 960Mi %
192.168.1.201 34m % 1018Mi %
192.168.1.202 76m % 549Mi %
192.168.1.203 89m % 568Mi %
[root@master ansible]# kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-797455887b-9nscp 5m 22Mi
kube-system coredns-797455887b-k92wv 5m 19Mi
kube-system heapster-5f848f54bc-vvwzx 1m 11Mi
kube-system kube-flannel-ds-amd64-7bk5w 3m 20Mi
kube-system kube-flannel-ds-amd64-blcxx 2m 19Mi
kube-system kube-flannel-ds-amd64-c4sfx 2m 18Mi
kube-system kube-flannel-ds-amd64-f8pnz 2m 10Mi
kube-system kubernetes-dashboard-5c7687cf8-hnbdp 1m 22Mi
kube-system metrics-server-85c7b8c8c4-6q4vj 1m 16Mi
kube-system traefik-ingress-controller-766dbfdddd-98trv 4m 17Mi
View the cluster information
[root@master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.200:6443
CoreDNS is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Test cluster DNS
① Create an nginx deployment and service
[root@master ansible]# kubectl run nginx --image=nginx --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/nginx created
deployment.apps/nginx created
② Resolve the service name from a busybox pod
[root@master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx.default.svc.cluster.local
Server:    10.68.0.2
Address:   10.68.0.2:53

Name:      nginx.default.svc.cluster.local
Address:   10.68.243.55
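The BusyBox nslookup output can be parsed to recover the service's ClusterIP, e.g. for a smoke-test script. A sketch assuming the output format shown above; `svc_address` is a made-up helper name:

```shell
# Hypothetical helper: print the Address that follows the "Name:" line
# in BusyBox nslookup output, i.e. the resolved service ClusterIP.
svc_address() {
  awk '/^Name:/ {found=1; next} found && /^Address/ {print $2; exit}'
}

# Example with the lookup result shown above:
printf 'Server:\t10.68.0.2\nAddress:\t10.68.0.2:53\n\nName:\tnginx.default.svc.cluster.local\nAddress:\t10.68.243.55\n' \
  | svc_address
```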
Part 3: Adding a worker node (IP: 192.168.1.204)
[On the deploy node]
1: Copy the public key to the new node
[root@master ansible]# ssh-copy-id 192.168.1.204
2: Edit the hosts inventory and add the new node's IP
[root@master ansible]# vim hosts
[kube-node]
192.168.1.202
192.168.1.203
192.168.1.204
[root@master ansible]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.202 Ready node 9h v1.15.0
192.168.1.203 Ready node 9h v1.15.0
192.168.1.204 Ready node 2m11s v1.15.0
[root@master ansible]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-797455887b-9nscp / Running 31h 172.20.3.2 192.168.1.203 <none> <none>
coredns-797455887b-k92wv / Running 31h 172.20.2.2 192.168.1.202 <none> <none>
heapster-5f848f54bc-vvwzx / Running 31h 172.20.2.4 192.168.1.202 <none> <none>
kube-flannel-ds-amd64-7bk5w / Running 31h 192.168.1.202 192.168.1.202 <none> <none>
kube-flannel-ds-amd64-blcxx / Running 31h 192.168.1.200 192.168.1.200 <none> <none>
kube-flannel-ds-amd64-c4sfx / Running 31h 192.168.1.203 192.168.1.203 <none> <none>
kube-flannel-ds-amd64-f8pnz / Running 31h 192.168.1.201 192.168.1.201 <none> <none>
kube-flannel-ds-amd64-vdd7n / Running 21h 192.168.1.204 192.168.1.204 <none> <none>
kubernetes-dashboard-5c7687cf8-hnbdp / Running 31h 172.20.3.3 192.168.1.203 <none> <none>
metrics-server-85c7b8c8c4-6q4vj / Running 31h 172.20.2.3 192.168.1.202 <none> <none>
traefik-ingress-controller-766dbfdddd-98trv / Running 31h 172.20.3.4 192.168.1.203 <none> <none>