Kubernetes Cluster Setup
IP | Hostname | Cluster roles |
192.168.1.200 | master | deploy, etcd1, lb1, master1 |
192.168.1.201 | master2 | lb2, master2 |
192.168.1.202 | node | etcd2, node1 |
192.168.1.203 | node2 | etcd3, node2 |
192.168.1.250 | - | VIP |
Prepare every host: install the EPEL repository and Python (required by ansible), flush iptables, and put SELinux into permissive mode:
yum install -y epel-release
yum install -y python
iptables -F
setenforce 0
4: Configure passwordless SSH login — generate a key on the deploy node and copy it to every host
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cfoSPSgeEkAkgY08UIVWK2t2eNJIrKph5wkRkZX7AKs root@master
The key's randomart image is:
+---[RSA 2048]----+
|BOB=+ |
|oB=o . |
| oB + . . |
| +.O . * |
|o.B B o S o |
|Eo.+ + o o . |
|oo . . . . |
|o.+ . . |
|. o |
+----[SHA256]-----+
[root@master ~]# for ip in 200 201 202 203; do ssh-copy-id 192.168.1.$ip; done
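With the key distributed, the preparation commands from the beginning of this section can be pushed to every host in one loop. A minimal sketch; the host list is assumed from the cluster table above:

```shell
# Host list assumed from the cluster table above.
HOSTS="192.168.1.200 192.168.1.201 192.168.1.202 192.168.1.203"

# The per-host preparation commands as a single string.
prep_cmds() {
  echo "yum install -y epel-release python && iptables -F && setenforce 0"
}

# Push the preparation to every host over the passwordless SSH just set up.
run_prep() {
  for ip in $HOSTS; do
    ssh "root@$ip" "$(prep_cmds)"
  done
}
```

Running `run_prep` once from the deploy node replaces logging in to each machine by hand.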
5: Verify that passwordless login works
[root@master ~]# ssh 192.168.1.200
Last login: Wed Dec :: from 192.168.1.2
[root@master ~]# exit
logout
Connection to 192.168.1.200 closed.
[root@master ~]# ssh 192.168.1.201
Last login: Wed Dec :: from 192.168.1.2
[root@master2 ~]# exit
logout
Connection to 192.168.1.201 closed.
[root@master ~]# ssh 192.168.1.202
Last login: Wed Dec :: from 192.168.1.200
[root@node1 ~]# exit
logout
Connection to 192.168.1.202 closed.
[root@master ~]# ssh 192.168.1.203
Last login: Wed Dec :: from 192.168.1.2
[root@node2 ~]# exit
logout
Connection to 192.168.1.203 closed.
6: Run the easzup script to download kubeasz and the offline installation files
[root@master ~]# chmod +x easzup
[root@master ~]# ./easzup -D
[root@master ~]# ls /etc/ansible/
01.prepare.yml 03.docker.yml 06.network.yml 22.upgrade.yml 90.setup.yml bin down pics tools
02.etcd.yml 04.kube-master.yml 07.cluster-addon.yml 23.backup.yml 99.clean.yml dockerfiles example README.md
03.containerd.yml 05.kube-node.yml 11.harbor.yml 24.restore.yml ansible.cfg docs manifests roles
7: Configure the cluster parameters in the hosts file
[root@master ~ ]# cd /etc/ansible
[root@master ansible]# cp example/hosts.multi-node hosts
[root@master ansible]# vim hosts
[etcd]  ## etcd node IPs
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master]  ## master node IPs
192.168.1.200
192.168.1.201

[kube-node]  ## worker node IPs
192.168.1.202
192.168.1.203

[ex-lb]  ## LB node IPs and the VIP
192.168.1.200 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
192.168.1.201 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
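A mistyped group header in `hosts` fails in confusing ways later, so a quick lint before running ansible is cheap insurance. A sketch; the group names are taken from the file above:

```shell
# Verify that the inventory file defines every group this walkthrough uses.
# Prints nothing when the file is complete.
check_inventory() {
  local file="$1" missing=""
  for group in etcd kube-master kube-node ex-lb; do
    grep -q "^\[$group\]" "$file" || missing="$missing $group"
  done
  [ -n "$missing" ] && echo "missing groups:$missing"
  return 0
}
```

Usage: `check_inventory /etc/ansible/hosts`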
[root@master ansible]# ansible all -m ping
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user configurable on deprecation. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.1.201 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.202 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.203 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Part 2: Deploy the cluster
[Run on the deploy node] Manual installation, step by step
1: Install the CA certificates
[root@master ansible]# ansible-playbook 01.prepare.yml
2: Install the etcd cluster
[root@master ansible]# ansible-playbook 02.etcd.yml
Check the health of every etcd member:
[root@master ansible]# for ip in 200 202 203; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.1.200:2379 is healthy: successfully committed proposal: took = 5.658163ms
https://192.168.1.202:2379 is healthy: successfully committed proposal: took = 6.384588ms
https://192.168.1.203:2379 is healthy: successfully committed proposal: took = 7.386942ms
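etcdctl also accepts a comma-separated endpoint list, so the loop above can collapse into a single call. A sketch, assuming the same three etcd IPs and the certificate paths used above:

```shell
# etcd member IP suffixes, from the inventory above.
ETCD_IPS="200 202 203"

# Build "https://ip:2379,https://ip:2379,..." from the suffix list.
etcd_endpoints() {
  local eps="" ip
  for ip in $ETCD_IPS; do
    eps="${eps:+$eps,}https://192.168.1.$ip:2379"
  done
  echo "$eps"
}

# One etcdctl call that health-checks every member at once.
check_etcd_health() {
  ETCDCTL_API=3 etcdctl --endpoints="$(etcd_endpoints)" \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health
}
```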
3: Install Docker
[root@master ansible]# ansible-playbook 03.docker.yml
4: Install the master nodes
[root@master ansible]# ansible-playbook 04.kube-master.yml
Check the control-plane components:
[root@master ansible]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
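For scripting, the componentstatus table can be reduced to just the problem entries. A sketch that reads the output of `kubectl get componentstatus --no-headers` on stdin:

```shell
# Print the name of every component whose STATUS column is not "Healthy".
unhealthy_components() {
  awk '$2 != "Healthy" {print $1}'
}
# usage: kubectl get componentstatus --no-headers | unhealthy_components
```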
5: Install the worker nodes
[root@master ansible]# ansible-playbook 05.kube-node.yml
[root@master ansible]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.202 Ready node 12s v1.15.0
192.168.1.203 Ready node 12s v1.15.0
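The node table above can also be checked mechanically, for example from a deploy script. A sketch that counts nodes whose STATUS does not start with `Ready` (so the masters' `Ready,SchedulingDisabled` still counts as ready):

```shell
# Count not-ready nodes from `kubectl get nodes --no-headers` output on stdin.
not_ready_count() {
  awk '$2 !~ /^Ready/ {n++} END {print n+0}'
}
# usage: kubectl get nodes --no-headers | not_ready_count
```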
6: Install the network plugin (flannel)
[root@master ansible]# ansible-playbook 06.network.yml
Verify that a flannel pod is running for every node:
[root@master ansible]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-7bk5w   1/1   Running   0   61s
kube-flannel-ds-amd64-blcxx   1/1   Running   0   61s
kube-flannel-ds-amd64-c4sfx   1/1   Running   0   61s
kube-flannel-ds-amd64-f8pnz   1/1   Running   0   61s
7: Install the cluster add-ons (DNS, dashboard, metrics, ingress)
[root@master ansible]# ansible-playbook 07.cluster-addon.yml
Check the add-on services:
[root@master ansible]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster                  ClusterIP   10.68.191.0     <none>   80/TCP                   13m
kube-dns                  ClusterIP   10.68.0.2       <none>   53/UDP,53/TCP,9153/TCP   15m
kubernetes-dashboard      NodePort    10.68.115.45    <none>   443:/TCP                 13m
metrics-server            ClusterIP   10.68.116.163   <none>   443/TCP                  15m
traefik-ingress-service   NodePort    10.68.106.241   <none>   80:/TCP,443:/TCP         12m
[Automatic installation]
Run every manual step above with a single playbook:
[root@master ansible]# ansible-playbook 90.setup.yml
[root@master ansible]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.1.200 58m % 960Mi %
192.168.1.201 34m % 1018Mi %
192.168.1.202 76m % 549Mi %
192.168.1.203 89m % 568Mi %
[root@master ansible]# kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-797455887b-9nscp 5m 22Mi
kube-system coredns-797455887b-k92wv 5m 19Mi
kube-system heapster-5f848f54bc-vvwzx 1m 11Mi
kube-system kube-flannel-ds-amd64-7bk5w 3m 20Mi
kube-system kube-flannel-ds-amd64-blcxx 2m 19Mi
kube-system kube-flannel-ds-amd64-c4sfx 2m 18Mi
kube-system kube-flannel-ds-amd64-f8pnz 2m 10Mi
kube-system kubernetes-dashboard-5c7687cf8-hnbdp 1m 22Mi
kube-system metrics-server-85c7b8c8c4-6q4vj 1m 16Mi
kube-system traefik-ingress-controller-766dbfdddd-98trv 4m 17Mi
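The `kubectl top pod` numbers can be aggregated with a little awk, for example to watch the total memory footprint of kube-system. A sketch assuming every value is reported in Mi, as in the table above:

```shell
# Sum the MEMORY(bytes) column (the last field) of
# `kubectl top pod --no-headers` output read from stdin.
total_pod_mem_mi() {
  awk '{gsub(/Mi$/, "", $NF); sum += $NF} END {print sum "Mi"}'
}
# usage: kubectl top pod -n kube-system --no-headers | total_pod_mem_mi
```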
Check the cluster information
[root@master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.200:6443
CoreDNS is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Test DNS resolution
① Create an nginx service
[root@master ansible]# kubectl run nginx --image=nginx --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/nginx created
deployment.apps/nginx created
② Start a busybox pod and resolve the service name
[root@master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx.default.svc.cluster.local
Server:    10.68.0.2
Address:   10.68.0.2:53

Name:      nginx.default.svc.cluster.local
Address:   10.68.243.55
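The FQDN queried above always follows the pattern `<service>.<namespace>.svc.cluster.local` under the default `cluster.local` domain. A sketch helper, plus a disposable one-shot test pod (the pod name `dns-test` is arbitrary):

```shell
# Compose a service FQDN under the default cluster domain.
svc_fqdn() {
  echo "$1.$2.svc.cluster.local"
}

# One-shot, self-deleting pod that resolves and fetches the nginx service.
dns_smoke_test() {
  kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
    sh -c "nslookup $(svc_fqdn nginx default) && wget -qO- http://nginx"
}
```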
Part 3: Add a worker node, IP: 192.168.1.204
[Run on the deploy node]
1: Copy the public key to the new node
[root@master ansible]# ssh-copy-id 192.168.1.204
2: Edit the hosts file and add the new node's IP under [kube-node]
[root@master ansible]# vim hosts
[kube-node]
192.168.1.202
192.168.1.203
192.168.1.204
3: Run the kubeasz add-node operation (in kubeasz 2.x this is `easzctl add-node 192.168.1.204`; check your release's docs for the exact command), then verify that the node has joined:
[root@master ansible]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.202 Ready node 9h v1.15.0
192.168.1.203 Ready node 9h v1.15.0
192.168.1.204 Ready node 2m11s v1.15.0
[root@master ansible]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-797455887b-9nscp                      1/1   Running   0   31h   172.20.3.2      192.168.1.203   <none>   <none>
coredns-797455887b-k92wv                      1/1   Running   0   31h   172.20.2.2      192.168.1.202   <none>   <none>
heapster-5f848f54bc-vvwzx                     1/1   Running   0   31h   172.20.2.4      192.168.1.202   <none>   <none>
kube-flannel-ds-amd64-7bk5w                   1/1   Running   0   31h   192.168.1.202   192.168.1.202   <none>   <none>
kube-flannel-ds-amd64-blcxx                   1/1   Running   0   31h   192.168.1.200   192.168.1.200   <none>   <none>
kube-flannel-ds-amd64-c4sfx                   1/1   Running   0   31h   192.168.1.203   192.168.1.203   <none>   <none>
kube-flannel-ds-amd64-f8pnz                   1/1   Running   0   31h   192.168.1.201   192.168.1.201   <none>   <none>
kube-flannel-ds-amd64-vdd7n                   1/1   Running   0   21h   192.168.1.204   192.168.1.204   <none>   <none>
kubernetes-dashboard-5c7687cf8-hnbdp          1/1   Running   0   31h   172.20.3.3      192.168.1.203   <none>   <none>
metrics-server-85c7b8c8c4-6q4vj               1/1   Running   0   31h   172.20.2.3      192.168.1.202   <none>   <none>
traefik-ingress-controller-766dbfdddd-98trv   1/1   Running   0   31h   172.20.3.4      192.168.1.203   <none>   <none>
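To see exactly which pods landed on the new node, the wide pod listing above can be filtered by its NODE column (field 7 in `kubectl get pod -n <ns> -o wide --no-headers` output). A sketch:

```shell
# Print the names of pods scheduled on a given node.
pods_on_node() {
  awk -v node="$1" '$7 == node {print $1}'
}
# usage: kubectl get pod -n kube-system -o wide --no-headers | pods_on_node 192.168.1.204
```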