1. Preparing the environment for deploying a Kubernetes cluster with ansible

  The cluster is deployed from binaries and automated with ansible-playbook: a one-shot install script is provided, but each component can also be installed step by step, with the main configuration parameters and caveats explained along the way. Deploying from binaries helps you understand how the components interact and what their startup flags do, which makes real-world troubleshooting much quicker.

Component versions

  kubernetes v1.9.7
  etcd v3.3.4
  docker 18.03.0-ce
  calico/node:v3.0.6
  calico/cni:v2.0.5
  calico/kube-controllers:v2.0.4
  centos 7.3+
 
Cluster layout and basic parameters:
  1. A highly available cluster needs the following nodes:
    deploy node ×1: the node that runs the ansible playbooks
    etcd nodes ×3: note that an etcd cluster must have an odd number of members (1, 3, 5, 7, ...)
    master node ×1: runs the main cluster components
    node nodes ×3: the nodes where workloads actually run; add machines and raise specs as needed
  2. Prepare ansible on the deploy node, running it from a docker container:
    1° Download the internal yum repository setup script and run it:
wget http://download2.yunwei.edu/shell/yum-repo.sh
bash yum-repo.sh
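
If the script succeeded, yum should now see the internal repositories; a quick check (a sketch, the exact repo names depend on what yum-repo.sh configures):

yum repolist    # the repositories configured by yum-repo.sh should be listed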

    2° Download and install docker:

wget http://download2.yunwei.edu/shell/docker.tar.gz

After extracting the archive, switch into the docker directory and run the docker.sh script (its contents are shown below). Then check whether the docker service started and which images are present:
docker images
#!/bin/bash

# Install the docker binaries into /usr/local/bin
tar zxvf docker-app.tar.gz -C /usr/local/bin/

# Trust the private registry reg.yunwei.edu and make its name resolvable
mkdir -p /etc/docker
mkdir -p /etc/docker/certs.d/reg.yunwei.edu
cp ca.crt /etc/docker/certs.d/reg.yunwei.edu/
echo "172.16.254.20 reg.yunwei.edu" >> /etc/hosts

# Daemon configuration: registry mirror, download concurrency, log rotation
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["http://cc83932c.m.daocloud.io"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# systemd unit for the docker daemon
cat <<EOF > /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/usr/local/bin/dockerd
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP \$MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload && systemctl enable docker.service && systemctl start docker.service

docker.sh   
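If docker.sh ran cleanly, the daemon is registered with systemd and enabled at boot. A quick sanity check (a sketch; exact output varies by version):

systemctl status docker --no-pager    # should report active (running)
docker info                           # confirms the client can reach the daemon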

    3° Download and run the docker image of ansible:

docker pull reg.yunwei.edu/learn/ansible:alpine3
docker run -itd -v /etc/ansible:/etc/ansible -v /etc/kubernetes/:/etc/kubernetes/ -v /root/.kube:/root/.kube -v /usr/local/bin/:/usr/local/bin/ 1acb4fd5df5b /bin/sh
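
Note that 1acb4fd5df5b above is the image ID in this particular environment; substitute your own. To locate the running container afterwards (a sketch; <container-id> is a placeholder):

docker ps                                 # note the CONTAINER ID of the ansible container
docker exec -it <container-id> /bin/sh    # used again in part 2 below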

    4° Configure hostname resolution between all of the machines
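
For example, append mappings like the following to /etc/hosts on every machine (the cicd/node1/node2/node3 hostnames are assumptions matching the prompts used later in this article; adjust to your environment):

cat >> /etc/hosts <<EOF
192.168.42.30  cicd
192.168.42.121 node1
192.168.42.122 node2
192.168.42.172 node3
EOF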

    5° Enter the ansible container and set up passwordless SSH login to all nodes
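
Inside the container this is typically done with ssh-keygen plus ssh-copy-id to every node, deploy node included (a sketch, assuming root SSH login is permitted):

ssh-keygen -t rsa    # accept the defaults, empty passphrase
for ip in 192.168.42.30 192.168.42.121 192.168.42.122 192.168.42.172; do
  ssh-copy-id root@$ip
done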

Test that every node responds:
ansible all -m ping

  3. Upload the ansible working files to the deploy node:

 wget http://download2.yunwei.edu/shell/kubernetes.tar.gz
After extraction:
bash       harbor-offline-installer-v1.4.0.tgz  k8s197.tar.gz           scope.yaml
ca.tar.gz  image.tar.gz                         kube-yunwei-197.tar.gz  sock-shop 
    

Extract kube-yunwei-197.tar.gz, move everything under kube-yunwei-197 into the ansible directory, then delete the empty kube-yunwei-197 directory:
[root@localhost kubernetes]# tar xf kube-yunwei-197.tar.gz
[root@localhost kubernetes]# ls
bash       harbor-offline-installer-v1.4.0.tgz  k8s197.tar.gz    kube-yunwei-197.tar.gz  sock-shop
ca.tar.gz  image.tar.gz                         kube-yunwei-197  scope.yaml
[root@localhost kubernetes]# cd kube-yunwei-197
[root@localhost kube-yunwei-197]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      manifests  tools
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  roles
[root@localhost kube-yunwei-197]# mv * /etc/ansible/
Extract k8s197.tar.gz and move everything under the resulting bin directory into /etc/ansible/bin, then delete the empty bin directory under kubernetes:
[root@localhost kubernetes]# tar xf k8s197.tar.gz
[root@localhost kubernetes]# ls
bash  ca.tar.gz                            image         k8s197.tar.gz    kube-yunwei-197.tar.gz  sock-shop
bin   harbor-offline-installer-v1.4.0.tgz  image.tar.gz  kube-yunwei-197  scope.yaml
[root@localhost kubernetes]# cd bin
[root@localhost bin]# ls
bridge docker dockerd etcdctl kubectl portmap
calicoctl docker-compose docker-init flannel kubelet
cfssl docker-containerd docker-proxy host-local kube-proxy
cfssl-certinfo docker-containerd-ctr docker-runc kube-apiserver kube-scheduler
cfssljson docker-containerd-shim etcd kube-controller-manager loopback
[root@localhost bin]# mv * /etc/ansible/bin/
[root@localhost bin]# ls
[root@localhost bin]# cd /etc/ansible/bin/
[root@localhost bin]# ls
bridge docker dockerd etcdctl kubectl portmap
calicoctl docker-compose docker-init flannel kubelet VERSION.md
cfssl docker-containerd docker-proxy host-local kube-proxy
cfssl-certinfo docker-containerd-ctr docker-runc kube-apiserver kube-scheduler
cfssljson docker-containerd-shim etcd kube-controller-manager loopback
Switch to the example directory, copy hosts.s-master.example into the ansible directory, and rename it hosts:
[root@localhost kubernetes]# cd /etc/ansible/
[root@localhost ansible]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      manifests  tools
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  roles
[root@localhost ansible]# cd example/
[root@localhost example]# ls
hosts.s-master.example
[root@localhost example]# cp hosts.s-master.example ../hosts
[root@localhost example]# cd ..
[root@localhost ansible]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      hosts      roles
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  manifests  tools
[root@localhost ansible]# vim hosts
# Generate the CA certificates and the kubedns.yaml config file on the deploy node
- hosts: deploy
  roles:
  - deploy

# Common configuration tasks for the cluster nodes
- hosts:
  - kube-master
  - kube-node
  - deploy
  - etcd
  - lb
  roles:
  - prepare

# [optional] load-balancer configuration for multi-master deployments
- hosts: lb
  roles:
  - lb

01.prepare.yml

- hosts: etcd
  roles:
  - etcd

02.etcd.yml

- hosts:
  - kube-master
  - kube-node
  roles:
  - docker

03.docker.yml

- hosts: kube-master
  roles:
  - kube-master
  - kube-node
  # keep business pods off the master node
  tasks:
  - name: cordon the master so business pods are not scheduled onto it
    shell: "{{ bin_dir }}/kubectl cordon {{ NODE_IP }} "
    when: DEPLOY_MODE != "allinone"
    ignore_errors: true

04.kube-master.yml
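
Because of the cordon task above, the master will show SchedulingDisabled once deployment finishes. If you later decide to allow workloads on it, the cordon can be reversed (a sketch):

kubectl get nodes                  # master reports Ready,SchedulingDisabled
kubectl uncordon 192.168.42.121    # re-enable scheduling on the master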

- hosts: kube-node
  roles:
  - kube-node

05.kube-node.yml

# Deploy the cluster network plugin -- exactly one may be installed
- hosts:
  - kube-master
  - kube-node
  roles:
  - { role: calico, when: "CLUSTER_NETWORK == 'calico'" }
  - { role: flannel, when: "CLUSTER_NETWORK == 'flannel'" }

06.network.yml

Edit the hosts file:
# Deploy node: the node that runs the ansible playbooks
[deploy]
192.168.42.30

# For the etcd cluster, provide the NODE_NAME and NODE_IP variables below.
# Note: an etcd cluster must have an odd number of members (1, 3, 5, 7, ...)
[etcd]
192.168.42.121 NODE_NAME=etcd1 NODE_IP="192.168.42.121"
192.168.42.122 NODE_NAME=etcd2 NODE_IP="192.168.42.122"
192.168.42.172 NODE_NAME=etcd3 NODE_IP="192.168.42.172"

[kube-master]
192.168.42.121 NODE_IP="192.168.42.121"

[kube-node]
192.168.42.121 NODE_IP="192.168.42.121"
192.168.42.122 NODE_IP="192.168.42.122"
192.168.42.172 NODE_IP="192.168.42.172"

[all:vars]
# --------- main cluster parameters ---------------
# Deploy mode: allinone, single-master, multi-master
DEPLOY_MODE=single-master

# Cluster MASTER IP
MASTER_IP="192.168.42.121"

# Cluster APISERVER
KUBE_APISERVER="https://192.168.42.121:6443"

# Token for TLS bootstrapping; generate with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="d18f94b5fa585c7123f56803d925d2e7"

# Cluster network plugin: calico and flannel are currently supported
CLUSTER_NETWORK="calico"

# Selected calico settings; the full set can be customized in roles/calico/templates/calico.yaml.j2
# Setting CALICO_IPV4POOL_IPIP="off" improves network performance; see the
# 安装calico网络组件.md doc for the applicable constraints
CALICO_IPV4POOL_IPIP="always"

# Host IP used by calico-node; BGP peering is established over this address.
# An interface can be pinned manually ("interface=eth0") or auto-detected as below
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"

# Selected flannel settings; see roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"

# Service CIDR: unroutable before deployment; afterwards reachable inside the cluster as IP:Port
SERVICE_CIDR="10.68.0.0/16"

# Pod CIDR (Cluster CIDR): unroutable before deployment, **routable afterwards**
CLUSTER_CIDR="172.20.0.0/16"

# NodePort range
NODE_PORT_RANGE="20000-40000"

# kubernetes service IP (pre-allocated; normally the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.68.0.2"

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local."

# etcd peer addresses and ports -- **set according to your actual etcd members**
ETCD_NODES="etcd1=https://192.168.42.121:2380,etcd2=https://192.168.42.122:2380,etcd3=https://192.168.42.172:2380"

# etcd client endpoints -- **set according to your actual etcd members**
ETCD_ENDPOINTS="https://192.168.42.121:2379,https://192.168.42.122:2379,https://192.168.42.172:2379"

# Username and password for cluster basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="admin"

# --------- additional parameters --------------------
# Default binary directory
bin_dir="/usr/local/bin"

# Certificate directory
ca_dir="/etc/kubernetes/ssl"

# Deployment directory, i.e. the ansible working directory
base_dir="/etc/ansible"
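
Before running any playbooks it is worth confirming that ansible parses the inventory as intended (a sketch):

ansible all --list-hosts                          # every host in the file, by group
ansible-playbook 01.prepare.yml --syntax-check    # validates the playbook before a real run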

2. The Kubernetes deployment process

Enter the container, check that the files are in place under /etc/ansible, and verify that the other nodes can be pinged:
[root@localhost ansible]# docker exec -it 0918862b8730 /bin/sh
/ # cd /etc/ansible/
/etc/ansible # ls
01.prepare.yml      06.network.yml   hosts
02.etcd.yml         99.clean.yml     manifests
03.docker.yml       ansible.cfg      roles
04.kube-master.yml  bin              tools
05.kube-node.yml    example
/etc/ansible # ansible all -m ping
192.168.42.122 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.42.172 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.42.121 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.42.30 | SUCCESS => {
"changed": false,
"ping": "pong"
}
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
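
Equivalently, steps 01 through 05 can be run in a single pass; the numeric prefixes make the shell glob expand in the correct order (a sketch):

for p in 0[1-5].*.yml; do
  ansible-playbook "$p" || break    # stop at the first failing step
done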
Before running 06.network.yml, make sure the other nodes already have the required images, so extract image.tar.gz:
[root@cicd kubernetes]# ls
bash  ca.tar.gz                            image         k8s197.tar.gz      kube-yunwei-197         scope.yaml
bin   harbor-offline-installer-v1.4.0.tgz  image.tar.gz  kubernetes.tar.gz  kube-yunwei-197.tar.gz  sock-shop
[root@cicd kubernetes]# cd image
[root@cicd image]# ls
calico-cni-v2.0.5.tar coredns-1.0..tar.gz influxdb-v1.3.3.tar
calico-kube-controllers-v2.0.4.tar grafana-v4.4.3.tar kubernetes-dashboard-amd64-v1.8.3.tar.gz
calico-node-v3.0.6.tar heapster-v1.5.1.tar pause-amd64-3.1.tar
[root@cicd image]# scp ./* node1:/root/image
[root@cicd image]# scp ./* node2:/root/image
[root@cicd image]# scp ./* node3:/root/image
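
scp fails if the target directory is missing, so create /root/image on the nodes first; a loop also keeps the three copies in one line (a sketch, assuming node1-node3 resolve as configured earlier):

ansible kube-node -m file -a "path=/root/image state=directory"
for n in node1 node2 node3; do scp ./* $n:/root/image; done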
On each node machine:
[root@node1 image]# for i in `ls`; do docker load -i $i; done
On the deploy node:
ansible-playbook 06.network.yml
CoreDNS is a DNS server that uses the SkyDNS libraries to serve DNS requests for Kubernetes pods and services.
/etc/ansible # ls
01.prepare.yml  03.docker.yml       06.network.yml  bin      manifests
02.etcd.retry   04.kube-master.yml  99.clean.yml    example  roles
02.etcd.yml     05.kube-node.yml    ansible.cfg     hosts    tools
/etc/ansible # cd manifests/
/etc/ansible/manifests # ls
coredns dashboard efk heapster ingress kubedns
/etc/ansible/manifests # cd coredns/
/etc/ansible/manifests/coredns # ls
coredns.yaml
/etc/ansible/manifests/coredns # kubectl create -f .
serviceaccount "coredns" created
clusterrole "system:coredns" created
clusterrolebinding "system:coredns" created
configmap "coredns" created
deployment "coredns" created
service "coredns" created /etc/ansible/manifests # ls

/etc/ansible/manifests # ls
coredns    dashboard  efk        heapster   ingress    kubedns
/etc/ansible/manifests # cd dashboard/
/etc/ansible/manifests/dashboard # ls
1.6.3                      kubernetes-dashboard.yaml  ui-read-rbac.yaml
admin-user-sa-rbac.yaml    ui-admin-rbac.yaml
/etc/ansible/manifests/dashboard # kubectl create -f .
serviceaccount "admin-user" created
clusterrolebinding "admin-user" created
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
clusterrole "ui-admin" created
rolebinding "ui-admin-binding" created
clusterrole "ui-read" created
rolebinding "ui-read-binding" created
[root@cicd ansible]# kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
coredns-6ff7588dc6-l8q4h                1/1       Running   0          7m        172.20.0.2       192.168.42.122
coredns-6ff7588dc6-x2jq5                1/1       Running   0          7m        172.20.1.2       192.168.42.172
kube-flannel-ds-c688h                   1/1       Running   0          14m       192.168.42.172   192.168.42.172
kube-flannel-ds-d4p4j                   1/1       Running   0          14m       192.168.42.122   192.168.42.122
kube-flannel-ds-f8gp2                   1/1       Running   0          14m       192.168.42.121   192.168.42.121
kubernetes-dashboard-545b66db97-z9nr4   1/1       Running   0          1m        172.20.1.3       192.168.42.172
[root@cicd ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.42.121:6443
CoreDNS is running at https://192.168.42.121:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.42.121:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 Retrieve the dashboard login token: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret|grep admin-user|awk '{print $1}')
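
describe prints the token among many other fields; to extract only the token value, something like this also works (a sketch):

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d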
 
 
 
