Preface

To make Ingress in Kubernetes highly available while exposing only a single entry point outside the cluster, we use keepalived to eliminate the single point of failure, and deploy the ingress-controller on the edge nodes as a DaemonSet.

Edge Nodes

First, what is an edge node? An edge node is a node inside the cluster that exposes the cluster's services to the outside world: external clients call in-cluster services through it. In other words, the edge node is the endpoint through which the cluster talks to everything outside it.

Edge nodes must address two concerns:

  • High availability: there must be no single point of failure on the edge, otherwise the whole Kubernetes cluster becomes unreachable from outside.
  • A single, consistent external entry point, i.e. exactly one externally visible IP and port.

Architecture

To meet these requirements we use keepalived: the edge nodes form a VRRP group and share a virtual IP (VIP), which floats to a surviving node if the current master fails.

Once an Ingress has been created in Kubernetes, add a DNS A record whose name is the host in your Ingress and whose address is the keepalived VIP. External clients can then reach your services by domain name, and the single point of failure is eliminated.
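
By way of illustration, here is a minimal Ingress sketch that such an A record would point at — the host myapp.example.com, the Service my-app, and port 80 are placeholders, not values from this deployment:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: myapp.example.com    # DNS A record: myapp.example.com -> 10.40.0.109 (the keepalived VIP)
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app    # hypothetical backend Service
              servicePort: 80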

Pick three of the Kubernetes nodes as edge nodes and install keepalived on each of them.

Install the keepalived service

yum install -y keepalived


keepalived configuration on node1

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state MASTER                 # node1 starts out as the VRRP master
    interface eth0
    virtual_router_id 88         # must match on all three nodes
    priority 100                 # highest priority in the group holds the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1    # the VIP exposed to the outside
    }
}


keepalived configuration on node2

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}


keepalived configuration on node3

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}


Enable keepalived at boot

systemctl enable keepalived.service


Start the keepalived service

systemctl start keepalived.service
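
Once keepalived is running, the VIP should be held by exactly one node. A quick check, plus a failover drill — these are standard iproute2/systemd commands, nothing specific to this setup:

# On each edge node: the current master should show the VIP as eth0:1
ip addr show eth0

# Failover drill: stop keepalived on the master and watch the VIP move
# to the next-highest-priority node (node2, priority 90)
systemctl stop keepalived.service
# ... verify on node2 with `ip addr show eth0`, then bring the master back:
systemctl start keepalived.service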


Install ingress-nginx

Label the edge nodes

kubectl label nodes 10.40.0.105 edgenode=true
kubectl label nodes 10.40.0.106 edgenode=true
kubectl label nodes 10.40.0.107 edgenode=true


Check the node labels

[root@k8s-master01 ingress]# kubectl get node --show-labels
NAME          STATUS   ROLES    AGE   VERSION   LABELS
10.40.0.105   Ready    <none>   25d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.105
10.40.0.106   Ready    <none>   25d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,env_role=dev,kubernetes.io/hostname=10.40.0.106
10.40.0.107   Ready    <none>   17d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.107

Change the original ingress YAML from a Deployment to a DaemonSet

cat ingress-daemonset.yaml

apiVersion: extensions/v1beta1
#kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  #replicas: 3
  #selector:
  #  matchLabels:
  #    app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: 'true'
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
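
Because hostNetwork: true is set, each controller pod binds ports 80 and 443 directly on its edge node's network stack — this is what lets the keepalived VIP front the controllers without any Service or NodePort in between. A quick way to confirm on an edge node (standard ss from iproute2):

ss -tlnp | grep -E ':(80|443) '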


Namespace manifest

cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

ConfigMap manifest

cat configmap.yaml 

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx


RBAC manifest

cat rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx


tcp-services ConfigMap manifest

cat tcp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
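
The ConfigMap starts out empty. Since the controller is launched with --tcp-services-configmap, you can later expose a raw TCP service through it: each data entry maps an external port to a namespace/service:port target. The service and ports below are hypothetical:

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<exposed-port>": "<namespace>/<service-name>:<service-port>"
  "9000": "default/example-tcp-service:9000"    # placeholder service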


udp-services ConfigMap manifest

cat udp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx


default-backend manifest

cat default-backend.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: default-http-backend


Deploy

kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f tcp-services-configmap.yaml
kubectl create -f udp-services-configmap.yaml
kubectl create -f ingress-daemonset.yaml


Check the ingress-controller

[root@k8s-master01 ingress]# kubectl get ds -n ingress-nginx
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-ingress-controller   3         3         3       3            3           edgenode=true   57m
[root@k8s-master01 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE
default-http-backend-86569b9d95-x4bsn   1/1     Running   0          24d   172.17.65.6   10.40.0.105   <none>
nginx-ingress-controller-5b7xg          1/1     Running   0          58m   10.40.0.105   10.40.0.105   <none>
nginx-ingress-controller-b5mxc          1/1     Running   0          58m   10.40.0.106   10.40.0.106   <none>
nginx-ingress-controller-t5n5k          1/1     Running   0          58m   10.40.0.107   10.40.0.107   <none>
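
As a final end-to-end check, hit the VIP directly. A request with no matching Host header should land on the default backend (a 404), while one that matches an Ingress rule should reach the backing service. myapp.example.com below is a placeholder for whatever host your Ingress declares:

# No matching Host -> default backend (expect a 404 from defaultbackend)
curl -i http://10.40.0.109/

# Host matching an Ingress rule -> routed to the backing service
curl -i -H 'Host: myapp.example.com' http://10.40.0.109/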
