Server planning

Role                      IP                                     Components
k8s-master1               192.168.31.63                          kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2               192.168.31.64                          kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1                 192.168.31.65                          kubelet, kube-proxy, docker, etcd
k8s-node2                 192.168.31.66                          kubelet, kube-proxy, docker, etcd
Load Balancer (Master)    192.168.31.61, 192.168.31.60 (VIP)     Nginx L4
Load Balancer (Backup)    192.168.31.62                          Nginx L4

1. System initialization

Disable the firewall:

# systemctl stop firewalld

# systemctl disable firewalld

Disable SELinux:

# setenforce 0  # temporary

# sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent

Disable swap:

# swapoff -a  # temporary

# vim /etc/fstab  # permanent (comment out the swap line)
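To make the permanent change without opening an editor, the swap entry can be commented out with sed (a sketch; it comments every fstab line that mentions swap):

# sed -ri 's/.*swap.*/#&/' /etc/fstab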

Synchronize system time:

# ntpdate time.windows.com

Add hosts entries:

# vim /etc/hosts

192.168.31.63 k8s-master1

192.168.31.64 k8s-master2

192.168.31.65 k8s-node1

192.168.31.66 k8s-node2

Set the hostname (run on each machine with its own name):

hostnamectl set-hostname k8s-master1

2. Etcd cluster

The following operations can be performed on any one node.

2.1 Generate etcd certificates

# cd TLS/etcd

Install the cfssl tools:

# ./cfssl.sh
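The script itself is not reproduced here; a minimal sketch of what such a script typically does, assuming the usual cfssl release download URLs:

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo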

Edit the hosts field in the request file to include all etcd node IPs:

# vi server-csr.json

{

    "CN": "etcd",

    "hosts": [

        "192.168.31.63",

        "192.168.31.64",

        "192.168.31.65"

        ],

    "key": {

        "algo": "rsa",

        "size": 

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing"

        }

    ]

}

# ./generate_etcd_cert.sh

# ls *pem

ca-key.pem  ca.pem  server-key.pem  server.pem
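For reference, generate_etcd_cert.sh boils down to the two standard cfssl steps; a sketch, where the CA request file and profile name (ca-csr.json, ca-config.json, www) are assumptions based on common setups:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server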

2.2 Deploy the three etcd nodes

# tar zxvf etcd.tar.gz

# cd etcd

# cp TLS/etcd/ssl/{ca,server,server-key}.pem ssl

Copy to each of the three etcd nodes in turn:

# scp -r etcd root@192.168.31.63:/opt

# scp etcd.service root@192.168.31.63:/usr/lib/systemd/system
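The same two commands are repeated for 192.168.31.64 and 192.168.31.65; a small loop saves the typing (assumes root SSH access to all three nodes):

for ip in 192.168.31.63 192.168.31.64 192.168.31.65; do
    scp -r etcd root@$ip:/opt
    scp etcd.service root@$ip:/usr/lib/systemd/system
done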

Log in to each of the three nodes and adjust the node name and IP addresses in the configuration file:

# vi /opt/etcd/cfg/etcd.conf

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.63:2380,etcd-2=https://192.168.31.64:2380,etcd-3=https://192.168.31.65:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"
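
Reload systemd before starting, since the unit file was just copied in. Note that the first etcd member will block until a quorum of peers has joined, so start etcd on all three nodes:

# systemctl daemon-reload
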
# systemctl start etcd

# systemctl enable etcd

2.3 Check the cluster status

# /opt/etcd/bin/etcdctl \

> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \

> --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \

> cluster-health

member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379

member b10f0bac3883a232 is healthy: got healthy result from https://192.168.31.64:2379

member b46624837acedac9 is healthy: got healthy result from https://192.168.31.65:2379

cluster is healthy

3. Deploy the Master Node

3.1 Generate the apiserver certificate

# cd TLS/k8s

Edit the hosts field in the request file to include every IP that will reach the apiserver (all Master, Load Balancer, and Node IPs, plus the VIP):

# vi server-csr.json

{

    "CN": "kubernetes",

    "hosts": [

      "10.0.0.1",

      "127.0.0.1",

      "kubernetes",

      "kubernetes.default",

      "kubernetes.default.svc",

      "kubernetes.default.svc.cluster",

      "kubernetes.default.svc.cluster.local",

      "192.168.31.60",

      "192.168.31.61",

      "192.168.31.62",

      "192.168.31.63",

      "192.168.31.64",

      "192.168.31.65",

      "192.168.31.66"

    ],

    "key": {

        "algo": "rsa",

        "size": 

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

# ./generate_k8s_cert.sh

# ls *pem

ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
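Before moving on, it is worth confirming that every IP made it into the certificate's Subject Alternative Names:

# openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'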

3.2 Deploy kube-apiserver, kube-controller-manager, and kube-scheduler

Perform the following operations on the Master node.

Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161

Binary file location: kubernetes/server/bin

# tar zxvf k8s-master.tar.gz

# cd kubernetes

# cp TLS/k8s/ssl/*.pem ssl

# cp -rf kubernetes /opt

# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

# cat /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \

--bind-address=192.168.31.63 \

--secure-port=6443 \

--advertise-address=192.168.31.63 \

……

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler
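At this point the control plane can be sanity-checked (assuming kubectl is installed on the master and pointed at this apiserver); etcd members, the scheduler, and the controller-manager should all report Healthy:

# kubectl get cs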

3.3 Enable TLS Bootstrapping

Authorize kubelet TLS Bootstrapping:

# cat /opt/kubernetes/cfg/token.csv

c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

Format: token,user,uid,user group

Grant permissions to kubelet-bootstrap:

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

The token can also be regenerated and replaced:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

But the token configured on the apiserver must match the one in bootstrap.kubeconfig on the Nodes.
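A sketch of rotating the token in place (the sed pattern assumes the 32-character hex token format shown above; bootstrap.kubeconfig on every Node must then be updated to the same value):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
sed -i "s/^[0-9a-f]\{32\}/$TOKEN/" /opt/kubernetes/cfg/token.csv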

4. Deploy Worker Nodes

4.1 Install Docker

Binary package download: https://download.docker.com/linux/static/stable/x86_64/


# tar zxvf k8s-node.tar.gz

# tar zxvf docker-18.09..tgz

# mv docker/* /usr/bin

# mkdir /etc/docker

# mv daemon.json /etc/docker
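The daemon.json shipped with the bundle is not reproduced here; a minimal example of what such a file often contains (the mirror URL is purely illustrative):

# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"]
}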

# mv docker.service /usr/lib/systemd/system

# systemctl start docker

# systemctl enable docker

4.2 Deploy kubelet and kube-proxy

Copy the certificates to the Node:

# cd TLS/k8s

# scp ca.pem kube-proxy*.pem root@192.168.31.65:/opt/kubernetes/ssl/

# tar zxvf k8s-node.tar.gz

# mv kubernetes /opt

# cp kubelet.service kube-proxy.service /usr/lib/systemd/system

Modify the IP address in the following three files:

# grep 6443 *

bootstrap.kubeconfig:    server: https://192.168.31.63:6443

kubelet.kubeconfig:    server: https://192.168.31.63:6443

kube-proxy.kubeconfig:    server: https://192.168.31.63:6443

Modify the hostname in the following two files:

# grep hostname *

kubelet.conf:--hostname-override=k8s-node1 \

kube-proxy-config.yml:hostnameOverride: k8s-node1
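When preparing k8s-node2 from the same bundle, the hostname can be swapped in both files in one step:

# sed -i 's/k8s-node1/k8s-node2/' kubelet.conf kube-proxy-config.yml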

# systemctl start kubelet

# systemctl start kube-proxy

# systemctl enable kubelet

# systemctl enable kube-proxy

4.3 Approve certificate issuance for Nodes

# kubectl get csr

# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
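If several Nodes are bootstrapping at once, all pending CSRs can be approved in one pass:

# kubectl get csr -o name | xargs kubectl certificate approve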

# kubectl get node

4.4 Deploy the CNI network

Binary package download: https://github.com/containernetworking/plugins/releases

# mkdir -p /opt/cni/bin /etc/cni/net.d

# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

Make sure the kubelet has CNI enabled:

# cat /opt/kubernetes/cfg/kubelet.conf

--network-plugin=cni

Pod network add-on reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Run on the Master:

kubectl apply -f kube-flannel.yaml

# kubectl get pods -n kube-system

NAME                          READY   STATUS    RESTARTS   AGE

kube-flannel-ds-amd64-5xmhh   1/1     Running   0          171m

kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m

4.5 Authorize apiserver access to the kubelet

For security, the kubelet rejects anonymous access; the apiserver must be explicitly authorized before it can call the kubelet API.

# cat /opt/kubernetes/cfg/kubelet-config.yml

……

authentication:

  anonymous:

    enabled: false

  webhook:

    cacheTTL: 2m0s

    enabled: true

  x509:

    clientCAFile: /opt/kubernetes/ssl/ca.pem

……

# kubectl apply -f apiserver-to-kubelet-rbac.yaml
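The manifest itself is not shown above; a minimal sketch of such an RBAC policy, assuming the apiserver's kubelet client certificate uses CN=kubernetes (the resource names below follow the common upstream example):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io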

5. Deploy the Web UI and DNS

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

# vi recommended.yaml

…

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kubernetes-dashboard

spec:

  type: NodePort

  ports:

    - port: 443

      targetPort: 8443

      nodePort: 30001

  selector:

    k8s-app: kubernetes-dashboard

…

# kubectl apply -f recommended.yaml

Create a service account and bind it to the built-in cluster-admin cluster role:

# cat dashboard-adminuser.yaml

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kubernetes-dashboard

Get the token:

# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Access URL: https://NodeIP:30001 (the Dashboard serves HTTPS)

Log in to the Dashboard with the token from the output.

Deploy CoreDNS:

# kubectl apply -f coredns.yaml

# kubectl get pods -n kube-system
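A quick way to confirm DNS works from inside a Pod (busybox:1.28 is used here because newer busybox tags ship a broken nslookup):

# kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes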

6. Master high availability

6.1 Deploy Master2 components (identical to Master1)

Copy /opt/kubernetes and the service files from master1 to master2:

# scp -r /opt/kubernetes root@192.168.31.64:/opt

# scp -r /opt/etcd/ssl root@192.168.31.64:/opt/etcd

# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.31.64:/usr/lib/systemd/system

Change the apiserver configuration file to the local IP:

# cat /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \

--bind-address=192.168.31.64 \

--secure-port=6443 \

--advertise-address=192.168.31.64 \

……

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler

6.2 Deploy the Nginx load balancer

Nginx RPM package: http://nginx.org/packages/rhel/7/x86_64/RPMS/

# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

# vim /etc/nginx/nginx.conf

……

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {

                server 192.168.31.63:6443;

                server 192.168.31.64:6443;

            }

    server {

       listen 6443;

       proxy_pass k8s-apiserver;

    }

}

……

# systemctl start nginx

# systemctl enable nginx
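Each load balancer can be smoke-tested directly, before the VIP exists; TLS is passed straight through to the apiserver, hence -k:

# curl -k https://192.168.31.61:6443/version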

6.3 High availability with Nginx + Keepalived

Master LB node (192.168.31.61):

# yum install keepalived

# vi /etc/keepalived/keepalived.conf

global_defs {

   notification_email {

     acassen@firewall.loc

     failover@firewall.loc

     sysadmin@firewall.loc

   }

   notification_email_from Alexandre.Cassen@firewall.loc  

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_MASTER

}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    virtual_router_id 51    # VRRP route ID; unique per VRRP instance, identical on master and backup

    priority 100    # priority; set lower on the backup server (e.g. 90)

    advert_int 1    # VRRP heartbeat advertisement interval; default is 1 second

    authentication {

        auth_type PASS      

        auth_pass 1111

    }  

    virtual_ipaddress {

        192.168.31.60/24

    }

    track_script {

        check_nginx

    }

}

# cat /etc/keepalived/check_nginx.sh

#!/bin/bash

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq  ];then

    exit 

else

    exit 

fi

# systemctl start keepalived

# systemctl enable keepalived

Backup LB node (192.168.31.62):

# cat /etc/keepalived/keepalived.conf

global_defs {

   notification_email {

     acassen@firewall.loc

     failover@firewall.loc

     sysadmin@firewall.loc

   }

   notification_email_from Alexandre.Cassen@firewall.loc  

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_BACKUP

}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

    state BACKUP

    interface ens33

    virtual_router_id 51    # VRRP route ID; must match the master

    priority 90    # priority; lower than the master

    advert_int 1    # VRRP heartbeat advertisement interval; default is 1 second

    authentication {

        auth_type PASS      

        auth_pass 1111

    }  

    virtual_ipaddress {

        192.168.31.60/24

    }

    track_script {

        check_nginx

    }

}

# cat /etc/keepalived/check_nginx.sh

#!/bin/bash

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq  ];then

    exit 

else

    exit 

fi

# systemctl start keepalived

# systemctl enable keepalived

Test:

# ip a

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 00:0c:29:9d:ee:30 brd ff:ff:ff:ff:ff:ff

    inet 192.168.31.63/24 brd 192.168.31.255 scope global noprefixroute ens33

       valid_lft forever preferred_lft forever

    inet 192.168.31.60/24 scope global secondary ens33

       valid_lft forever preferred_lft forever

    inet6 fe80::20c:29ff:fe9d:ee30/64 scope link

       valid_lft forever preferred_lft forever

Stop nginx and verify that the VIP fails over to the backup node.
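For example, on the master LB:

# systemctl stop nginx

check_nginx.sh then exits non-zero, keepalived drops into FAULT state, and the backup's ip a output should now show 192.168.31.60; starting nginx again moves the VIP back (preemption is on by default).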

6.4 Point the Nodes at the VIP

Verify that the VIP works:

# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.31.60:6443/version

{

  "major": "",

  "minor": "",

  "gitVersion": "v1.16.0",

  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",

  "gitTreeState": "clean",

  "buildDate": "2019-09-18T14:27:17Z",

  "goVersion": "go1.12.9",

  "compiler": "gc",

  "platform": "linux/amd64"

}

Point the Nodes at the VIP:

# cd /opt/kubernetes/cfg

# grep 6443 *

bootstrap.kubeconfig:    server: https://192.168.31.63:6443

kubelet.kubeconfig:    server: https://192.168.31.63:6443

kube-proxy.kubeconfig:    server: https://192.168.31.63:6443

Batch replace:

sed -i 's#192.168.31.63#192.168.31.60#g' *
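kubelet and kube-proxy read their kubeconfig files only at startup, so restart them to pick up the new server address:

# systemctl restart kubelet kube-proxy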
