Environment:

192.168.30.20 VIP (virtual IP)

192.168.30.21 master1

192.168.30.22 master2

192.168.30.23 node1

192.168.30.24 node2

192.168.30.25 k8s-LB1 (master)

192.168.30.26 k8s-LB2 (backup)

Disable swap: swapoff -a takes effect for the current boot only; to make it permanent, comment out the swap entry in /etc/fstab (vim /etc/fstab).

Disable the firewall and SELinux.

Stop the firewall: systemctl stop firewalld

systemctl disable firewalld

iptables -F

Disable SELinux permanently: sed -i 's/enforcing/disabled/' /etc/selinux/config

Disable SELinux for the current boot: setenforce 0
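
If you prefer to run these preparation steps in one pass on every machine, here is a minimal sketch collecting the commands above (the sed line that comments out the swap entry in /etc/fstab is an assumption about a standard fstab layout):

#!/bin/bash
# Host preparation: run on master1/2 and node1/2 (and the LB hosts as needed).
swapoff -a                                          # disable swap for the current boot
sed -i 's/.*swap.*/#&/' /etc/fstab                  # comment out the swap entry so it stays off after reboot
systemctl stop firewalld && systemctl disable firewalld
iptables -F
setenforce 0                                        # SELinux permissive for the current boot
sed -i 's/enforcing/disabled/' /etc/selinux/config  # disabled after reboot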

 

The idea behind this lab: to make a k8s cluster highly available, only kube-apiserver needs keepalived-based HA.

kube-controller-manager and kube-scheduler only need --leader-elect=true added to their configuration files,

which gives them automatic leader election.

For example, without it a single request to create a pod object would be executed once by each of the three controller instances, each producing its own copy of the pod. Therefore, at any moment only one kube-controller-manager instance may be actively working; the rest stay in standby (waiting) state.

Note that every instance must enable --leader-elect=true.

This leader election is an application of a distributed lock: the lock state is maintained through a Kubernetes resource object. At startup, the controller-manager instances compete to acquire the designated Endpoints object, and the winner becomes the leader.
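
For reference, here is a minimal sketch of the option lines involved. The file paths and variable names are assumptions based on the controller-manager.sh / scheduler.sh scripts used later; the flag values match the process listings shown in the master-HA section:

# /opt/kubernetes/cfg/kube-controller-manager (sketch)
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1"

# /opt/kubernetes/cfg/kube-scheduler (sketch)
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"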

1. Issue certificates

rz -E

Upload the JSON files for the etcd CA so certificates can be issued. These are the configurations for the certificates below; I collected everything into a script.

Below I run it in stages.

[root@k8s-master1 ~]# mkdir k8s

[root@k8s-master1 ~]# cd k8s

[root@k8s-master1 k8s]# mkdir etcd-cert

[root@k8s-master1 k8s]# mkdir k8s-cert

[root@k8s-master1 k8s]# cd etcd-cert/

[root@k8s-master1 etcd-cert]# rz -E

rz waiting to receive.

[root@k8s-master1 etcd-cert]# rz -E

rz waiting to receive.

[root@k8s-master1 etcd-cert]# ls

cfssl.sh  etcd-cert.sh

Here I uploaded the cfssl tooling. There are two common toolchains for issuing certificates, openssl and cfssl; we use cfssl here.

cfssl is a certificate toolkit: cfssljson turns its JSON output into files and cfssl-certinfo shows certificate details; the script downloads the binaries and makes them executable.

You could run these commands directly; I put them into a script and run that.

[root@k8s-master1 etcd-cert]# cat cfssl.sh

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl

curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson

curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

[root@k8s-master1 etcd-cert]# sh cfssl.sh

Configure the CA signing policy that will issue our certificates. I set the expiry to 10 years (87600h); adjust it to your own needs.

[root@k8s-master1 etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

This is the CSR for the CA certificate itself (its own subject information).

[root@k8s-master1 etcd-cert]# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

After writing these, the directory contains the two JSON files:

[root@k8s-master1 etcd-cert]# ls

ca-config.json  ca-csr.json  cfssl.sh  etcd-cert.sh

Generate the self-signed root certificate: initialize the CA with cfssl and pipe the JSON output through cfssljson.

[root@k8s-master1 etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

After generation, the .pem certificate files appear:

[root@k8s-master1 etcd-cert]# ls

ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  cfssl.sh  etcd-cert.sh

Now issue the server certificate that encrypts etcd's HTTPS traffic. The CA signs it, every host IP that etcd runs on is listed, and the key uses the RSA algorithm.

[root@k8s-master1 etcd-cert]# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.30.21",
    "192.168.30.22",
    "192.168.30.23",
    "192.168.30.24"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

[root@k8s-master1 etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

This produces the three server certificate files (server.csr, server-key.pem, server.pem), which are now ready to use:

[root@k8s-master1 etcd-cert]# ls

ca-config.json  ca-csr.json  ca.pem    etcd-cert.sh  server-csr.json  server.pem

ca.csr          ca-key.pem   cfssl.sh  server.csr    server-key.pem

2. Deploy the etcd cluster

Create a soft directory for the packages.

[root@k8s-master1 ~]# mkdir soft

[root@k8s-master1 ~]# cd soft

[root@k8s-master1 soft]# rz -E

rz waiting to receive.

[root@k8s-master1 soft]# ls

etcd-v3.3.10-linux-amd64.tar.gz

Unpack the etcd package. It can be downloaded from GitHub; use the build ending in linux-amd64.

[root@k8s-master1 soft]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

[root@k8s-master1 soft]# ls

etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz

[root@k8s-master1 soft]# cd etcd-v3.3.10-linux-amd64/

etcd is the main server binary; etcdctl is the management tool.

[root@k8s-master1 etcd-v3.3.10-linux-amd64]# ls

Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

Create the etcd directory layout so it is easier to manage later.

[root@k8s-master1 soft]# mkdir /opt/etcd/{cfg,bin,ssl} -p

[root@k8s-master1 soft]# cd etcd-v3.3.10-linux-amd64/

[root@k8s-master1 etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin

[root@k8s-master1 etcd-v3.3.10-linux-amd64]# ls /opt/etcd/bin/

etcd  etcdctl

I uploaded a script that writes the etcd configuration below; let's run it step by step.

[root@k8s-master1 k8s]# rz -E

rz waiting to receive.

[root@k8s-master1 k8s]# vim etcd.sh

[root@k8s-master1 k8s]# chmod +x etcd.sh

[root@k8s-master1 k8s]# ls

etcd-cert  etcd.sh  k8s-cert

Run the script with our etcd parameters; it reports an error:

[root@k8s-master1 k8s]# ./etcd.sh etcd01 192.168.30.21 etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.

That is because /usr/lib/systemd/system/etcd.service references three certificate files — ca.pem, server.pem and server-key.pem, the ones we just generated — and the key-file paths in the unit must match where the files actually live. They need to be placed in /opt/etcd/ssl, the directory the unit points at:

[root@k8s-master1 ~]# cp /root/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl

[root@k8s-master1 ~]# cd /opt/etcd/ssl

[root@k8s-master1 ssl]# ls

ca.pem  server-key.pem  server.pem

server.pem serves the client port 2379; the other settings are for communication inside the cluster.

cat /usr/lib/systemd/system/etcd.service

--initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem

Now the certificates are in place and etcd can simply be started. Because the configuration lists four etcd members and the other three nodes are not up yet, the service sits in a starting state waiting for them; we just need to bring the remaining nodes in. Checking with ps -ef shows the etcd process is running fine:

[root@k8s-master1 ~]# systemctl restart etcd

^C

[root@k8s-master1 ~]# ps -ef |grep etcd

root       2332   1633  0 10:03 pts/0    00:00:00 systemctl restart etcd

root       2338      1  2 10:03 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.30.21:2380 --listen-client-urls=https://192.168.30.21:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.30.21:2379 --initial-advertise-peer-urls=https://192.168.30.21:2380 --initial-cluster=etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem

root       2402   2351  0 10:03 pts/1    00:00:00 grep --color=auto etcd

Copy /opt/etcd from the master (configuration, binaries and certificates — use scp -r for the whole directory) to the other three hosts, together with the systemd unit under /usr/lib/systemd/system that references the cluster certificates:

[root@k8s-master1 ~]# scp -r /opt/etcd root@192.168.30.22:/opt

[root@k8s-master1 ~]# scp -r /opt/etcd root@192.168.30.23:/opt

[root@k8s-master1 ~]# scp -r /opt/etcd root@192.168.30.24:/opt

[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.30.22:/usr/lib/systemd/system

[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.30.23:/usr/lib/systemd/system

[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.30.24:/usr/lib/systemd/system

Change the member name and the IPs on every node.

Port 2379 is the client (data) port; 2380 is the port used between cluster members.

#[Member]

ETCD_NAME="etcd01"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.30.22:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.30.22:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.22:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.22:2379"

ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-master2 ~]# systemctl start etcd

#[Member]

ETCD_NAME="etcd02"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.30.23:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.30.23:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.23:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.23:2379"

ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node1 ~]# systemctl start etcd

#[Member]

ETCD_NAME="etcd03"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.30.24:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.30.24:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.24:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.24:2379"

ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]# systemctl start etcd


Check the cluster health. Because the certificates are self-signed, the .pem files have to be passed explicitly:

[root@k8s-master1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379" cluster-health

member 83ee018d2841375 is healthy: got healthy result from https://192.168.30.22:2379

member 31491770f0472891 is healthy: got healthy result from https://192.168.30.23:2379

member 7d0b0924d5dc6c42 is healthy: got healthy result from https://192.168.30.24:2379

member c04fbd1891457563 is healthy: got healthy result from https://192.168.30.21:2379

cluster is healthy
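
If needed, the member list can be checked the same way, reusing the same certificate flags (a quick verification sketch):

/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.21:2379" member list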

3. Install Docker on all node machines

This is the CentOS 7 installation method; docker-ce is the community edition.

Install the prerequisite packages:

$ sudo yum install -y yum-utils \

device-mapper-persistent-data \

lvm2

Add the Docker package repository:

$ sudo yum-config-manager \

--add-repo \

https://download.docker.com/linux/centos/docker-ce.repo

Install docker-ce:

$ sudo yum install docker-ce

Start Docker:

$ sudo systemctl start docker

The default registry is overseas and pulls are slow; configuring a domestic registry mirror is recommended:

#vim /etc/docker/daemon.json

{

"registry-mirrors": [ "https://registry.docker-cn.com" ]

}

$ systemctl enable docker

The DaoCloud accelerator is also recommended; the script below adds --registry-mirror to your Docker configuration file /etc/docker/daemon.json:

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

Start Docker:

$ sudo systemctl start docker

4. Deploy the flannel network

 

Kubernetes pod networking can be implemented with two kinds of schemes: tunnel (overlay) schemes and routing schemes.

Flannel is the common choice for clusters up to roughly 100 nodes. It supports many encapsulation and transport types, works across different layer-2 networks and even across the internet, and runs on top of the existing network. It is an overlay design: each packet is wrapped inside another packet, transmitted, and unwrapped on the other side, and that encapsulation/decapsulation adds performance overhead.

Calico suits clusters of hundreds of nodes. It uses the BGP protocol, so the underlying network must support BGP and it does not work across arbitrary network environments; routes are learned into the routing table and pods communicate through it. Large deployments generally use Calico.

Routing schemes forward packets via the routing table, with no encapsulation or decapsulation, so they perform better; they operate at layer 3, the network layer.

Overlay Network: a virtual network layered on top of the underlying base network, in which hosts are connected by virtual links.

Flannel: one kind of overlay network. It encapsulates the source packet inside another network packet for routing, forwarding and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routes and other forwarding backends.

Store the network configuration in etcd under this key: define the large subnet for the cluster, from which a smaller subnet is carved out and assigned to each node; the data forwarding backend is vxlan.

[root@k8s-master1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Use get to check the stored subnet configuration:

[root@k8s-master1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379" get /coreos.com/network/config

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Download the binary package:

https://github.com/coreos/flannel/releases

[root@k8s-node1 ~]# rz -E

rz waiting to receive.

[root@k8s-node1 ~]# rz -E

rz waiting to receive.

[root@k8s-node1 ~]# ls

anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  flannel.sh

[root@k8s-node1 ~]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p

[root@k8s-node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

[root@k8s-node1 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

[root@k8s-node1 ~]# chmod +x flannel.sh

[root@k8s-node1 ~]# ./flannel.sh https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379

Deploy flannel to the node2 machine as well:

[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.24:/opt

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.30.24:/usr/lib/systemd/system/

[root@k8s-node1 ~]# systemctl start flanneld

[root@k8s-node1 ~]# systemctl restart docker

[root@k8s-node1 ~]# ip a

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

link/ether 02:42:97:f5:6c:cd brd ff:ff:ff:ff:ff:ff

inet 172.17.25.1/24 brd 172.17.25.255 scope global docker0

valid_lft forever preferred_lft forever

6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default

link/ether b2:1a:97:5c:61:1f brd ff:ff:ff:ff:ff:ff

inet 172.17.25.0/32 scope global flannel.1

valid_lft forever preferred_lft forever

[root@k8s-node2 ~]# systemctl start flanneld

[root@k8s-node2 ~]# systemctl restart docker

[root@k8s-node2 ~]# ip a

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default

link/ether 02:42:3f:3c:a8:62 brd ff:ff:ff:ff:ff:ff

inet 172.17.77.1/24 brd 172.17.77.255 scope global docker0

valid_lft forever preferred_lft forever

6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default

link/ether 96:1c:bc:ec:05:d6 brd ff:ff:ff:ff:ff:ff

inet 172.17.77.0/32 scope global flannel.1

On node2, start a container and check the network it is given; the addresses all come from the subnet flannel handed out.

[root@k8s-node2 ~]# docker run -it busybox

/ # ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000

inet 172.17.77.2/24 brd 172.17.77.255 scope global eth0

valid_lft forever preferred_lft forever

From node1, test whether the pod container on node2 is reachable — it is:

[root@k8s-node1 ~]# ping 172.17.77.2

PING 172.17.77.2 (172.17.77.2) 56(84) bytes of data.

64 bytes from 172.17.77.2: icmp_seq=1 ttl=63 time=0.477 ms

64 bytes from 172.17.77.2: icmp_seq=2 ttl=63 time=0.445 ms

From node2, test whether a pod container on node1 is reachable — it is, too:

[root@k8s-node1 ~]# docker run -it busybox

/ # ip a

inet 172.17.97.2/24 brd 172.17.97.255 scope global eth0

[root@k8s-node2 ~]# ping 172.17.97.2

PING 172.17.97.2 (172.17.97.2) 56(84) bytes of data.

64 bytes from 172.17.97.2: icmp_seq=1 ttl=63 time=0.516 ms

5. Deploy the master

These are configuration scripts I wrote myself:

[root@k8s-master1 ~]# ls

master.zip    k8s     soft

[root@k8s-master1 ~]# cd k8s

[root@k8s-master1 k8s]# unzip master.zip

Archive:  master.zip

inflating: apiserver.sh

inflating: controller-manager.sh

inflating: scheduler.sh

[root@k8s-master1 k8s]# ls

apiserver.sh  controller-manager.sh  etcd-cert  etcd.sh  flannel.sh  k8s-cert  scheduler.sh

[root@k8s-master1 ~]# cd soft/

[root@k8s-master1 soft]# rz -E

Bring in the binary package, unpack it, and copy the binaries into our working directory:

[root@k8s-master1 soft]# tar zxvf kubernetes-server-linux-amd64.tar.gz

[root@k8s-master1 soft]# cd kubernetes/server/bin/

[root@k8s-master1 bin]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}

[root@k8s-master1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin

Run apiserver.sh with the local apiserver address and the etcd endpoints:

[root@k8s-master1 k8s]# ./apiserver.sh 192.168.30.21 https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379

Set the log directory used by the apiserver options:

[root@k8s-master1 k8s]# mkdir /opt/kubernetes/logs

Change the log location, i.e. modify /opt/kubernetes/cfg/kube-apiserver (here edited in apiserver.sh, which generates that file):

[root@k8s-master1 k8s]# vim apiserver.sh

KUBE_APISERVER_OPTS="--logtostderr=false \\

--log-dir=/opt/kubernetes/logs  \\

[root@k8s-master1 ~]# cd k8s/k8s-cert/

[root@k8s-master1 k8s-cert]# ls

[root@k8s-master1 k8s-cert]# rz -E

rz waiting to receive.

[root@k8s-master1 k8s-cert]# ls

k8s-cert.sh

Edit the IPs in the script, adding every address the apiserver will be reached at (the masters, the nodes, the LB hosts and the VIP 192.168.30.20), then run it to generate the certificates:

[root@k8s-master1 k8s-cert]# vim k8s-cert.sh

cat > server-csr.json <<EOF

{

"CN": "kubernetes",

"hosts": [

"10.0.0.1",

"127.0.0.1",

"192.168.30.21",

"192.168.30.22",

"192.168.30.23",

"192.168.30.24",

"192.168.30.25",

"192.168.30.26",

"kubernetes",

"kubernetes.default",

"kubernetes.default.svc",

"kubernetes.default.svc.cluster",

"kubernetes.default.svc.cluster.local"

[root@k8s-master1 k8s-cert]# bash k8s-cert.sh

[root@k8s-master1 k8s-cert]# ls

admin.csr       ca-config.json  ca.pem               kube-proxy-key.pem  server-key.pem

admin-csr.json  ca.csr          k8s-cert.sh          kube-proxy.pem      server.pem

admin-key.pem   ca-csr.json     kube-proxy.csr       server.csr

admin.pem       ca-key.pem      kube-proxy-csr.json  server-csr.json

[root@k8s-master1 k8s-cert]# cp ca.pem server.pem server-key.pem ca-key.pem /opt/kubernetes/ssl/

6. Generate the apiserver token file and start kube-apiserver

Pull kubeconfig.sh in.

Copy its first segment here to generate the token file.
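
The heredoc below uses ${BOOTSTRAP_TOKEN}, which must already be set in the shell. A typical way the first segment of kubeconfig.sh generates it (an assumption — use whatever your script actually contains) is:

# Random 32-character hex token, later written into token.csv and embedded in bootstrap.kubeconfig
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN"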

[root@k8s-master1 k8s-cert]# cat > token.csv <<EOF

> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"

> EOF

[root@k8s-master1 k8s-cert]# cat token.csv

0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

[root@k8s-master1 k8s-cert]# mv token.csv /opt/kubernetes/cfg

[root@k8s-master1 k8s-cert]# systemctl start kube-apiserver

[root@k8s-master1 ~]# ps -ef |grep kube

root      59260      1 99 15:26 ?        00:00:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 --bind-address=192.168.30.21 --secure-port=6443 --advertise-address=192.168.30.21 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem

root      59275  13922  0 15:26 pts/2    00:00:00 grep --color=auto kube

Check where the apiserver logs are stored:

[root@k8s-master1 cfg]# ls /opt/kubernetes/logs

kube-apiserver.ERROR

kube-apiserver.INFO

kube-apiserver.k8s-master1.unknownuser.log.ERROR.20190713-195308.66108

kube-apiserver.k8s-master1.unknownuser.log.ERROR.20190713-195313.66130

The apiserver listens on 8080 (the local insecure port) by default:

[root@k8s-master1 k8s]# netstat -antp | grep :8080

tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      65327/kube-apiserve

[root@k8s-master1 k8s]# chmod +x controller-manager.sh

Pass 127.0.0.1 so controller-manager and scheduler connect to the local apiserver:

[root@k8s-master1 k8s]# ./controller-manager.sh 127.0.0.1

[root@k8s-master1 k8s]# ./scheduler.sh 127.0.0.1

Put kubectl into /usr/bin so it can be run directly:

[root@k8s-master1 ~]# cp /root/soft/kubernetes/server/bin/kubectl /usr/bin/

List resource types and their abbreviations:

[root@k8s-master1 ~]# kubectl api-resources

[root@k8s-master1 ~]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

scheduler            Healthy   ok

controller-manager   Healthy   ok

etcd-3               Healthy   {"health":"true"}

etcd-2               Healthy   {"health":"true"}

etcd-1               Healthy   {"health":"true"}

etcd-0               Healthy   {"health":"true"}

7. Deploy the node components

[root@k8s-master1 cfg]# cat token.csv

aa70bb385b5a864e477b8c641fbef3d0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Bind the kubelet-bootstrap user to the system cluster role:

[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

  1. Create the kubeconfig files

They are effectively credentials; only with them does a client have permission to access the apiserver.

Remove the token-generation segment already used above from the script, and start from the part below:

[root@k8s-master1 k8s-cert]# vim kubeconfig.sh

BOOTSTRAP_TOKEN=aa70bb385b5a864e477b8c641fbef3d0

APISERVER=$1

SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig

export KUBE_APISERVER="https://$APISERVER:6443"

# Set the cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=$SSL_DIR/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

--token=${BOOTSTRAP_TOKEN} \

--kubeconfig=bootstrap.kubeconfig

# Set the context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=kubelet-bootstrap \

--kubeconfig=bootstrap.kubeconfig

# Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \

--certificate-authority=$SSL_DIR/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \

--client-certificate=$SSL_DIR/kube-proxy.pem \

--client-key=$SSL_DIR/kube-proxy-key.pem \

--embed-certs=true \

--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \

--cluster=kubernetes \

--user=kube-proxy \

--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 k8s-cert]# bash kubeconfig.sh 192.168.30.21 /root/k8s/k8s-cert

Confirm the token has been embedded:

[root@k8s-master1 k8s-cert]# cat bootstrap.kubeconfig

name: kubernetes

contexts:

- context:

cluster: kubernetes

user: kubelet-bootstrap

name: default

current-context: default

kind: Config

preferences: {}

users:

- name: kubelet-bootstrap

user:

token: aa70bb385b5a864e477b8c641fbef3d0

  2. Deploy the kubelet and kube-proxy components

bootstrap.kubeconfig is used to deploy kubelet.

kube-proxy.kubeconfig is used to deploy kube-proxy.

[root@k8s-master1 k8s-cert]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.30.23:/opt/kubernetes/cfg

[root@k8s-master1 k8s-cert]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.30.24:/opt/kubernetes/cfg

[root@k8s-node1 ~]# rz -E

rz waiting to receive.

[root@k8s-node1 ~]# unzip node.zip

[root@k8s-node1 ~]# bash kubelet.sh 192.168.30.23

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Clear the kube-proxy ipvs rules.
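
If that means flushing any IPVS rules already present on the node, a minimal sketch (assuming the ipvsadm tool is installed):

ipvsadm --clear     # flush the IPVS virtual server table
ipvsadm -Ln         # verify the table is now empty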

Create the log directory and point the kubelet config at it:

[root@k8s-node1 cfg]# mkdir /opt/kubernetes/logs

[root@k8s-node1 kubernetes]# vim cfg/kubelet

KUBELET_OPTS="--logtostderr=false \

--log-dir=/opt/kubernetes/log \

--v=4 \

Copy the kubelet and kube-proxy binaries from the master into this directory on both nodes:

[root@k8s-master1 ~]# scp soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.23:/opt/kubernetes/bin/

[root@k8s-master1 ~]# scp /root/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.24:/opt/kubernetes/bin/

[root@k8s-node1 kubernetes]# systemctl restart kubelet

[root@k8s-node1 kubernetes]# ps -ef |grep kube

root      10953      1  0 16:31 ?        00:00:06 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.30.21:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem

root      34160      1  6 20:58 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.168.30.23 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

root      34183  16147  0 20:58 pts/1    00:00:00 grep --color=auto kube

Check the pending certificate signing request:

[root@k8s-master1 ~]# kubectl get csr

NAME                                                   AGE     REQUESTOR           CONDITION

node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   3m38s   kubelet-bootstrap   Pending

Approve the certificate: kubectl certificate approve followed by the CSR name:

[root@k8s-master1 ~]# kubectl certificate approve node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk

certificatesigningrequest.certificates.k8s.io/node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk approved
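
If several nodes register at the same time, all pending requests can be approved in one pass (a convenience sketch):

kubectl get csr --no-headers | awk '/Pending/ {print $1}' | xargs kubectl certificate approve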

[root@k8s-master1 ~]# kubectl get node

NAME            STATUS     ROLES    AGE   VERSION

192.168.30.23   NotReady   <none>   7s    v1.13.4

[root@k8s-master1 ~]# kubectl get node

NAME            STATUS   ROLES    AGE   VERSION

192.168.30.23   Ready    <none>   11s   v1.13.4

[root@k8s-master1 ~]# kubectl get csr

NAME                                                   AGE     REQUESTOR           CONDITION

node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   8m15s   kubelet-bootstrap   Approved,Issued

[root@k8s-node1 ~]# vim proxy.sh

[root@k8s-node1 ~]# bash proxy.sh 192.168.30.23

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node1 ~]# ps -ef |grep kube-proxy

root      35841      1  0 21:14 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.23 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

root      36005  16147  0 21:15 pts/1    00:00:00 grep --color=auto kube-proxy

7.2 Deploy the second node

[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.168.30.24:/opt

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.30.24:/usr/lib/systemd/system/

Now switch to node2:

[root@k8s-node2 ~]# cd /opt/kubernetes/cfg/

[root@k8s-node2 cfg]# ls

bootstrap.kubeconfig  kubelet         kubelet.kubeconfig  kube-proxy.kubeconfig

flanneld              kubelet.config  kube-proxy

[root@k8s-node2 cfg]# cd ../ssl

[root@k8s-node2 ssl]# ls

kubelet-client-2019-07-13-21-06-07.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key

Delete the certificates that were issued to node1 (they came along with the copied directory):

[root@k8s-node2 ssl]# rm -rf *

Change the IPs: find every occurrence in the config files and change it to the second node's address.

[root@k8s-node2 cfg]# grep 23 *

kubelet:--hostname-override=192.168.30.23 \

kubelet.config:address: 192.168.30.23

kube-proxy:--hostname-override=192.168.30.23 \
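
A quick way to rewrite them all in place (a sketch, assuming the old IP only appears where grep found it):

cd /opt/kubernetes/cfg
sed -i 's/192.168.30.23/192.168.30.24/g' kubelet kubelet.config kube-proxy   # point the three files at node2's IP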

Clear the kube-proxy ipvs rules here as well, the same as on node1.

After changing all of these to the .24 host's IP, start the services:

[root@k8s-node2 cfg]# systemctl restart kubelet

[root@k8s-node2 cfg]# systemctl restart kube-proxy.service

[root@k8s-node2 cfg]# ps -ef |grep kube

root      62846      1  0 16:49 ?        00:00:07 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.30.21:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem

root      86738      1  6 21:27 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.168.30.24 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

root      86780      1 35 21:28 ?        00:00:02 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.24 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

root      86923  66523  0 21:28 pts/1    00:00:00 grep --color=auto kube

On the master, a new node can be seen requesting to join:

[root@k8s-master1 ~]# kubectl get csr

NAME                                                   AGE   REQUESTOR           CONDITION

node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   90s   kubelet-bootstrap   Pending

node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   31m   kubelet-bootstrap   Approved,Issued

Approve the certificate:

[root@k8s-master1 ~]# kubectl certificate approve node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo

certificatesigningrequest.certificates.k8s.io/node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo approved

[root@k8s-master1 ~]# kubectl get csr

NAME                                                   AGE     REQUESTOR           CONDITION

node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   3m18s   kubelet-bootstrap   Approved,Issued

node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   33m     kubelet-bootstrap   Approved,Issued

Check the node status:

[root@k8s-master1 ~]# kubectl get node

NAME            STATUS   ROLES    AGE   VERSION

192.168.30.23   Ready    <none>   25m   v1.13.4

192.168.30.24   Ready    <none>   51s   v1.13.4

8. Create a test instance

[root@k8s-master1 ~]# kubectl run nginx --image=nginx

[root@k8s-master1 ~]# kubectl get pod

NAME                     READY   STATUS    RESTARTS   AGE

nginx-7cdbd8cdc9-wb228   1/1     Running   0          49s

Expose a port externally for client access:

[root@k8s-master1 ~]#  kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

[root@k8s-master1 ~]# kubectl get svc

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE

kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        20h

nginx        NodePort    10.0.0.27    <none>        88:44364/TCP   20h

It is reachable from both inside and outside the cluster.
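
For example, using the addresses and ports from the kubectl get svc output above (a quick check):

curl -I http://10.0.0.27:88          # ClusterIP and service port, from a node inside the cluster
curl -I http://192.168.30.23:44364   # any node IP plus the NodePort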

View the pod's logs:

[root@k8s-master1 ~]# kubectl logs nginx-7cdbd8cdc9-2qrcw

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-7cdbd8cdc9-2qrcw)

If this error appears, a default cluster role binding is missing. Make sure anonymous authentication is enabled in /opt/kubernetes/cfg/kubelet.config:

authentication:
  anonymous:
    enabled: true

Then grant the anonymous user a cluster role:

[root@k8s-master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

[root@k8s-master1 ~]# kubectl logs nginx-7cdbd8cdc9-2qrcw

172.17.55.0 - - [18/Jul/2019:08:08:22 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

172.17.55.0 - - [18/Jul/2019:08:08:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

172.17.55.0 - - [18/Jul/2019:08:08:27 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

172.17.46.1 - - [18/Jul/2019:08:14:37 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"

172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"

172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"

172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10

9. Deploy master high availability

[root@k8s-master1 ~]# scp -r /opt/kubernetes/ root@192.168.30.22:/opt

[root@k8s-master1 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.168.30.22:/usr/lib/systemd/system

Modify the IPs in the kube-apiserver config on master2:

[root@k8s-master2 cfg]# grep 21 *

kube-apiserver:--etcd-servers=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 \

kube-apiserver:--bind-address=192.168.30.21 \

kube-apiserver:--advertise-address=192.168.30.21 \

Start kube-apiserver and the other components:

[root@k8s-master2 ~]# systemctl start kube-apiserver

[root@k8s-master2 ~]# systemctl start kube-scheduler.service

[root@k8s-master2 ~]# systemctl start kube-controller-manager.service

[root@k8s-master2 ~]# ps -ef |grep kube

root       6840      1 12 14:10 ?        00:00:13 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 --bind-address=192.168.30.22 --secure-port=6443 --advertise-address=192.168.30.22 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem

root       6913      1  9 14:12 ?        00:00:01 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

root       6945      1 14 14:12 ?        00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s

root       6953   3519 10 14:12 pts/1    00:00:00 grep --color=auto kube

Copy kubectl over from master1 and check the cluster state:

[root@k8s-master1 ~]# scp /usr/bin/kubectl root@192.168.30.22:/usr/bin

Because both masters use the same etcd cluster, the cluster state is also visible from master2:

[root@k8s-master2 ~]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

scheduler            Healthy   ok

controller-manager   Healthy   ok

etcd-3               Healthy   {"health":"true"}

etcd-2               Healthy   {"health":"true"}

etcd-1               Healthy   {"health":"true"}

etcd-0               Healthy   {"health":"true"}

[root@k8s-master2 ~]# kubectl get node

NAME            STATUS   ROLES    AGE   VERSION

192.168.30.23   Ready    <none>   41m   v1.13.4

192.168.30.24   Ready    <none>   37m   v1.13.4

10. Install nginx for load balancing

Prerequisites, installed on both k8s-LB1 and k8s-LB2:

sudo yum install yum-utils

Set up the yum repository: create the file /etc/yum.repos.d/nginx.repo with the following content:

[nginx-stable]

name=nginx stable repo

baseurl=http://nginx.org/packages/centos/$releasever/$basearch/

gpgcheck=1

enabled=1

gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]

name=nginx mainline repo

baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/

gpgcheck=1

enabled=0

gpgkey=https://nginx.org/keys/nginx_signing.key

To install nginx, run:

sudo yum install nginx

[root@k8s-LB1 ~]# vim /etc/nginx/nginx.conf

worker_processes  4;

Change the worker process count to 4.

Set up the load balancing: in a stream block, put the IPs to be balanced (the master apiservers) into an upstream pool,

and proxy to it on the LB, so that requests made through the LB are distributed to the different backend masters.

stream {

upstream k8s-apiserver {

server 192.168.30.21:6443;

server 192.168.30.22:6443;

}

server {

listen 192.168.30.25:6443;

proxy_pass k8s-apiserver;

}

}

[root@k8s-LB1 ~]# nginx -t

nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok

nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

[root@k8s-LB1 ~]# systemctl restart nginx

[root@k8s-LB1 ~]# ps -ef |grep nginx

root       2394      1  0 14:56 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf

nginx      2395   2394  0 14:56 ?        00:00:00 nginx: worker process

nginx      2396   2394  0 14:56 ?        00:00:00 nginx: worker process

nginx      2397   2394  0 14:56 ?        00:00:00 nginx: worker process

nginx      2398   2394  0 14:56 ?        00:00:00 nginx: worker process

root       2414   1912  0 14:56 pts/0    00:00:00 grep --color=auto nginx

Confirm that we are listening on port 6443:

[root@k8s-LB1 ~]# netstat -anpt |grep 6443

tcp        0      0 192.168.30.25:6443      0.0.0.0:*               LISTEN      2394/nginx: master

On the node machines, change the server IP in the kubeconfig files to the load balancer's IP, 192.168.30.25:

[root@k8s-node1 cfg]# vim bootstrap.kubeconfig

server: https://192.168.30.25:6443

[root@k8s-node1 cfg]# vim kubelet.kubeconfig

server: https://192.168.30.25:6443

[root@k8s-node1 cfg]# vim kube-proxy.kubeconfig

server: https://192.168.30.25:6443

Restart kubelet:

[root@k8s-node1 cfg]# systemctl restart kubelet

[root@k8s-node1 cfg]# ps -ef |grep kubelet

root      39714      1  7 15:05 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.30.23 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

root      39903   4302  0 15:05 pts/2    00:00:00 grep --color=auto kubelet

[root@k8s-node2 cfg]# vim kubelet.kubeconfig

server: https://192.168.30.25:6443

[root@k8s-node2 cfg]# vim kube-proxy.kubeconfig

server: https://192.168.30.25:6443

[root@k8s-node2 cfg]# vim bootstrap.kubeconfig

server: https://192.168.30.25:6443

Restart kubelet:

[root@k8s-node2 cfg]# systemctl restart kubelet

[root@k8s-node2 cfg]# ps -ef |grep kubelet

root     101094      1  7 15:09 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.30.24 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

root     101283  13585  0 15:09 pts/1    00:00:00 grep --color=auto kubelet

Configure an access log for the nginx stream block so the requests coming from the nodes are recorded:

[root@k8s-LB1 ~]# vim /etc/nginx/nginx.conf

stream {

log_format main "$remote_addr $upstream_addr - $time_local $status";

access_log /var/log/nginx/k8s-access.log main;

upstream k8s-apiserver {

[root@k8s-LB1 ~]# nginx -t

nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok

nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

[root@k8s-LB1 ~]# systemctl reload nginx

[root@k8s-LB1 ~]# ls /var/log/nginx/

access.log  error.log  k8s-access.log

Test whether logging works:

restart kubelet on node2 and watch the log on k8s-LB1.

[root@k8s-node2 cfg]# systemctl restart kubelet

The log shows requests being distributed across both masters, which confirms the nginx load balancing is working:

[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log

192.168.30.24 192.168.30.22:6443 - 25/Jul/2019:15:20:14 +0800 200

192.168.30.24 192.168.30.21:6443 - 25/Jul/2019:15:20:14 +0800 200

11. Deploy active/standby LB + keepalived for VIP high availability

nginx was already installed on both LB hosts in the previous section, so just scp the configuration over:

[root@k8s-LB1 ~]#  scp /etc/nginx/nginx.conf root@192.168.30.26:/etc/nginx/nginx.conf

Change the listen address of the proxy to 192.168.30.26:

[root@k8s-LB2 yum.repos.d]# vim /etc/nginx/nginx.conf

stream {

log_format main "$remote_addr $upstream_addr - $time_local $status";

access_log /var/log/nginx/k8s-access.log main;

upstream k8s-apiserver {

server 192.168.30.21:6443;

server 192.168.30.22:6443;

}

server {

listen 192.168.30.26:6443;

proxy_pass k8s-apiserver;

}

}

[root@k8s-LB2 yum.repos.d]# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart it; logging is active as well:

[root@k8s-LB2 yum.repos.d]# systemctl restart nginx

[root@k8s-LB2 ~]# tail /var/log/nginx/k8s-access.log

Install keepalived on both LB nodes:

[root@k8s-LB1 ~]# yum install keepalived

[root@k8s-LB2 ~]# yum install keepalived

Modify the main configuration file (on LB1, the master):

[root@k8s-LB1 ~]# rz -E

rz waiting to receive.

[root@k8s-LB1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf

cp: overwrite '/etc/keepalived/keepalived.conf'? y

[root@k8s-LB1 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

notification_email {

acassen@firewall.loc

failover@firewall.loc

sysadmin@firewall.loc

}

notification_email_from Alexandre.Cassen@firewall.loc

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id NGINX_MASTER

}

vrrp_script check_nginx {

script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

state MASTER

interface ens33

virtual_router_id 51 # VRRP router ID; must be unique per instance

priority 100    # priority; set 90 on the backup server

advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.30.20/24

}

track_script {

check_nginx

}

}

/usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

/etc/init.d/keepalived stop

fi

Write a script that checks the nginx process: if nginx is not running, stop keepalived.

This is the script referenced by vrrp_script in the configuration above:

[root@k8s-LB1 ~]# vim /etc/keepalived/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

systemctl stop keepalived

fi

[root@k8s-LB1 keepalived]# chmod +x /etc/keepalived/check_nginx.sh

[root@k8s-LB1 keepalived]# systemctl restart keepalived

[root@k8s-LB1 keepalived]# ps -ef |grep keepalived

root       4085      1  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D

root       4086   4085  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D

root       4087   4085  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D

root       4111   1912  0 16:15 pts/0    00:00:00 grep --color=auto keepalived

The VIP set in the configuration is now bound to the interface:

[root@k8s-LB1 keepalived]# ip a

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

link/ether 00:0c:29:12:31:53 brd ff:ff:ff:ff:ff:ff

inet 192.168.30.25/24 brd 192.168.30.255 scope global noprefixroute ens33

valid_lft forever preferred_lft forever

inet 192.168.30.20/24 scope global secondary ens33

Copy LB1's configuration file over to LB2, changing state MASTER to BACKUP and the priority to 90:

[root@k8s-LB1 ~]# scp /etc/keepalived/keepalived.conf root@192.168.30.26:/etc/keepalived

! Configuration File for keepalived

global_defs {

notification_email {

acassen@firewall.loc

failover@firewall.loc

sysadmin@firewall.loc

}

notification_email_from Alexandre.Cassen@firewall.loc

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id NGINX_MASTER

}

vrrp_script check_nginx {

script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 {

state BACKUP

interface ens33

virtual_router_id 51 # VRRP router ID; must be unique per instance

priority 90    # priority; the backup server uses 90

advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.30.20/24

}

track_script {

check_nginx

}

}

/usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

/etc/init.d/keepalived stop

fi

Copy the check script over as well:

[root@k8s-LB1 ~]# scp /etc/keepalived/check_nginx.sh root@192.168.30.26:/etc/keepalived

[root@k8s-LB2 keepalived]# ls

check_nginx.sh  keepalived.conf

[root@k8s-LB2 keepalived]# chmod +x check_nginx.sh

[root@k8s-LB2 ~]# systemctl start keepalived

[root@k8s-LB2 ~]# ps -ef |grep keepalived

root      58283      1  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D

root      58285  58283  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D

root      58286  58283  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D

root      58360   2184  0 16:33 pts/0    00:00:00 grep --color=auto keepalived

Test whether keepalived failover works:

stop nginx on LB1 and the VIP address floats over to LB2:

[root@k8s-LB1 ~]# systemctl stop nginx

[root@k8s-LB2 ~]# ip a

valid_lft 1613sec preferred_lft 1613sec

inet 192.168.30.26/24 brd 192.168.30.255 scope global secondary noprefixroute ens33

valid_lft forever preferred_lft forever

inet 192.168.30.20/24 scope global secondary ens33

Because the check script stopped keepalived on LB1, the VIP will not float back on its own; restarting keepalived on LB1 brings it back.

Having the script stop keepalived is deliberate: with alerting set up, keepalived going down tells us the service itself has failed and needs fixing. Without the script the VIP would still fail over, but we would not know the service had a problem; stopping keepalived surfaces the failure. Once the problem on LB1 is resolved and keepalived is restarted there, the VIP returns, because LB1's priority (100) is higher and it preempts the backup.

[root@k8s-LB1 ~]# systemctl start nginx

[root@k8s-LB1 ~]# systemctl restart keepalived

[root@k8s-LB1 ~]# ip a

inet 192.168.30.25/24 brd 192.168.30.255 scope global noprefixroute ens33

valid_lft forever preferred_lft forever

inet 192.168.30.20/24 scope global secondary ens33

12. Switch the cluster to the VIP

All that remains is to change the server IP in the node kubeconfig files to the masters' virtual IP, i.e. the VIP address 192.168.30.20.
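
As before, the server lines can be rewritten in one pass (a sketch, assuming the files currently point at 192.168.30.25):

cd /opt/kubernetes/cfg
sed -i 's/192.168.30.25:6443/192.168.30.20:6443/g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig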

[root@k8s-node2 cfg]# vim bootstrap.kubeconfig

[root@k8s-node2 cfg]# vim kubelet.kubeconfig

[root@k8s-node2 cfg]# vim kube-proxy.kubeconfig

[root@k8s-node2 cfg]# systemctl restart kubelet

[root@k8s-node2 cfg]# systemctl restart kube-proxy

[root@k8s-node1 cfg]# vim bootstrap.kubeconfig

[root@k8s-node1 cfg]# vim kubelet.kubeconfig

[root@k8s-node1 cfg]# vim kube-proxy.kubeconfig

[root@k8s-node1 cfg]# systemctl restart kubelet

[root@k8s-node1 cfg]# systemctl restart kube-proxy

The log shows no requests arriving via the VIP yet; nginx's listen address needs to be changed:

[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log

192.168.30.24 192.168.30.22:6443 - 27/Jul/2019:11:13:59 +0800 200

192.168.30.23 192.168.30.21:6443 - 27/Jul/2019:11:13:59 +0800 200

192.168.30.23 192.168.30.21:6443 - 27/Jul/2019:11:17:19 +0800 200

192.168.30.24 192.168.30.22:6443 - 27/Jul/2019:11:18:11 +0800 200

[root@k8s-LB1 ~]# vim /etc/nginx/nginx.conf

server {

listen 0.0.0.0:6443;

proxy_pass k8s-apiserver;

[root@k8s-LB1 ~]# systemctl restart nginx

[root@k8s-LB2 ~]# vim /etc/nginx/nginx.conf

server {

listen 0.0.0.0:6443;

proxy_pass k8s-apiserver;

[root@k8s-LB2 ~]# systemctl restart nginx

Restart a node's kubelet and check again:

[root@k8s-node2 cfg]# systemctl restart kubelet

[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log

192.168.30.23 192.168.30.22:6443 - 25/Jul/2019:17:14:12 +0800 200

192.168.30.23 192.168.30.21:6443 - 25/Jul/2019:17:14:12 +0800 200

192.168.30.24 192.168.30.22:6443 - 25/Jul/2019:17:14:12 +0800 200

192.168.30.24 192.168.30.21:6443 - 25/Jul/2019:17:14:13 +0800 200
