Deploying a Kubernetes v1.25.5 Cluster from Binaries on CentOS 7.9
Overview
1. Cluster Environment Preparation
1.1 Host Planning
OS | Node IP | Role | CPU/Memory | Hostname | Kernel | Software
--- | --- | --- | --- | --- | --- | ---
CentOS 7.9.2009 | 192.168.1.92 | master | 2C/4G | k8s-master1 | 6.1.0 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, containerd, runc
CentOS 7.9.2009 | 192.168.1.93 | master | 2C/4G | k8s-master2 | 6.1.0 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, containerd, runc
CentOS 7.9.2009 | 192.168.1.94 | master | 2C/4G | k8s-master3 | 6.1.0 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, containerd, runc
CentOS 7.9.2009 | 192.168.1.96 | worker | 2C/4G | k8s-worker1 | 6.1.0 | kubelet, kube-proxy, containerd, runc
CentOS 7.9.2009 | 192.168.1.111 | LB | 1C/1G | ha1 | 6.1.0 | haproxy, keepalived
CentOS 7.9.2009 | 192.168.1.112 | LB | 1C/1G | ha2 | 6.1.0 | haproxy, keepalived
CentOS 7.9.2009 | 192.168.1.100 | VIP (virtual IP) | / | / | / | /
1.2 Software Versions
Software | Version | Notes
--- | --- | ---
CentOS 7.9 | kernel 6.1.0 | upgraded kernel
kubernetes | v1.25.5 |
etcd | v3.5.6 | latest
calico | v3.23 | stable
coredns | 1.8.4 |
containerd | 1.6.12 | latest
runc | 1.1.4 | latest
haproxy | 1.5.18 | default from yum repo
keepalived | v1.3.5 | default from yum repo
1.3 Network Allocation
Network | CIDR | Notes
--- | --- | ---
Node network | 192.168.1.0/24 |
Service network | 10.96.0.0/16 |
Pod network | 10.244.0.0/16 |
2. Cluster Deployment
2.1 Host Preparation
2.1.1 Set hostnames
Note: see the host planning table in section 1.1 for each machine's hostname.
hostnamectl set-hostname ha1
hostnamectl set-hostname ha2
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-master3
hostnamectl set-hostname k8s-worker1
2.1.2 Hostname and IP resolution
cat >> /etc/hosts << EOF
192.168.1.111 ha1
192.168.1.112 ha2
192.168.1.92 k8s-master1
192.168.1.93 k8s-master2
192.168.1.94 k8s-master3
192.168.1.96 k8s-worker1
EOF
2.1.3 Host security settings
2.1.3.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
firewall-cmd --state
2.1.3.2 Disable SELinux
setenforce 0
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sestatus
2.1.4 Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p
2.1.5 System time synchronization
2.1.5.1 Install the NTP client
yum -y install ntpdate
2.1.5.2 Schedule a sync job (runs once per hour)
crontab -e
0 */1 * * * ntpdate time1.aliyun.com
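Before relying on the cron job, you can trigger a one-off synchronization by hand to confirm the NTP server is reachable:
ntpdate time1.aliyun.com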
2.1.6 Host system tuning
- ulimit tuning
ulimit -SHn 65535
cat <<EOF >> /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
2.1.7 Install ipvs tools and load modules
- Install on the cluster (master/worker) nodes; the load balancer nodes do not need this
yum -y install ipvsadm ipset sysstat conntrack libseccomp
- Load temporarily on all cluster nodes. On kernels 4.19+ nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.19 use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
To load the modules permanently, create /etc/modules-load.d/ipvs.conf with the following content:
cat >/etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
2.1.8 Load containerd-related kernel modules
Note: run on master and worker nodes
2.1.8.1 Load the modules temporarily
modprobe overlay
modprobe br_netfilter
2.1.8.2 Load the modules permanently
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
2.1.8.3 Enable loading at boot
systemctl enable --now systemd-modules-load.service
2.1.9 Linux kernel upgrade
Upgrade on all nodes; the system must be rebooted afterwards to switch to the new kernel.
[root@localhost ~]# yum -y install perl
[root@localhost ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@localhost ~]# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
[root@localhost ~]# yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
[root@localhost ~]# grub2-set-default 0
[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
2.1.10 Linux kernel tuning
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 131072
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
Apply the settings:
sysctl --system
reboot
After rebooting, verify the modules are loaded:
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
lsmod | egrep 'br_netfilter'
lsmod | egrep 'overlay'
2.1.11 Install other tools (optional)
Note: jq is not essential for this deployment; if it cannot be installed, nothing breaks.
- Run on master and worker nodes
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git lrzsz -y
2.2 Load Balancer Preparation
2.2.1 Install haproxy and keepalived
yum -y install haproxy keepalived
2.2.2 HAProxy configuration
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:6443
 bind 127.0.0.1:6443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server k8s-master1 192.168.1.92:6443 check
 server k8s-master2 192.168.1.93:6443 check
 server k8s-master3 192.168.1.94:6443 check
EOF
2.2.3 Keepalived configuration
On ha1 (MASTER role; adjust interface to your actual NIC name):
cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens32
mcast_src_ip 192.168.1.111
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.1.100
}
track_script {
chk_apiserver
}
}
EOF
On ha2, use the following BACKUP configuration instead:
cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
mcast_src_ip 192.168.1.112
virtual_router_id 51
priority 99
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.1.100
}
track_script {
chk_apiserver
}
}
EOF
2.2.4 Health check script (on both ha1 and ha2)
cat > /etc/keepalived/check_apiserver.sh <<"EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done

if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
EOF
Make the script executable:
chmod +x /etc/keepalived/check_apiserver.sh
2.2.5 Start and verify the services
systemctl daemon-reload
systemctl enable --now haproxy keepalived
systemctl status haproxy keepalived
2.3 Configure Passwordless SSH
Note: run on k8s-master1
2.3.1 Generate a key pair:
ssh-keygen
Prompts:
(/root/.ssh/id_rsa): # where the private key is saved
Enter passphrase: # optional passphrase protecting the private key
Enter same passphrase again: # passphrase confirmation
Press Enter at all three prompts.
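If you prefer to script this step, ssh-keygen can run non-interactively; a minimal sketch assuming the default key path and an empty passphrase:
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa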
2.3.2 Copy the public key to the other servers
ssh-copy-id root@k8s-master1
ssh-copy-id root@k8s-master2
ssh-copy-id root@k8s-master3
ssh-copy-id root@k8s-worker1
Type yes at the prompt, then enter the target server's password.
2.3.3 Test passwordless login
ssh root@k8s-master2
2.4 Create the CA and etcd Certificates
2.4.1 Create a working directory
mkdir -p /data/k8s-work
2.4.2 Obtain the cfssl tools
cd /data/k8s-work
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
Grant execute permission:
chmod +x cfssl*
Move the tools into a system path:
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Verify:
cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
2.4.3 Create the CA certificate
2.4.3.1 Create the CA certificate signing request (CSR) file
cat > ca-csr.json <<"EOF"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
],
"ca": {
"expiry": "87600h"
}
}
EOF
2.4.3.2 Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2.4.3.3 Generate the CA signing policy
cfssl print-defaults config > ca-config.json
Note: the default generated policy uses "expiry": 8760h (one year); you can change it to 87600h (ten years).
Or use the following configuration instead:
cat > ca-config.json <<"EOF"
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
Notes:
- server auth: clients may use this CA to verify certificates presented by servers
- client auth: servers may use this CA to verify certificates presented by clients
2.4.4 Create the etcd certificate
2.4.4.1 Create the etcd CSR file
Notes:
- etcd can be deployed on the cluster's master or worker nodes; either works.
- Change the host IPs to your own.
cat > etcd-csr.json <<"EOF"
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.1.92",
"192.168.1.93",
"192.168.1.94"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}]
}
EOF
2.4.4.2 Generate the etcd certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2.4.4.3 Inspect the generated etcd certificate files
[root@k8s-master1 k8s-work]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd.csr etcd-csr.json etcd-key.pem etcd.pem
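Optionally, confirm that the certificate's SAN list really contains the three etcd node IPs; a quick check with openssl (cfssl-certinfo -cert etcd.pem shows the same data):
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'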
2.4.5 Deploy the etcd Cluster
2.4.5.1 Download the etcd package
On https://github.com/, search for etcd, choose the etcd-io/etcd repository, open Releases, and pick the package you need.
Download the package:
wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
2.4.5.2 Install etcd
tar xf etcd-v3.5.6-linux-amd64.tar.gz
cp -p etcd-v3.5.6-linux-amd64/etcd* /usr/local/bin/
Verify the installation:
[root@k8s-master1 k8s-work]# etcdctl version
etcdctl version: 3.5.6
API version: 3.5
2.4.5.3 Distribute the etcd binaries
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master2:/usr/local/bin/
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master3:/usr/local/bin/
2.4.5.4 Create the configuration file
Create the directory:
mkdir /etc/etcd
Create the configuration file:
cat > /etc/etcd/etcd.conf <<"EOF"
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.92:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.92:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.92:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.92:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.92:2380,etcd2=https://192.168.1.93:2380,etcd3=https://192.168.1.94:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Notes: ETCD_NAME must be unique on each node; the LISTEN/ADVERTISE URLs use the node's own IP; ETCD_INITIAL_CLUSTER lists all members; ETCD_INITIAL_CLUSTER_STATE is "new" when bootstrapping a fresh cluster.
2.4.5.5 Create the service file
Create the directories:
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
Copy the certificates into place:
cd /data/k8s-work
cp ca*.pem /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl
Create the service file:
cat > /etc/systemd/system/etcd.service <<"EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
2.4.5.6 Sync the etcd configuration to the other master nodes
Create the directories on k8s-master2 and k8s-master3:
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
Sync the configuration file; the etcd node name and IP addresses must then be adjusted on each node:
for i in k8s-master2 k8s-master3; do scp /etc/etcd/etcd.conf $i:/etc/etcd/; done
k8s-master2:
cat > /etc/etcd/etcd.conf <<"EOF"
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.93:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.93:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.93:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.93:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.92:2380,etcd2=https://192.168.1.93:2380,etcd3=https://192.168.1.94:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
k8s-master3:
cat > /etc/etcd/etcd.conf <<"EOF"
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.94:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.94:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.94:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.94:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.92:2380,etcd2=https://192.168.1.93:2380,etcd3=https://192.168.1.94:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Sync the certificate files:
for i in k8s-master2 k8s-master3 ;do scp /etc/etcd/ssl/* $i:/etc/etcd/ssl ;done
Sync the systemd service file:
for i in k8s-master2 k8s-master3 ;do scp /etc/systemd/system/etcd.service $i:/etc/systemd/system/ ;done
2.4.5.7 Start the etcd cluster
Note: if startup hangs or fails, finish configuring the remaining etcd nodes first, then run the start commands on all nodes at roughly the same time.
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd
2.4.5.8 Verify cluster status
Check etcd cluster health:
ETCDCTL_API=3 /usr/local/bin/etcdctl \
--write-out=table \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 endpoint health
Result:
+---------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+---------------------------+--------+-------------+-------+
| https://192.168.1.92:2379 | true | 11.109782ms | |
| https://192.168.1.93:2379 | true | 14.648997ms | |
| https://192.168.1.94:2379 | true | 16.783152ms | |
+---------------------------+--------+-------------+-------+
Check etcd performance:
ETCDCTL_API=3 /usr/local/bin/etcdctl \
--write-out=table \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 check perf
Result:
59 / 60 Boom ! 98.33%
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.055096s
PASS: Stddev is 0.001185s
PASS
List the etcd cluster members:
ETCDCTL_API=3 /usr/local/bin/etcdctl \
--write-out=table \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 member list
Result:
+------------------+---------+-------+---------------------------+---------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+---------------------------+---------------------------+------------+
| af05139f75a68867 | started | etcd1 | https://192.168.1.92:2380 | https://192.168.1.92:2379 | false |
| c3bab7c20fba3f60 | started | etcd2 | https://192.168.1.93:2380 | https://192.168.1.93:2379 | false |
| fbba12d5c6ebb577 | started | etcd3 | https://192.168.1.94:2380 | https://192.168.1.94:2379 | false |
+------------------+---------+-------+---------------------------+---------------------------+------------+
Check etcd endpoint status (shows which node is the cluster leader):
ETCDCTL_API=3 /usr/local/bin/etcdctl \
--write-out=table \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 endpoint status
Result:
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.92:2379 | af05139f75a68867 | 3.5.6 | 22 MB | false | false | 3 | 8982 | 8982 | |
| https://192.168.1.93:2379 | c3bab7c20fba3f60 | 3.5.6 | 22 MB | true | false | 3 | 8982 | 8982 | |
| https://192.168.1.94:2379 | fbba12d5c6ebb577 | 3.5.6 | 22 MB | false | false | 3 | 8982 | 8982 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
2.5 Kubernetes Cluster Deployment
2.5.1 Download the Kubernetes packages
Download the server package (v1.25.5 is used here):
wget https://dl.k8s.io/v1.25.5/kubernetes-server-linux-amd64.tar.gz
2.5.2 Install the Kubernetes binaries
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
2.5.3 Distribute the Kubernetes binaries
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
Copy kubelet and kube-proxy to the nodes that need them:
scp kubelet kube-proxy k8s-master1:/usr/local/bin
scp kubelet kube-proxy k8s-master2:/usr/local/bin
scp kubelet kube-proxy k8s-master3:/usr/local/bin
scp kubelet kube-proxy k8s-worker1:/usr/local/bin
2.5.4 Create directories on the cluster nodes
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes
2.5.5 Deploy kube-apiserver
2.5.5.1 Create the apiserver CSR file
Note: run on k8s-master1
cat > kube-apiserver-csr.json << "EOF"
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.1.92",
"192.168.1.93",
"192.168.1.94",
"192.168.1.95",
"192.168.1.96",
"192.168.1.97",
"192.168.1.98",
"192.168.1.99",
"192.168.1.100",
"10.96.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
]
}
EOF
Note: the hosts list must include every address that may be used to reach the apiserver: all master IPs (192.168.1.92-94), spare IPs reserved for future masters (192.168.1.95-99), the VIP (192.168.1.100), the first Service IP (10.96.0.1), and the built-in kubernetes service names.
2.5.5.2 Generate the apiserver certificate and the token file
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
Create the token required for TLS bootstrapping (used to automatically issue client certificates to kubelets):
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Note: the token.csv columns are token, user name, user UID, and group.
2.5.5.3 Create the apiserver service configuration file
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.1.92 \
--secure-port=6443 \
--advertise-address=192.168.1.92 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
EOF
Notes:
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to cluster members
--authorization-mode: enable Node and RBAC authorization
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-https: the apiserver uses HTTPS when contacting kubelets (default)
--kubelet-client-xxx: client certificate the apiserver uses when contacting kubelets
--tls-xxx-file: apiserver serving certificate and key
--etcd-xxxfile: certificates for connecting to the etcd cluster
--etcd-servers: etcd cluster endpoints
--audit-log-xxx: audit log settings
--logtostderr: log to stderr
--log-dir: log directory
--v: log verbosity; higher values produce more detailed logs
2.5.5.4 Create the apiserver systemd service file
Notes:
- After: this unit starts after the listed unit
- Wants: this unit wants the listed unit to be started
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Note: these two directives mean etcd must be started before kube-apiserver.
2.5.5.5 Sync files to the cluster master nodes
Change to the working directory:
cd /data/k8s-work
Copy the CA certificates into place on master1:
cp ca*.pem /etc/kubernetes/ssl/
Copy the kube-apiserver certificates into place on master1:
cp kube-apiserver*.pem /etc/kubernetes/ssl/
Copy token.csv into place on master1:
cp token.csv /etc/kubernetes/
Sync token.csv to the other masters:
scp /etc/kubernetes/token.csv k8s-master2:/etc/kubernetes
scp /etc/kubernetes/token.csv k8s-master3:/etc/kubernetes
Sync the kube-apiserver certificates to the other masters:
scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl
Sync the CA certificates to the other masters:
scp /etc/kubernetes/ssl/ca*.pem k8s-master2:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/ca*.pem k8s-master3:/etc/kubernetes/ssl
Sync the kube-apiserver config to master2 (the host IPs must be changed):
scp /etc/kubernetes/kube-apiserver.conf k8s-master2:/etc/kubernetes/kube-apiserver.conf
Or run the following on master2 directly:
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.1.93 \
--secure-port=6443 \
--advertise-address=192.168.1.93 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
EOF
Sync the kube-apiserver config to master3 (the host IPs must be changed):
scp /etc/kubernetes/kube-apiserver.conf k8s-master3:/etc/kubernetes/kube-apiserver.conf
Or run the following on master3 directly:
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.1.94 \
--secure-port=6443 \
--advertise-address=192.168.1.94 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.1.92:2379,https://192.168.1.93:2379,https://192.168.1.94:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
EOF
Sync the service file to the other masters:
scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
2.5.5.6 Start the apiserver service
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
2.5.5.7 Test
curl --insecure https://192.168.1.92:6443/
curl --insecure https://192.168.1.93:6443/
curl --insecure https://192.168.1.94:6443/
curl --insecure https://192.168.1.100:6443/
Note: without credentials, every request returns 401 Unauthorized.
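A representative 401 response body looks like this (exact fields may vary slightly by version):
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}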
2.5.6 Deploy kubectl
2.5.6.1 Create the kubectl (admin) CSR file
cat > admin-csr.json << "EOF"
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "system"
}
]
}
EOF
Note: "O": "system:masters" puts this user in the system:masters group, to which the default cluster-admin ClusterRoleBinding grants full cluster access.
2.5.6.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2.5.6.3 Copy the files into place
cd /data/k8s-work
cp admin*.pem /etc/kubernetes/ssl/
2.5.6.4 Generate the kubeconfig file
Note: kube.config is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client certificate.
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.100:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config
Steps:
Step 1: define the cluster: its CA certificate and the apiserver endpoint.
Step 2: set the credentials: the admin client certificate used to manage the cluster.
Step 3: create a context binding the cluster and the user.
Step 4: switch to that context.
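Before copying the file into place, you can sanity-check the result; kubectl prints the merged view with certificate data redacted:
kubectl config view --kubeconfig=kube.config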
2.5.6.5 Install the kubectl config and create the role binding
mkdir ~/.kube
cp kube.config ~/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
2.5.6.6 Check cluster status
export KUBECONFIG=$HOME/.kube/config
View cluster info:
kubectl cluster-info
View component status (re-check this as later components come up):
kubectl get componentstatuses
View resources in all namespaces:
kubectl get all --all-namespaces
2.5.6.7 Sync the kubectl config to the other master nodes
Create the directory on k8s-master2 and k8s-master3:
mkdir /root/.kube
Sync from master1:
scp /root/.kube/config k8s-master2:/root/.kube/config
scp /root/.kube/config k8s-master3:/root/.kube/config
2.5.6.8 Configure kubectl command completion (optional)
Note: if you rely on completion for a long time you tend to forget the commands themselves; install at your own discretion.
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
2.5.7 Deploy kube-controller-manager
2.5.7.1 Create the kube-controller-manager CSR file
cat > kube-controller-manager-csr.json << "EOF"
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.1.92",
"192.168.1.93",
"192.168.1.94"
],
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
EOF
Note: the CN system:kube-controller-manager identifies the user to which Kubernetes' built-in RBAC rules for the controller manager are bound.
2.5.7.2 Generate the kube-controller-manager certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
View the generated certificates:
cd /data/k8s-work/
# ls
kube-controller-manager.csr
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.pem
2.5.7.3 Create kube-controller-manager.kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.100:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
2.5.7.4 Create the kube-controller-manager configuration file
Note: --bind-address can be the host IP or 127.0.0.1; with 127.0.0.1 the same file works on every master without per-host changes.
cat > kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS=" \
--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
2.5.7.5 Create the service file
cat > kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
2.5.7.6 Sync files to the cluster master nodes
Copy the files into place on master1:
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
Sync the files to the other masters:
scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master2:/etc/kubernetes/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master3:/etc/kubernetes/
scp kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
Inspect the certificate:
openssl x509 -in /etc/kubernetes/ssl/kube-controller-manager.pem -noout -text
2.5.7.7 Start the service
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
2.5.7.8 Check component status (controller-manager should now be healthy)
[root@k8s-master1 k8s-work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager Healthy ok
etcd-1 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
etcd-0 Healthy {"health":"true","reason":""}
2.5.8 Deploy kube-scheduler
2.5.8.1 Create the kube-scheduler CSR file
cat > kube-scheduler-csr.json << "EOF"
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.1.92",
"192.168.1.93",
"192.168.1.94"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
EOF
2.5.8.2 Generate the kube-scheduler certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
View the generated certificates:
cd /data/k8s-work/
# ls
kube-scheduler.csr
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.pem
2.5.8.3 Create the kube-scheduler kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.100:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
2.5.8.4 Create the kube-scheduler service configuration file
cat > kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS=" \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
2.5.8.5 Create the service file
cat > kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
2.5.8.6 Sync files to the cluster master nodes
Copy the files into place on master1:
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/
Sync the files to the other masters:
scp kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig kube-scheduler.conf k8s-master2:/etc/kubernetes/
scp kube-scheduler.kubeconfig kube-scheduler.conf k8s-master3:/etc/kubernetes/
scp kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
scp kube-scheduler.service k8s-master3:/usr/lib/systemd/system/
2.5.8.7 Start the service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
2.5.8.8 Check component status (scheduler should now be healthy)
[root@k8s-master1 k8s-work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
scheduler Healthy ok
etcd-2 Healthy {"health":"true","reason":""}
2.5.9 Worker Node Deployment
2.5.9.1 Install and configure containerd
2.5.9.1.1 Obtain the package
Download page: https://github.com/
Choose the cri-containerd-cni-1.6.12-linux-amd64.tar.gz variant: it bundles containerd, the CRI integration, and the CNI plugins.
2.5.9.1.2 Install containerd
Note: here containerd is installed on all masters and the worker (the masters also run pods); adjust to your needs.
(1) Download the package:
wget https://github.com/containerd/containerd/releases/download/v1.6.12/cri-containerd-cni-1.6.12-linux-amd64.tar.gz
If the download fails, the package is also available on Baidu Netdisk:
Link: https://pan.baidu.com/s/1Ma28Aanwx5nuhGvqptUsWQ?pwd=ixok
Extraction code: ixok
(2) Upload the tarball to master1 manually, then sync it to the other three nodes (master2, master3, worker1):
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-master2:/root/
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-master3:/root/
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-worker1:/root/
(3) Extract on all four hosts:
tar -xf cri-containerd-cni-1.6.12-linux-amd64.tar.gz -C /
Note: extracting with -C / lays the files out directly under /usr/local, /etc, and /opt (including the CNI plugins and the containerd systemd unit).
2.5.9.1.3 Generate and modify the configuration file
Create the directory:
mkdir /etc/containerd
Generate the default containerd configuration:
containerd config default >/etc/containerd/config.toml
Check:
# ls /etc/containerd/
config.toml
The full config file below already contains this change; run the sed only if you are editing the default config instead:
sed -i 's@systemd_cgroup = false@systemd_cgroup = true@' /etc/containerd/config.toml
Likewise for the pause image:
sed -i 's@k8s.gcr.io/pause:3.6@registry.aliyuncs.com/google_containers/pause:3.6@' /etc/containerd/config.toml
Note: the generated default needs many later changes and lacks registry mirrors, so the config below (usable both standalone and with k8s, with mirror acceleration configured) is used instead; adopt it or not as you prefer.
cat >/etc/containerd/config.toml<<EOF
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = -999

[grpc]
address = "/run/containerd/containerd.sock"
uid = 0
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216

[debug]
address = ""
uid = 0
gid = 0
level = ""

[metrics]
address = ""
grpc_histogram = false

[cgroup]
path = ""

[plugins]
[plugins.cgroups]
no_prometheus = false
[plugins.cri]
stream_server_address = "127.0.0.1"
stream_server_port = "0"
enable_selinux = false
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
stats_collect_period = 10
systemd_cgroup = true
enable_tls_streaming = false
max_container_log_line_size = 16384
[plugins.cri.containerd]
snapshotter = "overlayfs"
no_pivot = false
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = ""
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = ""
runtime_engine = ""
runtime_root = ""
[plugins.cri.cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = "/etc/cni/net.d/10-default.conf"
[plugins.cri.registry]
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."docker.io"]
endpoint = [
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
]
[plugins.cri.registry.mirrors."gcr.io"]
endpoint = [
"https://gcr.mirrors.ustc.edu.cn"
]
[plugins.cri.registry.mirrors."k8s.gcr.io"]
endpoint = [
"https://gcr.mirrors.ustc.edu.cn/google-containers/"
]
[plugins.cri.registry.mirrors."quay.io"]
endpoint = [
"https://quay.mirrors.ustc.edu.cn"
]
[plugins.cri.registry.mirrors."harbor.kubemsb.com"]
endpoint = [
"http://harbor.kubemsb.com"
]
[plugins.cri.x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins.diff-service]
default = ["walking"]
[plugins.linux]
shim = "containerd-shim"
runtime = "runc"
runtime_root = ""
no_shim = false
shim_debug = false
[plugins.opt]
path = "/opt/containerd"
[plugins.restart]
interval = "10s"
[plugins.scheduler]
pause_threshold = 0.02
deletion_threshold = 0
mutation_threshold = 100
schedule_delay = "0s"
startup_delay = "100ms"
EOF
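To confirm the two key settings landed in the final file, grep for them; the output should show systemd_cgroup = true and the aliyuncs pause image:
grep -E 'systemd_cgroup|sandbox_image' /etc/containerd/config.toml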
2.5.9.1.4 Install runc
Notes:
- The runc bundled in the tarball above has too many system dependencies, so installing it separately is recommended.
- The bundled runc fails at startup with: runc: symbol lookup error: runc: undefined symbol: seccomp_notify_respond
Obtain runc
Download page: https://github.com/
(1) Download the package:
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
If the download fails, the package is also available on Baidu Netdisk:
Link: https://pan.baidu.com/s/1Ma28Aanwx5nuhGvqptUsWQ?pwd=ixok
Extraction code: ixok
(2) Make runc executable:
chmod +x runc.amd64
(3) Replace the runc shipped in the tarball:
mv runc.amd64 /usr/local/sbin/runc
(4) Sync runc to the other three nodes (master2, master3, worker1):
scp /usr/local/sbin/runc k8s-master2:/usr/local/sbin/runc
scp /usr/local/sbin/runc k8s-master3:/usr/local/sbin/runc
scp /usr/local/sbin/runc k8s-worker1:/usr/local/sbin/runc
(5) Verify:
[root@k8s-master1 k8s-work]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.17.10
libseccomp: 2.5.4
(6) Start containerd:
systemctl enable containerd
systemctl start containerd
systemctl status containerd
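As a quick sanity check that the daemon is answering on its socket, query the server version with the bundled ctr client:
ctr version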
2.5.9.2 Deploy kubelet
Note: run on k8s-master1
2.5.9.2.1 Create kubelet-bootstrap.kubeconfig
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.100:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role bindings used during bootstrapping:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Verify (describe):
kubectl describe clusterrolebinding cluster-system-anonymous
kubectl describe clusterrolebinding kubelet-bootstrap
2.5.9.2.2 Create the kubelet configuration file
Note: the address field below is the node's own IP; change it accordingly when the file is synced to the other nodes.
cat > kubelet.json << "EOF"
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "192.168.1.92",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.96.0.2"]
}
EOF
2.5.9.2.3 Create the kubelet service file
Notes:
- After: this unit starts after the listed unit.
- Requires: hard dependency on the listed unit.
- The container runtime here is containerd; if you use Docker, set both entries to docker.service.
- registry.aliyuncs.com/google_containers/pause:3.2 or 3.6 both work.
cat > kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--container-runtime=remote \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--rotate-certificates \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
--root-dir=/etc/cni/net.d \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
View the generated files:
cd /data/k8s-work/
ls
kubelet-bootstrap.kubeconfig
kubelet.json
kubelet.service
2.5.9.2.4 Sync the files to the cluster nodes
Copy the files into place on master1:
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/
Create the ssl directory on the worker node (the masters already have it):
mkdir -p /etc/kubernetes/ssl
Sync the files to the other three nodes (remember to adjust the address in kubelet.json per node):
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
for i in k8s-master2 k8s-master3 k8s-worker1;do scp ca.pem $i:/etc/kubernetes/ssl/;done
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kubelet.service $i:/usr/lib/systemd/system/;done
2.5.9.2.5 Create directories and start the service
Note: create these directories on all nodes, otherwise kubelet will not start!
mkdir -p /var/lib/kubelet
mkdir -p /var/log/kubernetes
Start the service:
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
Check node status:
[root@k8s-master1 k8s-work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 13m v1.25.5
k8s-master2 Ready <none> 18s v1.25.5
k8s-master3 Ready <none> 12s v1.25.5
k8s-worker1 Ready <none> 10s v1.25.5
Check the bootstrap CSRs:
[root@k8s-master1 k8s-work]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-24h9v 2m16s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
csr-rhpgr 15m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
csr-rq2xx 2m8s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
csr-xsrd7 2m10s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
Note: the CSRs were approved and issued automatically through the bootstrap token mechanism.
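If a CSR ever shows Pending instead of Approved, it can be approved by hand (csr-24h9v here is just an example name taken from the output above):
kubectl certificate approve csr-24h9v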
2.5.9.3 Deploy kube-proxy
2.5.9.3.1 Create the kube-proxy CSR file
cat > kube-proxy-csr.json << "EOF"
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
]
}
EOF
2.5.9.3.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
View the generated certificates:
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
2.5.9.3.3 Create the kubeconfig file
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.100:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
2.5.9.3.4 Create the service configuration file
Note: bindAddress, healthzBindAddress, and metricsBindAddress below are the node's own IP; change them per node after syncing.
cat > kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.1.92
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 192.168.1.92:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.1.92:10249
mode: "ipvs"
EOF
2.5.9.3.5 Create the service file
cat > kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
2.5.9.3.6 Sync the files to the worker nodes
Copy the files into place on master1:
cp kube-proxy*.pem /etc/kubernetes/ssl/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/
Sync the files to the other three nodes (master2, master3, worker1):
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kube-proxy.service $i:/usr/lib/systemd/system/;done
2.5.9.3.7 Start the service
Create the working directory (on all four hosts):
mkdir -p /var/lib/kube-proxy
Start the service:
systemctl daemon-reload
systemctl enable --now kube-proxy
systemctl status kube-proxy
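Because kube-proxy runs in ipvs mode, you can confirm it is programming virtual servers; the listing should include the apiserver Service at 10.96.0.1:443:
ipvsadm -Ln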
2.5.10 Deploy the Calico Network Component
2.5.10.1 Download the manifest
curl https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml -O
2.5.10.2 Edit the manifest
Uncomment and set CALICO_IPV4POOL_CIDR (around lines 4470-4471) to the pod CIDR:
4470 - name: CALICO_IPV4POOL_CIDR
4471 value: "10.244.0.0/16"
2.5.10.3 Apply the manifest
kubectl apply -f calico.yaml
2.5.10.4 Verify the result
[root@k8s-master1 k8s-work]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-d8b9b6478-g26fc 1/1 Running 2 (31s ago) 3m2s 10.244.194.64 k8s-worker1 <none> <none>
kube-system calico-node-7hx5p 1/1 Running 0 3m2s 192.168.1.93 k8s-master2 <none> <none>
kube-system calico-node-crd6m 1/1 Running 0 3m2s 192.168.1.96 k8s-worker1 <none> <none>
kube-system calico-node-jnvs7 1/1 Running 0 3m2s 192.168.1.94 k8s-master3 <none> <none>
kube-system calico-node-k8qth 1/1 Running 0 3m2s 192.168.1.92 k8s-master1 <none> <none>
Cluster status:
[root@k8s-master1 k8s-work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 84m v1.25.5
k8s-master2 Ready <none> 71m v1.25.5
k8s-master3 Ready <none> 70m v1.25.5
k8s-worker1 Ready <none> 70m v1.25.5
2.5.11 Deploy CoreDNS
cat > coredns.yaml << "EOF"
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: coredns/coredns:1.8.4
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.96.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
    protocol: TCP
EOF
Install:
kubectl apply -f coredns.yaml
Check status:
[root@k8s-master1 k8s-work]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-d8b9b6478-g26fc 1/1 Running 2 (10m ago) 13m 10.244.194.64 k8s-worker1 <none> <none>
calico-node-7hx5p 1/1 Running 0 13m 192.168.1.93 k8s-master2 <none> <none>
calico-node-crd6m 1/1 Running 0 13m 192.168.1.96 k8s-worker1 <none> <none>
calico-node-jnvs7 1/1 Running 0 13m 192.168.1.94 k8s-master3 <none> <none>
calico-node-k8qth 1/1 Running 0 13m 192.168.1.92 k8s-master1 <none> <none>
coredns-564fd8c776-h7f4q 1/1 Running 0 29s 10.244.159.129 k8s-master1 <none> <none>
Verification test:
dig -t a www.baidu.com @10.96.0.2
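You can also test DNS from inside the cluster with a throwaway pod (busybox:1.28 is a common choice whose nslookup behaves well):
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default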
2.5.12 Deploy a Test Application
cat > nginx.yaml << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-web
spec:
replicas: 2
selector:
name: nginx
template:
metadata:
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx:1.19.6
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service-nodeport
spec:
ports:
- port: 80
targetPort: 80
nodePort: 30001
protocol: TCP
type: NodePort
selector:
name: nginx
EOF
(1) Deploy:
kubectl apply -f nginx.yaml
(2) Check pod status:
[root@k8s-master1 k8s-work]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-web-cxgtm 1/1 Running 0 2m16s 10.244.135.193 k8s-master3 <none> <none>
nginx-web-nd8hg 1/1 Running 0 2m16s 10.244.224.1 k8s-master2 <none> <none>
(3) View all resources in the default namespace:
[root@k8s-master1 k8s-work]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-web-cxgtm 1/1 Running 0 11m
pod/nginx-web-nd8hg 1/1 Running 0 11m

NAME DESIRED CURRENT READY AGE
replicationcontroller/nginx-web 2 2 2 11m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30h
service/nginx-service-nodeport NodePort 10.96.216.41 <none> 80:30001/TCP 11m
(4) Check the node port:
ss -anput | grep "30001"
Access test:
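For example, hitting the NodePort on any node IP (192.168.1.92 is used here) should return the Nginx welcome page:
curl http://192.168.1.92:30001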
The cluster deployment is now complete!