Introduction

There are currently two main ways to deploy a production Kubernetes cluster:

kubeadm

Kubeadm is a K8s deployment tool that provides the kubeadm init and kubeadm join commands for standing up a Kubernetes cluster quickly.

Binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

Kubeadm lowers the deployment barrier, but it hides many details, which makes problems hard to troubleshoot. If you want more control, deploying Kubernetes from binary packages is recommended: manual deployment is more work, but along the way you learn a lot about how the components actually operate, which also helps with later maintenance.

Deploying K8s from binaries

Software list
CentOS 7.3
cni-plugins-linux-amd64-v0.8.6.tgz
etcd-v3.4.9-linux-amd64.tar.gz
kube-flannel.yml
kubernetes-server-linux-amd64.tar.gz

Role IP Components
master 192.168.0.121 kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node1 192.168.0.123 kubelet, kube-proxy, docker, etcd
node2 192.168.0.124 kubelet, kube-proxy, docker, etcd

Initializing the environment

# Initialization
init_security() {
systemctl stop firewalld
systemctl disable firewalld &>/dev/null
setenforce 0
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
sed -i '/^GSSAPIAu/ s/yes/no/' /etc/ssh/sshd_config
sed -i '/^#UseDNS/ {s/^#//;s/yes/no/}' /etc/ssh/sshd_config
systemctl enable sshd crond &> /dev/null
rpm -e postfix --nodeps
echo -e "\033[32m [Security setup] ==> OK \033[0m"
}
init_security

init_yumsource() {
if [ ! -d /etc/yum.repos.d/backup ];then
    mkdir /etc/yum.repos.d/backup
fi
mv /etc/yum.repos.d/* /etc/yum.repos.d/backup 2>/dev/null
if ! ping -c2 www.baidu.com &>/dev/null
then
    echo "No Internet access; cannot configure yum repositories"
    exit
fi
curl -o /etc/yum.repos.d/163.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo &>/dev/null
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &>/dev/null
yum clean all
timedatectl set-timezone Asia/Shanghai
echo "nameserver 114.114.114.114" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
chattr +i /etc/resolv.conf
yum -y install ntpdate
ntpdate -b ntp1.aliyun.com # time sync is important (TLS certificates are sensitive to clock skew)
echo -e "\033[32m [YUM Source] ==> OK \033[0m"
}
init_yumsource

# Turn off the swap partition
swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab:
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Configure hostname resolution
tail -3 /etc/hosts
192.168.0.121 master
192.168.0.123 node1
192.168.0.124 node2

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings

# Upgrade the kernel (optional; newer kernels just perform better)
wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm
rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm
reboot
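
After the reboot, it is worth confirming that the settings took effect. A minimal check (assuming the bridge-nf-call sysctls are backed by the br_netfilter module, which may need loading again on a fresh kernel):

modprobe br_netfilter                      # ensure the bridge netfilter module is loaded
sysctl net.bridge.bridge-nf-call-iptables  # expect: net.bridge.bridge-nf-call-iptables = 1
uname -r                                   # should report the upgraded kernel if you installed it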

Deploying the etcd cluster

Etcd is a distributed key-value store that Kubernetes uses for all of its data, so we prepare the etcd database first. To avoid a single point of failure, deploy etcd as a cluster: with 3 members the cluster tolerates 1 machine failure, and with 5 members it tolerates 2 (in general, n members tolerate floor((n-1)/2) failures). Here we use 3 machines.

Node name IP
etcd-1 192.168.0.121 (master)
etcd-2 192.168.0.123 (node1)
etcd-3 192.168.0.124 (node2)

Note: to save machines, etcd is co-located with the K8s nodes here. It can also be deployed separately from the K8s cluster, as long as the apiserver can reach it.

Preparing the cfssl certificate tool

cfssl is an open-source certificate management tool that generates certificates from JSON files, which is more convenient to use than openssl.

Run the following on any one server; here we use the master node.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Generating etcd certificates
Create a working directory
mkdir -p ~/TLS/{etcd,k8s}

cd ~/TLS/etcd
Self-signed CA
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem
Issue the etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file

cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.0.121",
    "192.168.0.123",
    "192.168.0.124"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

Note: the IPs in the hosts field above must include the internal communication IP of every etcd node; not one can be missing! To make later expansion easier, you can list a few spare IPs in advance.

Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem server.pem
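
Before moving on, you can double-check that the SANs made it into the certificate with the cfssl-certinfo tool installed earlier; a quick check:

cfssl-certinfo -cert server.pem   # the "sans" field should list all three etcd node IPs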
Download the etcd binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
Create the working directory and unpack the binary package
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

# Configure etcd
cat /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.121:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.121:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.121:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.121:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.121:2380,etcd-2=https://192.168.0.123:2380,etcd-3=https://192.168.0.124:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# ETCD_NAME: node name, unique within the cluster
# ETCD_DATA_DIR: data directory
# ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
# ETCD_LISTEN_CLIENT_URLS: listen address for client access
# ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
# ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
# ETCD_INITIAL_CLUSTER: addresses of all cluster members
# ETCD_INITIAL_CLUSTER_TOKEN: cluster token
# ETCD_INITIAL_CLUSTER_STATE: the state to join with; "new" for a new cluster, "existing" to join an existing cluster
Managing etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/server.pem \\
--key-file=/opt/etcd/ssl/server-key.pem \\
--peer-cert-file=/opt/etcd/ssl/server.pem \\
--peer-key-file=/opt/etcd/ssl/server-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Copy the freshly generated certificates into place, then copy everything to the other two nodes
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

scp -r /opt/etcd/ node1:/opt/
scp -r /opt/etcd/ node2:/opt/
scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system/
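
The next step edits each node's etcd.conf by hand. Alternatively, a small loop like the sketch below could do the rewrite remotely; it assumes the hostname/IP layout from the table above and passwordless ssh to both nodes:

for pair in "node1:etcd-2:192.168.0.123" "node2:etcd-3:192.168.0.124"; do
    host=${pair%%:*}; rest=${pair#*:}; name=${rest%%:*}; ip=${rest#*:}
    # rewrite the node name and the four *_URLS addresses; ETCD_INITIAL_CLUSTER stays identical on every node
    ssh $host "sed -i -e '/^ETCD_NAME=/s/etcd-1/$name/' -e '/_URLS=/s/192.168.0.121/$ip/' /opt/etcd/cfg/etcd.conf"
done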
Modify etcd.conf on node1 and node2
# node1 (etcd-2)
cat /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.123:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.123:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.123:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.123:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.121:2380,etcd-2=https://192.168.0.123:2380,etcd-3=https://192.168.0.124:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# node2 (etcd-3)
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.124:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.124:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.124:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.124:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.121:2380,etcd-2=https://192.168.0.123:2380,etcd-3=https://192.168.0.124:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the service on each node and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
Verify the etcd cluster status
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.121:2379,https://192.168.0.123:2379,https://192.168.0.124:2379" endpoint health

https://192.168.0.124:2379 is healthy: successfully committed proposal: took = 13.213712ms
https://192.168.0.121:2379 is healthy: successfully committed proposal: took = 12.907787ms
https://192.168.0.123:2379 is healthy: successfully committed proposal: took = 12.168703ms

# If you see the output above, the cluster was deployed successfully. If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd
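
You can also check member status and see which node is currently the leader, using the same TLS flags:

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.121:2379,https://192.168.0.123:2379,https://192.168.0.124:2379" endpoint status --write-out=table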

Installing Docker

Download and install Docker
yum install -y yum-utils # provides yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-19.03.9-3.el7
Configure a Docker registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
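
To confirm the daemon actually picked up the mirror configured above, a quick check:

docker info | grep -A1 "Registry Mirrors"   # should print the aliyuncs mirror URL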

Deploying the Master Node

Generating the kube-apiserver certificate

1. Self-signed certificate authority (CA); work in the ~/TLS/k8s directory created earlier

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem
Issue the kube-apiserver HTTPS certificate with the self-signed CA
cat /root/TLS/k8s/server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.0.121",
    "192.168.0.123",
    "192.168.0.124",
    "192.168.0.125",
    "192.168.0.100",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# The IPs in the hosts field above must cover every Master/LB/VIP IP; not one can be missing! To make later expansion easier, you can list a few spare IPs in advance.

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem server.pem
Download and unpack the binary package
wget https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
Deploying kube-apiserver
cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.0.121:2379,https://192.168.0.123:2379,https://192.168.0.124:2379 \
--bind-address=192.168.0.121 \
--secure-port=6443 \
--advertise-address=192.168.0.121 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

# --logtostderr: log to files rather than stderr (false)
# --v: log verbosity level
# --log-dir: log directory
# --etcd-servers: etcd cluster addresses
# --bind-address: listen address
# --secure-port: HTTPS port
# --advertise-address: address advertised to the cluster
# --allow-privileged: allow privileged containers
# --service-cluster-ip-range: virtual IP range for Services
# --enable-admission-plugins: admission control plugins
# --authorization-mode: authorization modes; enables RBAC and Node self-management
# --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
# --token-auth-file: bootstrap token file
# --service-node-port-range: default port range for NodePort Services
# --kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
# --tls-xxx-file: apiserver HTTPS certificate
# --etcd-xxxfile: certificates for connecting to the etcd cluster
# --audit-log-xxx: audit log settings
Copy the freshly generated certificates into place
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
Enabling the TLS bootstrapping mechanism

TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on each Node must present valid CA-signed certificates to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand takes a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet connects to the apiserver as a low-privileged user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended on Nodes. It currently applies mainly to the kubelet; for kube-proxy we still issue a single certificate ourselves.

TLS bootstrapping workflow (diagram omitted)

Create the token file referenced in the config above

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

# Format: token,username,UID,user group

You can also generate a token yourself and substitute it:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
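
As a sketch of wiring a fresh token in (the sed assumes the single-line token.csv format shown above):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
sed -i "s/^[^,]*/${TOKEN}/" /opt/kubernetes/cfg/token.csv   # replace the first field (the token)
cat /opt/kubernetes/cfg/token.csv                           # verify; reuse this token later in bootstrap.kubeconfig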
Managing the apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start the apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Deploying kube-controller-manager

Create the config file
cat /opt/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

# --master: connect to the apiserver over the local insecure port 8080
# --leader-elect: leader election when running multiple replicas of this component (HA)
# --cluster-signing-cert-file/--cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's
Managing controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploying kube-scheduler

Create the config file
cat /opt/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0.1:8080 --bind-address=127.0.0.1"

# --master: connect to the apiserver over the local insecure port 8080
# --leader-elect: leader election when running multiple replicas of this component (HA)
Managing the scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start the scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
Check the cluster status
# All components are now running; use kubectl to view the current component status:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}

Deploying the Worker Nodes

The following steps are still performed on the master node, which doubles as a worker node here.

Create the working directory and copy the binaries

Create the working directory on all worker nodes

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin # local copy; note these steps still run on the master node

Deploying the kubelet

Create the config file
cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

# --hostname-override: display name, unique within the cluster
# --network-plugin: enable CNI
# --kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
# --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
# --config: parameter config file
# --cert-dir: directory where kubelet certificates are generated
# --pod-infra-container-image: image for the container that manages the Pod network
Create the parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Generate the bootstrap.kubeconfig file
KUBE_APISERVER="https://192.168.0.121:6443" # apiserver IP:PORT

TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv

# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Copy to the config path
cp bootstrap.kubeconfig /opt/kubernetes/cfg
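
Optionally sanity-check the generated file; the cluster server URL and the kubelet-bootstrap user should both appear (embedded certificate data is shown redacted):

kubectl config view --kubeconfig=bootstrap.kubeconfig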
Managing the kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Start the kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
Approve the kubelet certificate request and join the cluster
# View the kubelet certificate request
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A 6m3s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

# Approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# View the node
kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady <none> 123m v1.18.4
# The node is NotReady because the network plugin has not been deployed yet
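
After approval, the kubelet writes its client certificate into the --cert-dir directory automatically. Assuming the kubelet's usual file naming (kubelet-client-current.pem is a symlink to the issued certificate), a quick inspection could be:

ls /opt/kubernetes/ssl/kubelet*
openssl x509 -in /opt/kubernetes/ssl/kubelet-client-current.pem -noout -subject -dates   # subject should be system:node:master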

Deploying kube-proxy

Create the config file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
Create the parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: master
clusterCIDR: 10.244.0.0/16   # pod network CIDR (matches --cluster-cidr on kube-controller-manager)
EOF
Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem kube-proxy.pem
Generate the kubeconfig file
KUBE_APISERVER="https://192.168.0.121:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy the config file to its destination
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
Managing kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Deploying the CNI network

Prepare the binaries
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

# Unpack the binaries and move them to the default working directory
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# Deploy the CNI network
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml # the default image registry is unreachable here, so switch to a Docker Hub mirror

kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-2pc95 1/1 Running 0 72s

kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready <none> 41m v1.18.4
# With the network plugin deployed, the Node is Ready
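
A quick way to confirm flannel is working (assuming its default VXLAN backend) is to look for the overlay interface and the subnet lease it records on each node:

ip -d link show flannel.1    # the flannel VXLAN interface should exist
cat /run/flannel/subnet.env  # the pod subnet leased to this node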
Authorize the apiserver to access the kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
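
To verify the binding took effect, kubectl can impersonate the kubernetes user bound above; once this authorization is in place, commands like kubectl logs and kubectl exec work against pods:

kubectl auth can-i get nodes/proxy --as kubernetes   # expect: yes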
Adding a new Worker Node

Copy the Node files already deployed to the new node

scp -r /opt/kubernetes/ node1:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service node1:/usr/lib/systemd/system
scp -r /opt/cni/ node1:/opt/
scp /opt/kubernetes/ssl/ca.pem node1:/opt/kubernetes/ssl
Delete the kubelet certificate and kubeconfig files
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
# These files were generated automatically when the certificate request was approved; they differ per Node, so they must be deleted and regenerated.
Modify the hostname overrides and enable the services at boot
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node1

# Start the services and enable them at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
Approve the Node's kubelet certificate request on the master
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro 89s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready <none> 138m v1.18.4
node1 Ready <none> 120m v1.18.4
node2 Ready <none> 112m v1.18.4

Deploying the Dashboard

Download the dashboard YAML file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Modify the YAML to expose the port for external access

# By default the Dashboard is only reachable from inside the cluster, so change the Service type to NodePort to expose it externally:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort         # change this
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001    # change this
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml

kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-694557449d-69x7g 1/1 Running 0 111m
pod/kubernetes-dashboard-9774cc786-kwgkt 1/1 Running 0 111m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.0.0.3 <none> 8000/TCP 111m
service/kubernetes-dashboard NodePort 10.0.0.122 <none> 443:30001/TCP 111m
Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

# Now browse to https://<node ip>:30001
# Paste the token extracted above to log in to the Dashboard

# We can run an Nginx pod to test that the cluster works
kubectl run --generator=run-pod/v1 nginx-test2 --image=daocloud.io/library/nginx --port=80 --replicas=1

kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-test2 1/1 Running 0 89m 10.244.2.2 node2 <none> <none>

# The pod IP is reachable from the cluster nodes
[root@node1 ~]# curl -I -s 10.244.2.2 | grep 200
HTTP/1.1 200 OK
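
To reach the test pod from outside the cluster as well, one could expose it as a NodePort Service; a sketch, with an illustrative service name:

kubectl expose pod nginx-test2 --name=nginx-test2-svc --port=80 --type=NodePort
kubectl get svc nginx-test2-svc   # note the mapped NodePort, then browse to http://<node ip>:<that port>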

This article draws on the WeChat public account DevOps技术栈, by 阿良.
