Shell Script Implementation----Kubernetes Single-Cluster Binary Deployment
There are three ways to set up a Kubernetes cluster environment:
1. Installation with Minikube
Minikube is a tool that quickly runs a single-node Kubernetes locally, aimed at users who want to try out Kubernetes or use it for day-to-day development. It is suitable only for learning and test deployments, not for production.
2. Installation with kubeadm
kubeadm is the official Kubernetes tool for quickly installing and initializing a cluster that follows best practices; it provides kubeadm init and kubeadm join for rapid cluster deployment. At the time of writing, kubeadm features were still in beta/alpha. Studying this deployment method is a good way to absorb the design and ideas behind the officially recommended best practices, but it is still rarely used in large production environments.
3. Installation from binary packages (recommended for production)
Download the release binary packages from the official sites and deploy each component by hand to assemble the cluster. This approach matches the standard for enterprise production Kubernetes environments and is suitable for production deployment.
# wget https://dl.k8s.io/v1.16.1/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
| IP | Hostname | Role | Installed components |
| --- | --- | --- | --- |
| 192.168.10.10 | k8s-master | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl |
| 192.168.10.11 | k8s-node1 | node1 | kubelet, kube-proxy, docker, flannel, etcd |
| 192.168.10.12 | k8s-node2 | node2 | kubelet, kube-proxy, docker, flannel, etcd |
1. Environment preparation (all machines)
1.1 Disable the firewall and SELinux (omitted)
1.2 Set up mutual name resolution (omitted); push the master's SSH public key to the nodes
1.3 Configure cluster time synchronization (omitted)
1.4 Upgrade the kernel (not needed on CentOS 7.6 or later)
1.5 Disable swap
1.6 Configure kernel parameters
1.7 Reboot (only required if the kernel was upgraded)
1.8 Verify
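The omitted steps are routine; a minimal sketch of what they typically look like on CentOS 7 (an illustration, not part of the original scripts) is:

# Preparation sketch -- assumes CentOS 7; run on all three machines.
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Disable swap now and at boot; kubelet refuses to start with swap enabled by default.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Kernel parameters needed for bridged pod traffic.
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system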
2. Write the scripts and upload the binary packages
#!/bin/bash
#author:sunli
#mail:<1916989848@qq.com>
k8s_master=192.168.10.10
k8s_node1=192.168.10.11
k8s_node2=192.168.10.12
sh ansible_docker.sh $k8s_node1 $k8s_node2 || { echo "docker install error"; exit 1; }
sh etcd_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "etcd install error"; exit 1; }
sh flannel_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "flannel install error"; exit 1; }
sh master.sh $k8s_master || { echo "master install error"; exit 1; }
sh node.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "node install error"; exit 1; }
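With the five sub-scripts and the downloaded tarballs in the same directory on the master, the whole chain runs with one command (assuming the wrapper above is saved as deploy.sh -- the original does not name the file):

sh deploy.sh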
#!/bin/bash
# ansible_docker.sh
[ ! -x /usr/bin/ansible ] && yum -y install ansible
cat >> /etc/ansible/hosts << EOF
[docker]
$1
$2
EOF
ansible docker -m script -a 'creates=/root/docker_install.sh /root/docker_install.sh'
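The /root/docker_install.sh pushed to the nodes is not listed in this article; a plausible minimal version (hypothetical, assuming CentOS 7 and a docker-ce yum mirror) would be:

#!/bin/bash
# Hypothetical docker_install.sh -- not shown in the original.
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker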
#!/bin/bash
# CA.sh
#author:sunli
#mail:<1916989848@qq.com>
#description: build a self-hosted CA with cfssl/cfssljson, generating ca-key.pem (private key), ca.pem (certificate), and ca.csr (certificate signing request)
CFSSL() {
# Download the three cfssl binaries, make them executable, and install them as system commands
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl* /usr/local/bin/
chmod +x /usr/local/bin/cfssl*
}
# Install cfssl if the commands are not yet present
which cfssljson_linux-amd64 >/dev/null 2>&1 || CFSSL
# The service name (etcd or kubernetes) is passed in as $1
service=$1
[ ! -d /etc/$service/ssl ] && mkdir -p /etc/$service/ssl
CA_DIR=/etc/$service/ssl
# CA configuration file
CA_CONFIG() {
cat >> $CA_DIR/ca-config.json <<- EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"$service": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
}
# CA certificate signing request
CA_CSR() {
cat >> $CA_DIR/ca-csr.json <<- EOF
{
"CN": "$service",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
# Server certificate signing request for the CA to issue
SERVER_CSR() {
host1=192.168.10.10
host2=192.168.10.11
host3=192.168.10.12
host4=192.168.10.13
host5=192.168.10.14
cat >> $CA_DIR/server-csr.json <<- EOF
{
"CN": "$service",
"hosts": [
"127.0.0.1",
"$host1",
"$host2",
"$host3",
"$host4",
"$host5"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
SERVER_CSR1() {
host1=192.168.10.10
host2=192.168.10.20
host3=192.168.10.30
host4=192.168.10.40
cat >> $CA_DIR/server-csr.json <<- EOF
{
"CN": "$service",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"$host1",
"$host2",
"$host3",
"$host4",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
CA_CONFIG && CA_CSR
[ "$service" == "kubernetes" ] && SERVER_CSR1 || SERVER_CSR
# Generate the CA files ca-key.pem (private key), ca.pem (certificate), and ca.csr (signing request, usable for cross-signing or re-signing)
cd $CA_DIR/
cfssl_linux-amd64 gencert -initca ca-csr.json | cfssljson_linux-amd64 -bare ca
# Issue the server certificate
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=$service server-csr.json | cfssljson_linux-amd64 -bare server
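As an optional sanity check (not part of CA.sh), cfssl-certinfo can decode the issued certificate; the hosts list in its output should match the server-csr.json that was used:

# Inspect the server certificate issued for, e.g., the etcd service:
cfssl-certinfo_linux-amd64 -cert /etc/etcd/ssl/server.pem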
#!/bin/bash
# etcd_install.sh
# Unpack the etcd binary package and install the binaries as system commands
etcd_01=$1
etcd_02=$2
etcd_03=$3
sh CA.sh etcd || { echo "etcd CA not built"; exit 1; }
# Copy the binary package into the current directory in advance
dir=./
pkgname=etcd-v3.3.13-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit 1
tar xf $dir/$pkgname.tar.gz
cp -p $dir/$pkgname/etc* /usr/local/bin/
# Create the etcd configuration file
ETCD_CONFIG() {
cat >> /etc/etcd/etcd.conf <<- EOF
#[Member]
ETCD_NAME="etcd-01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$etcd_01:2380"
ETCD_LISTEN_CLIENT_URLS="https://$etcd_01:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$etcd_01:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$etcd_01:2379"
ETCD_INITIAL_CLUSTER="etcd-01=https://$etcd_01:2380,etcd-02=https://$etcd_02:2380,etcd-03=https://$etcd_03:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
# Create the etcd systemd unit file
ETCD_SERVICE() {
cat >> /usr/lib/systemd/system/etcd.service <<- EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
}
ETCD_CONFIG && ETCD_SERVICE
# Copy the configuration from the master to the nodes (SSH trust and name resolution must already be in place)
scp /usr/local/bin/etcd* $etcd_02:/usr/local/bin/
scp -r /etc/etcd/ $etcd_02:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_02:/usr/lib/systemd/system/
scp /usr/local/bin/etcd* $etcd_03:/usr/local/bin/
scp -r /etc/etcd/ $etcd_03:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_03:/usr/lib/systemd/system/
# Install ansible and write the node IPs into the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[etcd]" >> /etc/ansible/hosts
echo "$etcd_02" >> /etc/ansible/hosts
echo "$etcd_03" >> /etc/ansible/hosts #修改etcd-02的etcd.conf
cat >> /tmp/etcd-02.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-02\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_02#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_02 -m script -a 'creates=/tmp/etcd-02.sh /tmp/etcd-02.sh'
# Adjust etcd.conf on etcd-03
#ansible $etcd_03 -m lineinfile -a "dest=/etc/etcd/etcd.conf regexp='ETCD_NAME=\"etcd-01\"' line='ETCD_NAME=\"etcd-03\"' backrefs=yes"
cat >> /tmp/etcd-03.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-03\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_03#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_03 -m script -a 'creates=/tmp/etcd-03.sh /tmp/etcd-03.sh'
# Start etcd on etcd-02 and etcd-03
ansible etcd -m service -a "name=etcd state=started enabled=yes"
# Start etcd on etcd-01
systemctl enable etcd
systemctl start etcd
# Convenience alias for etcdctl with the TLS flags baked in
cat >> /etc/profile.d/alias_etcd.sh <<- EOF
alias etcdctld='etcdctl --cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--ca-file=/etc/etcd/ssl/ca.pem \
--endpoint=https://$etcd_01:2379,https://$etcd_02:2379,https://$etcd_03:2379'
EOF
source /etc/profile.d/alias_etcd.sh
# Copy the alias file to the nodes
scp /etc/profile.d/alias_etcd.sh $etcd_02:/etc/profile.d/
scp /etc/profile.d/alias_etcd.sh $etcd_03:/etc/profile.d/
ansible etcd -m shell -a "source /etc/profile.d/alias_etcd.sh"
# Print the etcd cluster health (open a new terminal for the alias to take effect)
etcdctld cluster-health
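On a healthy three-node cluster the output has one line per member plus a summary, roughly like this (member IDs illustrative):

# member 8e9e05c52164694d is healthy: got healthy result from https://192.168.10.10:2379
# member 91bc3c398fb3c146 is healthy: got healthy result from https://192.168.10.11:2379
# member fd422379fda50e48 is healthy: got healthy result from https://192.168.10.12:2379
# cluster is healthy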
#!/bin/bash
# flannel_install.sh
flannel_01=192.168.10.10
flannel_02=192.168.10.11
flannel_03=192.168.10.12
# Copy the binary package into the current directory in advance
dir=./
pkgname=flannel-v0.11.0-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "error:no package" && exit 1
tar xf $dir/$pkgname.tar.gz
mv $dir/{flanneld,mk-docker-opts.sh} /usr/local/bin/
# Write the cluster Pod network range into etcd (run on any one etcd node)
cd /etc/etcd/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 \
set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
# Create flannel.conf
FLANNEL_CONFIG() {
cat >> /etc/flannel.conf <<- EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
# Create flanneld.service
FLANNEL_SERVICE() {
cat >> /usr/lib/systemd/system/flanneld.service <<- EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel.conf
ExecStart=/usr/local/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
# Rebuild docker.service
DOCKER_SERVICE() {
cat >> /usr/lib/systemd/system/docker.service <<- EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
}
FLANNEL_CONFIG && FLANNEL_SERVICE && DOCKER_SERVICE
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_02:/usr/local/bin/
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_03:/usr/local/bin/
scp /etc/flannel.conf $flannel_02:/etc/
scp /etc/flannel.conf $flannel_03:/etc/
scp /usr/lib/systemd/system/flanneld.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service $flannel_03:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_03:/usr/lib/systemd/system/
# Install ansible and write the node IPs into the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[flannel]" >> /etc/ansible/hosts
echo "$flannel_02" >> /etc/ansible/hosts
echo "$flannel_03" >> /etc/ansible/hosts
ansible flannel -m service -a "name=flanneld state=started daemon_reload=yes enabled=yes" && continue
ansible flannel -m service -a "name=docker state=restarted enabled=yes" && continue #别名简化
cat >> /etc/profile.d/alias_etcdf.sh <<- EOF
alias etcdctlf='cd /etc/etcd/ssl/;etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 ls /coreos.com/network/subnets'
EOF
source /etc/profile.d/alias_etcdf.sh
scp /etc/profile.d/alias_etcdf.sh $flannel_02:/etc/profile.d/
scp /etc/profile.d/alias_etcdf.sh $flannel_03:/etc/profile.d/
ansible flannel -m shell -a "source /etc/profile.d/alias_etcdf.sh"
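Once flanneld is up and docker has been restarted everywhere, each node should hold a distinct /24 lease from 10.244.0.0/16, with docker0 and flannel.1 on the same subnet. A quick manual check on any node:

# Verify the flannel lease and that docker picked it up.
cat /run/flannel/subnet.env
ip addr show flannel.1
ip addr show docker0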
#!/bin/bash
# master.sh
master=$1
# Create the Kubernetes CA certificates
sh CA.sh kubernetes
# Configure kube-apiserver
# Copy the kubernetes binary package into the current directory in advance
dir=./
pkgname=kubernetes-server-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit 1
tar xf $dir/$pkgname.tar.gz
cp -p $dir/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/
# Create the bootstrap token file
TLS=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat >> /etc/kubernetes/token.csv <<- EOF
$TLS,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Create the kube-apiserver configuration file
KUBE_API_CONF() {
cat >> /etc/kubernetes/apiserver <<- EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.10:2379,https://192.168.10.11:2379,https://192.168.10.12:2379 \
--bind-address=$master \
--secure-port=6443 \
--advertise-address=$master \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
# Create the kube-apiserver systemd unit file
KUBE_API_SERVICE() {
cat >> /usr/lib/systemd/system/kube-apiserver.service <<- EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_API_CONF && KUBE_API_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# Deploy kube-scheduler
# Create the kube-scheduler.conf configuration file
KUBE_SCH_CONF() {
cat >> /etc/kubernetes/kube-scheduler.conf <<- EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
EOF
}
# Create the kube-scheduler systemd unit file
KUBE_SCH_SERVICE() {
cat >> /usr/lib/systemd/system/kube-scheduler.service <<- EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_SCH_CONF && KUBE_SCH_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
# Deploy kube-controller-manager
# Create the kube-controller-manager.conf configuration file
KUBE_CM_CONF() {
cat >> /etc/kubernetes/kube-controller-manager.conf <<- EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOF
}
# Create the kube-controller-manager systemd unit file
KUBE_CM_SERVICE() {
cat >> /usr/lib/systemd/system/kube-controller-manager.service <<- EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_CM_CONF && KUBE_CM_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
# Check each service and the health of the master components
kubectl get cs
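If all three services started cleanly, the output should look roughly like this (formatting varies slightly across versions):

# NAME                 STATUS    MESSAGE             ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health":"true"}
# etcd-1               Healthy   {"health":"true"}
# etcd-2               Healthy   {"health":"true"}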
#!/bin/bash
# node.sh
master=$1
node1=$2
node2=$3
ssh $node1 "mkdir -p /etc/kubernetes/ssl"
ssh $node2 "mkdir -p /etc/kubernetes/ssl"
dir=./
[ ! -d kubernetes ] && echo "error:no kubernetes dir" && exit 1
# Deploy kubelet
# Create the kube-proxy certificate on the master node
kube_proxy_csr() {
cd /etc/kubernetes/ssl/
cat >> /etc/kubernetes/ssl/kube-proxy-csr.json <<- EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "kubernetes",
"OU": "System"
}
]
}
EOF
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json --profile=kubernetes kube-proxy-csr.json | cfssljson_linux-amd64 -bare kube-proxy
}
kube_proxy_csr || exit
# Create the kubelet bootstrap kubeconfig file
# Write a helper script, bs_kubeconfig.sh
cat >> /tmp/bs_kubeconfig.sh <<- EOF
#!/bin/bash
BOOTSTRAP_TOKEN=\$(awk -F "," '{print \$1}' /etc/kubernetes/token.csv)
KUBE_SSL=/etc/kubernetes/ssl
KUBE_APISERVER="https://$master:6443"
cd \$KUBE_SSL/
# 设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# 设置客户端认证参数
kubectl config set-credentials kubelet-bootstrap \
--token=\${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# 设置上下文参数
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# 设置上下文
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# 创建 kube-proxy kubeconfig 文件
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# 将 kubelet-bootstrap 用户绑定到系统集群角色
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
EOF
# Run the script
sh /tmp/bs_kubeconfig.sh
# Check with: ls /etc/kubernetes/ssl/*.kubeconfig
# Both bootstrap.kubeconfig and kube-proxy.kubeconfig should now exist there
# Copy the binaries and the two freshly generated .kubeconfig files to every node
#SHELL_FOLDER=$(cd "$(dirname "$0")";pwd)
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node1:/usr/local/bin/
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node2:/usr/local/bin/
scp /etc/kubernetes/ssl/*.kubeconfig $node1:/etc/kubernetes/
scp /etc/kubernetes/ssl/*.kubeconfig $node2:/etc/kubernetes/
# On the nodes: create the kubelet configuration file
KUBELET_CONF() {
cat >> /etc/kubernetes/kubelet.conf <<- EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet.yaml \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
}
# Create the kube-proxy.conf configuration file
KUBE_PROXY_CONF() {
cat >> /etc/kubernetes/kube-proxy.conf <<- EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
EOF
}
# Create the kubelet parameter configuration template
KUBELET_YAML() {
cat >> /etc/kubernetes/kubelet.yaml <<- EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: $master
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
EOF
}
# Create the kubelet systemd unit file
KUBELET_SERVICE() {
cat >> /usr/lib/systemd/system/kubelet.service <<- EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
}
# Create the kube-proxy systemd unit file
KUBE_PROXY_SERVICE() {
cat >> /usr/lib/systemd/system/kube-proxy.service <<- EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBELET_CONF && KUBELET_YAML && KUBELET_SERVICE && KUBE_PROXY_CONF && KUBE_PROXY_SERVICE
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node1:/etc/kubernetes/
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node2:/etc/kubernetes/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node2:/usr/lib/systemd/system/
# Adjust kubelet.conf, kubelet.yaml, and kube-proxy.conf on node1
cat >> /tmp/kubelet_conf1.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node1#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node1 -m script -a 'creates=/tmp/kubelet_conf1.sh /tmp/kubelet_conf1.sh'
# Adjust kubelet.conf, kubelet.yaml, and kube-proxy.conf on node2
cat >> /tmp/kubelet_conf2.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node2#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node2 -m script -a 'creates=/tmp/kubelet_conf2.sh /tmp/kubelet_conf2.sh'
# Install ansible and write the node IPs into the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[node]" >> /etc/ansible/hosts
echo "$node1" >> /etc/ansible/hosts
echo "$node2" >> /etc/ansible/hosts
ansible node -m service -a "name=kubelet state=started daemon_reload=yes enabled=yes" && continue
ansible node -m service -a "name=kube-proxy state=started daemon_reload=yes enabled=yes" && continue #Approve kubelet CSR请求
kubectl certificate approve `kubectl get csr|awk 'NR>1{print $1}'`
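After approval each kubelet receives its certificate, writes kubelet.kubeconfig, and registers with the API server. A final check on the master:

# CSRs should show Approved,Issued; nodes reach Ready once flannel and docker are running on them.
kubectl get csr
kubectl get nodes -o wide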