I. Script description:

In this lab the master, node, and etcd are all single instances.

Installation order: install the main components on the test1 node first, then set up the test2 node, and finally come back and join test1 to the cluster. Doing it in this order shows exactly which steps a future scale-out requires.

Lab architecture:

test1: 192.168.0.91    etcd, kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, cni, kube-proxy

test2: 192.168.0.92    docker, kubectl, kubelet, cni, kube-proxy, flannel, coredns
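The `hosts` template distributed with the config files is not reproduced in this post; given the addresses above it would presumably contain mappings along these lines (an assumption, not the actual file content):

```
192.168.0.91    test1
192.168.0.92    test2
```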

1. Create the directories on both nodes

mkdir -p /k8s/profile/

mkdir -p /server/software/k8s/

mkdir -p /root/ssl/

mkdir -p /script/

2. Define environment variables

3. Place the required files on the test1 node under /k8s/profile/ in advance:

hosts, k8s.conf, etcd.service, profile, token.py, apiserver.address, kube-apiserver.service, config, apiserver

kube-controller-manager.service, controller-manager, kube-scheduler.service, kubelet.service, kubelet, test1-kubelet-config.yml, test2-kubelet-config.yml

kube-proxy.service, test1-proxy, test2-proxy, kube-flannel.yml, coredns.yaml

Config file download: https://pan.baidu.com/s/1Lyz-xgVaPLyU-MsxWMRROg
Extraction code: 6un5

4. Place the installation packages on the test1 node under /server/software/k8s/ in advance. The required packages are:

etcd-v3.2.18-linux-amd64.tar.gz, cfssl_linux-amd64, cfssl-certinfo_linux-amd64, cfssljson_linux-amd64, kubernetes-server-linux-amd64.tar.gz, cni-plugins-amd64-v0.7.1.tgz, docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm, docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

5. Place the files needed for certificate generation on the test1 node under /root/ssl/ in advance:

ca-config.json, ca-csr.json, etcd-csr.json, admin-csr.json, kube-apiserver-csr.json, kube-controller-manager-csr.json, kube-scheduler-csr.json, kube-proxy-csr.json

Certificate file download: https://pan.baidu.com/s/1WfnR4tQjnRIq5Pt5Q15ELw
Extraction code: ker1

6. Four scripts are used; place them on the test1 node under /script/ in advance: test1_hostname.py, test2_hostname.py, k8s.py, test2.py

Script download: https://pan.baidu.com/s/1VBnLvfIfVVpy5s6msGsgmg
Extraction code: hpej

7. Set up passwordless SSH so that 192.168.0.91 can log in to 192.168.0.92 without a password.

8. Push a script to every node to install python and pip; see: https://www.cnblogs.com/effortsing/p/9981941.html

9. Install ansible on the test1 node and configure the host inventory; the main k8s.py script uses ansible when it installs the node.

10. Have ansible run test1_hostname.py to set the test1 node's hostname and disable the firewall, selinux, and swap.

11. Have ansible run test2_hostname.py to do the same on the test2 node.

12. Test each function individually first; once every function passes, run the whole thing in one go: python k8s.py

II. Script contents:

1. k8s.py

[root@test1 script]# cat k8s.py
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from __future__ import print_function
import os, sys, stat
import shutil
import tarfile
import subprocess

def environment_format():
    print("Configuring the environment")
    subprocess.call(["iptables -P FORWARD ACCEPT"], shell=True)
    if not os.path.isdir('/k8s/profile'):
        os.makedirs('/k8s/profile')
    shutil.copy('/k8s/profile/k8s.conf', '/etc/sysctl.d/k8s.conf')
    subprocess.call(["sysctl --system"], shell=True)
    subprocess.call(["modprobe ip_vs"], shell=True)
    subprocess.call(["modprobe ip_vs_rr"], shell=True)
    subprocess.call(["modprobe ip_vs_wrr"], shell=True)
    subprocess.call(["modprobe ip_vs_sh"], shell=True)
    subprocess.call(["modprobe nf_conntrack_ipv4"], shell=True)
    subprocess.call(["lsmod | grep ip_vs"], shell=True)

def etcd_install():
    print("Installing etcd")
    if not os.path.isdir('/server/software/k8s/'):
        os.makedirs('/server/software/k8s/')
    os.chdir('/server/software/k8s/')
    shutil.move('/server/software/k8s/cfssl-certinfo_linux-amd64', '/usr/local/bin/cfssl-certinfo')
    shutil.move('/server/software/k8s/cfssl_linux-amd64', '/usr/local/bin/cfssl')
    shutil.move('/server/software/k8s/cfssljson_linux-amd64', '/usr/local/bin/cfssljson')
    os.chdir('/usr/local/bin/')
    # 0o755 instead of stat.S_IXOTH: S_IXOTH alone would leave the tools unreadable and non-executable for the owner
    os.chmod("cfssl-certinfo", 0o755)
    os.chmod("cfssl", 0o755)
    os.chmod("cfssljson", 0o755)
    subprocess.call(["useradd etcd"], shell=True)
    if not os.path.isdir('/opt/k8s/bin/'):
        os.makedirs('/opt/k8s/bin/')
    os.chdir('/server/software/k8s/')
    shutil.unpack_archive('etcd-v3.2.18-linux-amd64.tar.gz')
    subprocess.call(["mv etcd-v3.2.18-linux-amd64/etcd* /opt/k8s/bin"], shell=True)
    subprocess.call(["chmod +x /opt/k8s/bin/*"], shell=True)
    subprocess.call(["ln -s /opt/k8s/bin/etcd /usr/bin/etcd"], shell=True)
    subprocess.call(["ln -s /opt/k8s/bin/etcdctl /usr/bin/etcdctl"], shell=True)
    subprocess.call(["etcd --version"], shell=True)
    if not os.path.isdir('/root/ssl/'):   # was '/oot/ssl/' -- typo
        os.makedirs('/root/ssl/')
    os.chdir('/root/ssl/')
    subprocess.call(["cfssl gencert -initca ca-csr.json | cfssljson -bare ca"], shell=True)
    if not os.path.isdir('/etc/kubernetes/cert/'):
        os.makedirs('/etc/kubernetes/cert/')
    shutil.copy('ca.pem', '/etc/kubernetes/cert/')
    shutil.copy('ca-key.pem', '/etc/kubernetes/cert/')
    # certificates need to be readable, not executable; keep the private key owner-only
    os.chmod("ca.pem", 0o644)
    os.chmod("ca-key.pem", 0o600)
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd"], shell=True)
    if not os.path.isdir('/etc/etcd/cert/'):
        os.makedirs('/etc/etcd/cert/')
    shutil.copy('etcd.pem', '/etc/etcd/cert/')
    shutil.copy('etcd-key.pem', '/etc/etcd/cert/')
    os.chmod("etcd.pem", 0o644)
    os.chmod("etcd-key.pem", 0o600)
    print("Appending environment variables; run this only once -- duplicate entries in /etc/profile make etcd report unhealthy and have to be removed by hand")
    ms = open("/k8s/profile/profile")
    for line in ms.readlines():
        with open('/etc/profile', 'a+') as mon:
            mon.write(line)
    ms.close()
    # note: 'source' only affects the child shell, not this process's environment
    subprocess.call(["source /etc/profile"], shell=True)
    subprocess.call(["mkdir -p /data/etcd"], shell=True)
    os.chdir('/etc/systemd/system/')
    if os.path.exists('etcd.service'):
        os.remove('etcd.service')
    ms = open("/k8s/profile/etcd.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/etcd.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl start etcd"], shell=True)
    subprocess.call(["systemctl enable etcd"], shell=True)
    subprocess.call(["etcdctl --ca-file /etc/kubernetes/cert/ca.pem --cert-file /etc/etcd/cert/etcd.pem --key-file /etc/etcd/cert/etcd-key.pem cluster-health"], shell=True)

def distribute_binary():
    print("Distributing all binaries")
    os.chdir('/server/software/k8s/')
    shutil.unpack_archive('kubernetes-server-linux-amd64.tar.gz')
    if not os.path.isdir('/usr/local/kubernetes/bin'):
        os.makedirs('/usr/local/kubernetes/bin')
    os.chdir('/server/software/k8s/kubernetes/server/bin')
    subprocess.call(["cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/kubernetes/bin"], shell=True)
    shutil.copy('kubectl', '/usr/local/bin/')
    subprocess.call(["kubectl version"], shell=True)

def generate_certificate():
    print("Generating the component certificates")
    os.chdir('/root/ssl/')
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin"], shell=True)
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver"], shell=True)
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager"], shell=True)
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler"], shell=True)
    subprocess.call(["cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy"], shell=True)
    if not os.path.isdir('/etc/kubernetes/pki'):
        os.makedirs('/etc/kubernetes/pki')
    if not os.path.isdir('/etc/kubernetes/pki/etcd/'):
        os.makedirs('/etc/kubernetes/pki/etcd/')
    subprocess.call(["cp ca*.pem admin*.pem kube-proxy*.pem kube-scheduler*.pem kube-controller-manager*.pem kube-apiserver*.pem /etc/kubernetes/pki"], shell=True)

def create_kubeconfig():
    print("Generating the token")
    # generate the token value
    output = subprocess.check_output(["head -c 16 /dev/urandom | od -An -t x | tr -d ' '"], shell=True)
    token = str(output.decode('utf8').strip()).strip('b')
    # replace the TOKEN placeholder in the token.py template with the real token
    os.chdir('/etc/kubernetes/')
    if os.path.exists('token.csv'):
        os.remove('token.csv')
    f = open('/k8s/profile/token.py', 'r', encoding='utf-8')
    f_new = open('/etc/kubernetes/token.csv', 'w', encoding='utf-8')
    for line in f:
        if "TOKEN" in line:
            line = line.replace('TOKEN', token)
        f_new.write(line)
    f.close()
    f_new.close()
    os.chdir('/etc/kubernetes/')
    ms = open("/k8s/profile/apiserver.address")
    for line in ms.readlines():
        with open('/etc/profile', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["source /etc/profile"], shell=True)
    print("Generating the kubelet-bootstrap.py template")
    subprocess.call(["kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.91:6443 --kubeconfig=kubelet-bootstrap.py"], shell=True)
    # TOKEN is a placeholder here; it is substituted with the real token just below
    subprocess.call(["kubectl config set-credentials kubelet-bootstrap --token=TOKEN --kubeconfig=kubelet-bootstrap.py"], shell=True)
    subprocess.call(["kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.py"], shell=True)
    # replace the TOKEN placeholder in kubelet-bootstrap.py with the real token
    f = open('/etc/kubernetes/kubelet-bootstrap.py', 'r', encoding='utf-8')
    f_new = open('/etc/kubernetes/kubelet-bootstrap.conf', 'w', encoding='utf-8')
    for line in f:
        if "TOKEN" in line:
            line = line.replace('TOKEN', token)
        f_new.write(line)
    f.close()
    f_new.close()
    subprocess.call(["kubectl config use-context default --kubeconfig=kubelet-bootstrap.conf"], shell=True)
    subprocess.call(["kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.91:6443 --kubeconfig=admin.conf"], shell=True)
    subprocess.call(["kubectl config set-credentials admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=admin.conf"], shell=True)
    subprocess.call(["kubectl config set-context default --cluster=kubernetes --user=admin --kubeconfig=admin.conf"], shell=True)
    subprocess.call(["kubectl config use-context default --kubeconfig=admin.conf"], shell=True)
    subprocess.call(["kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.91:6443 --kubeconfig=kube-controller-manager.conf"], shell=True)
    subprocess.call(["kubectl config set-credentials kube-controller-manager --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.conf"], shell=True)
    subprocess.call(["kubectl config set-context default --cluster=kubernetes --user=kube-controller-manager --kubeconfig=kube-controller-manager.conf"], shell=True)
    subprocess.call(["kubectl config use-context default --kubeconfig=kube-controller-manager.conf"], shell=True)
    subprocess.call(["kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.91:6443 --kubeconfig=kube-scheduler.conf"], shell=True)
    subprocess.call(["kubectl config set-credentials kube-scheduler --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.conf"], shell=True)
    subprocess.call(["kubectl config set-context default --cluster=kubernetes --user=kube-scheduler --kubeconfig=kube-scheduler.conf"], shell=True)
    subprocess.call(["kubectl config use-context default --kubeconfig=kube-scheduler.conf"], shell=True)
    subprocess.call(["kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.91:6443 --kubeconfig=kube-proxy.conf"], shell=True)
    subprocess.call(["kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/pki/kube-proxy.pem --client-key=/etc/kubernetes/pki/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.conf"], shell=True)
    subprocess.call(["kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.conf"], shell=True)
    subprocess.call(["kubectl config use-context default --kubeconfig=kube-proxy.conf"], shell=True)

def configuration_startup_apiserver():
    print("Configuring and starting kube-apiserver")
    os.chdir('/root/ssl/')
    subprocess.call(["cp etcd.pem ca-key.pem ca.pem /etc/kubernetes/pki/etcd"], shell=True)
    os.chdir('/etc/kubernetes/pki/')
    subprocess.call(["openssl genrsa -out /etc/kubernetes/pki/sa.key 2048"], shell=True)
    subprocess.call(["openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub"], shell=True)
    subprocess.call(["ls /etc/kubernetes/pki/sa.*"], shell=True)
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kube-apiserver.service'):
        os.remove('kube-apiserver.service')
    ms = open("/k8s/profile/kube-apiserver.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kube-apiserver.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('config'):
        os.remove('config')
    ms = open("/k8s/profile/config")
    for line in ms.readlines():
        with open('/etc/kubernetes/config', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('apiserver'):
        os.remove('apiserver')
    ms = open("/k8s/profile/apiserver")
    for line in ms.readlines():
        with open('/etc/kubernetes/apiserver', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl start kube-apiserver"], shell=True)
    subprocess.call(["systemctl enable kube-apiserver"], shell=True)
    subprocess.call(["systemctl status kube-apiserver"], shell=True)

def configuration_startup_controller_manager():
    print("Configuring and starting kube-controller-manager")
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kube-controller-manager.service'):
        os.remove('kube-controller-manager.service')
    ms = open("/k8s/profile/kube-controller-manager.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kube-controller-manager.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    ms = open("/k8s/profile/controller-manager")
    for line in ms.readlines():
        with open('/etc/kubernetes/controller-manager', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl start kube-controller-manager"], shell=True)
    subprocess.call(["systemctl enable kube-controller-manager"], shell=True)
    subprocess.call(["systemctl status kube-controller-manager"], shell=True)

def configuration_startup_scheduler():
    print("Configuring and starting kube-scheduler")
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kube-scheduler.service'):
        os.remove('kube-scheduler.service')
    ms = open("/k8s/profile/kube-scheduler.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kube-scheduler.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    ms = open("/k8s/profile/scheduler")
    for line in ms.readlines():
        with open('/etc/kubernetes/scheduler', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl start kube-scheduler"], shell=True)
    subprocess.call(["systemctl enable kube-scheduler"], shell=True)
    subprocess.call(["systemctl status kube-scheduler"], shell=True)
    # authorize the kubelet-bootstrap user
    subprocess.call(["kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap"], shell=True)
    # check component status
    subprocess.call(["kubectl get componentstatuses"], shell=True)

def copyfile_to_test2():
    print("Copying the required files to the test2 node")
    subprocess.call(["scp /script/test2.py root@192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /server/software/k8s/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /server/software/k8s/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/k8s.conf 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /server/software/k8s/kubernetes/server/bin/kubelet 192.168.0.92:/root/"], shell=True)
    subprocess.call(["scp /server/software/k8s/kubernetes/server/bin/kubectl 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /etc/kubernetes/admin.conf 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /etc/kubernetes/kubelet-bootstrap.conf 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /server/software/k8s/cni-plugins-amd64-v0.7.1.tgz 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/kubelet.service 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/config 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/kubelet 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp $HOME/ssl/ca.pem 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/test2-kubelet-config.yml 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /server/software/k8s/kubernetes/server/bin/kube-proxy 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /etc/kubernetes/kube-proxy.conf 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/kube-proxy.service 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/test2-proxy 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/kube-flannel.yml 192.168.0.92:/home/"], shell=True)
    subprocess.call(["scp /k8s/profile/coredns.yaml 192.168.0.92:/home/"], shell=True)

# the test2 node is installed separately with its own script, which has to be copied over and executed there
def install_test2():
    print("Running the test2.py script to install the test2 node")
    # run the script via ansible
    subprocess.call(["time ansible test2 -m shell -a 'chdir=/home python test2.py'"], shell=True)

def test1_join_cluster():
    print("Joining the test1 node to the cluster")
    # disable selinux
    subprocess.call(["sed -i 's/enforcing/disabled/g' /etc/selinux/config"], shell=True)
    subprocess.call(["sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux"], shell=True)
    # disable swap, otherwise kubelet dies right after the CSR is approved
    subprocess.call(["sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' /etc/fstab"], shell=True)
    subprocess.call(["swapoff -a"], shell=True)
    # install docker
    os.chdir('/server/software/k8s')
    subprocess.call(["yum install -y docker-ce-*.rpm"], shell=True)
    subprocess.call(["systemctl start docker"], shell=True)
    subprocess.call(["systemctl enable docker"], shell=True)
    if not os.path.isdir('/usr/local/kubernetes/bin'):
        os.makedirs('/usr/local/kubernetes/bin')
    shutil.copy('/server/software/k8s/kubernetes/server/bin/kubelet', '/usr/local/kubernetes/bin/')
    subprocess.call(["rm -rf $HOME/.kube"], shell=True)
    subprocess.call(["mkdir -p $HOME/.kube"], shell=True)
    subprocess.call(["cp /etc/kubernetes/admin.conf $HOME/.kube/config"], shell=True)
    subprocess.call(["chown $(id -u):$(id -g) $HOME/.kube/config"], shell=True)

def install_kubelet_and_cni():
    print("Installing the CNI plugins and kubelet on the test1 node")
    # install cni
    subprocess.call(["mkdir -p /opt/cni/bin/"], shell=True)
    subprocess.call(["mkdir -p /etc/cni/net.d/"], shell=True)
    shutil.unpack_archive('/server/software/k8s/cni-plugins-amd64-v0.7.1.tgz', '/opt/cni/bin/')
    # install kubelet
    if not os.path.isdir('/data/kubelet'):
        os.makedirs('/data/kubelet')
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kubelet.service'):
        os.remove('kubelet.service')
    ms = open("/k8s/profile/kubelet.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kubelet.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('kubelet'):
        os.remove('kubelet')
    ms = open("/k8s/profile/kubelet")
    for line in ms.readlines():
        with open('/etc/kubernetes/kubelet', 'a+') as mon:
            mon.write(line)
    ms.close()
    ms = open("/k8s/profile/test1-kubelet-config.yml")
    for line in ms.readlines():
        with open('/etc/kubernetes/kubelet-config.yml', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl enable kubelet"], shell=True)
    subprocess.call(["systemctl start kubelet"], shell=True)
    subprocess.call(["systemctl status kubelet"], shell=True)

def request_via_csr():
    print("Approving the test1 node's CSR")
    output = subprocess.check_output(["kubectl get csr | grep Pending | awk '{print $1}'"], shell=True)
    name = output.decode('utf8').strip()
    subprocess.call(['kubectl', 'certificate', 'approve', name])
    # wait 30 seconds: the node only shows up a little while after the CSR is approved, otherwise the next step fails
    subprocess.call(["sleep 30"], shell=True)
    subprocess.call(["kubectl get nodes"], shell=True)
    # set the cluster role
    test1 = subprocess.check_output(["kubectl get nodes | grep test1 | awk '{print $1}'"], shell=True)
    test1 = test1.decode('utf8').strip()
    subprocess.call(['kubectl', 'label', 'nodes', test1, 'node-role.kubernetes.io/master='])
    subprocess.call(['kubectl', 'taint', 'nodes', test1, 'node-role.kubernetes.io/master=true:NoSchedule'])
    subprocess.call(["kubectl get nodes"], shell=True)

def install_kube_proxy():
    print("Installing kube-proxy on the test1 node")
    if not os.path.isdir('/usr/local/kubernetes/bin'):
        os.makedirs('/usr/local/kubernetes/bin')
    shutil.copy('/server/software/k8s/kubernetes/server/bin/kube-proxy', '/usr/local/kubernetes/bin/')
    subprocess.call(["yum install -y conntrack-tools"], shell=True)
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kube-proxy.service'):
        os.remove('kube-proxy.service')
    ms = open("/k8s/profile/kube-proxy.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kube-proxy.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    ms = open("/k8s/profile/test1-proxy")
    for line in ms.readlines():
        with open('/etc/kubernetes/proxy', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl enable kube-proxy"], shell=True)
    subprocess.call(["systemctl start kube-proxy"], shell=True)
    subprocess.call(["systemctl status kube-proxy"], shell=True)

def func_list():
    # uncomment the functions below one at a time to test them, then run them all in one pass
    #environment_format()
    #etcd_install()
    #distribute_binary()
    #generate_certificate()
    #create_kubeconfig()
    #configuration_startup_apiserver()
    #configuration_startup_controller_manager()
    #configuration_startup_scheduler()
    #copyfile_to_test2()
    #install_test2()
    #test1_join_cluster()
    #install_kubelet_and_cni()
    #request_via_csr()
    #install_kube_proxy()
    pass  # keeps the function body valid while every call is commented out

def main():
    func_list()

if __name__ == '__main__':
    main()

2. test2.py

[root@test2 home]# cat test2.py
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from __future__ import print_function
import os, sys, stat
import shutil
import tarfile
import subprocess

def environment_format():
    print("Configuring the environment on the test2 node")
    # disable selinux
    subprocess.call(["sed -i 's/enforcing/disabled/g' /etc/selinux/config"], shell=True)
    subprocess.call(["sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux"], shell=True)
    # disable swap, otherwise kubelet dies right after the CSR is approved
    subprocess.call(["sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' /etc/fstab"], shell=True)
    subprocess.call(["swapoff -a"], shell=True)
    subprocess.call(["iptables -P FORWARD ACCEPT"], shell=True)
    os.chdir('/etc/sysctl.d/')
    if os.path.exists('k8s.conf'):
        os.remove('k8s.conf')
    shutil.copy('/home/k8s.conf', '/etc/sysctl.d/k8s.conf')
    subprocess.call(["sysctl --system"], shell=True)
    subprocess.call(["modprobe ip_vs"], shell=True)
    subprocess.call(["modprobe ip_vs_rr"], shell=True)
    subprocess.call(["modprobe ip_vs_wrr"], shell=True)
    subprocess.call(["modprobe ip_vs_sh"], shell=True)
    subprocess.call(["modprobe nf_conntrack_ipv4"], shell=True)
    subprocess.call(["lsmod | grep ip_vs"], shell=True)

def install_docker():
    print("Installing docker on the test2 node")
    subprocess.call(["yum remove -y docker-ce docker-ce-selinux container-selinux"], shell=True)
    os.chdir('/home')
    subprocess.call(["yum install -y docker-ce-*.rpm"], shell=True)
    subprocess.call(["systemctl start docker"], shell=True)
    subprocess.call(["systemctl enable docker"], shell=True)

def install_kubectl():
    print("Installing the kubectl tool on the test2 node")
    subprocess.call(["mkdir -p /usr/local/kubernetes/bin/"], shell=True)
    shutil.copy('/root/kubelet', '/usr/local/kubernetes/bin/')
    shutil.copy('/home/kubectl', '/usr/local/bin/')
    subprocess.call(["mkdir -p /etc/kubernetes/"], shell=True)
    shutil.copy('/home/admin.conf', '/etc/kubernetes/')
    subprocess.call(["rm -rf $HOME/.kube"], shell=True)
    subprocess.call(["mkdir -p $HOME/.kube"], shell=True)
    subprocess.call(["cp /etc/kubernetes/admin.conf $HOME/.kube/config"], shell=True)
    subprocess.call(["chown $(id -u):$(id -g) $HOME/.kube/config"], shell=True)
    shutil.copy('/home/kubelet-bootstrap.conf', '/etc/kubernetes/')

def install_cni():
    print("Installing the CNI plugins on the test2 node")
    subprocess.call(["mkdir -p /opt/cni/bin/"], shell=True)
    subprocess.call(["mkdir -p /etc/cni/net.d/"], shell=True)
    shutil.unpack_archive('/home/cni-plugins-amd64-v0.7.1.tgz', '/opt/cni/bin/')

def configuration_startup_kubelet():
    print("Installing the kubelet component on the test2 node")
    subprocess.call(["mkdir -p /data/kubelet/"], shell=True)
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kubelet.service'):
        os.remove('kubelet.service')
    ms = open("/home/kubelet.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kubelet.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('config'):
        os.remove('config')
    ms = open("/home/config")
    for line in ms.readlines():
        with open('/etc/kubernetes/config', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('kubelet'):
        os.remove('kubelet')
    ms = open("/home/kubelet")
    for line in ms.readlines():
        with open('/etc/kubernetes/kubelet', 'a+') as mon:
            mon.write(line)
    ms.close()
    if not os.path.isdir('/etc/kubernetes/pki/'):
        os.makedirs('/etc/kubernetes/pki/')
    shutil.copy('/home/ca.pem', '/etc/kubernetes/pki/')
    os.chdir('/etc/kubernetes/')
    if os.path.exists('kubelet-config.yml'):
        os.remove('kubelet-config.yml')
    ms = open("/home/test2-kubelet-config.yml")
    for line in ms.readlines():
        with open('/etc/kubernetes/kubelet-config.yml', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl enable kubelet"], shell=True)
    subprocess.call(["systemctl start kubelet"], shell=True)
    subprocess.call(["systemctl status kubelet"], shell=True)
    os.listdir('/etc/kubernetes/')

def request_via_csr():
    print("Approving the test2 node's CSR")
    output = subprocess.check_output(["kubectl get csr | grep csr | awk '{print $1}'"], shell=True)
    name = output.decode('utf8').strip()
    subprocess.call(['kubectl', 'certificate', 'approve', name])
    # wait 30 seconds: the node only shows up a little while after the CSR is approved, otherwise the next step fails
    subprocess.call(["sleep 30"], shell=True)
    subprocess.call(["kubectl get nodes"], shell=True)
    # set the cluster role
    test2 = subprocess.check_output(["kubectl get nodes | grep test2 | awk '{print $1}'"], shell=True)
    test2 = test2.decode('utf8').strip()
    subprocess.call(['kubectl', 'label', 'nodes', test2, 'node-role.kubernetes.io/node='])

def install_kube_proxy():
    print("Installing kube-proxy on the test2 node")
    shutil.copy('/home/kube-proxy', '/usr/local/kubernetes/bin/')
    shutil.copy('/home/kube-proxy.conf', '/etc/kubernetes/')
    subprocess.call(["yum install -y conntrack-tools"], shell=True)
    os.chdir('/etc/systemd/system/')
    if os.path.exists('kube-proxy.service'):
        os.remove('kube-proxy.service')
    ms = open("/home/kube-proxy.service")
    for line in ms.readlines():
        with open('/etc/systemd/system/kube-proxy.service', 'a+') as mon:
            mon.write(line)
    ms.close()
    os.chdir('/etc/kubernetes/')
    if os.path.exists('proxy'):
        os.remove('proxy')
    ms = open("/home/test2-proxy")
    for line in ms.readlines():
        with open('/etc/kubernetes/proxy', 'a+') as mon:
            mon.write(line)
    ms.close()
    subprocess.call(["systemctl daemon-reload"], shell=True)
    subprocess.call(["systemctl enable kube-proxy"], shell=True)
    subprocess.call(["systemctl start kube-proxy"], shell=True)
    subprocess.call(["systemctl status kube-proxy"], shell=True)

def install_flannel():
    print("Installing flannel on the test2 node")
    subprocess.call(["kubectl apply -f /home/kube-flannel.yml"], shell=True)
    subprocess.call(["kubectl get pod -n kube-system"], shell=True)
    subprocess.call(["sleep 10"], shell=True)
    subprocess.call(["kubectl get nodes"], shell=True)

def install_coredns():
    print("Installing coredns on the test2 node")
    subprocess.call(["yum install jq -y"], shell=True)
    subprocess.call(["kubectl apply -f /home/coredns.yaml"], shell=True)
    subprocess.call(["sleep 10"], shell=True)
    subprocess.call(["kubectl get pod -n kube-system"], shell=True)

def func_list():
    environment_format()
    install_docker()
    install_kubectl()
    install_cni()
    configuration_startup_kubelet()
    request_via_csr()
    install_kube_proxy()
    install_flannel()
    install_coredns()

def main():
    func_list()

if __name__ == '__main__':
    main()

3. test1_hostname.py

cat >test1_hostname.py <<EOF
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from __future__ import print_function
import os
import shutil
import tarfile
import subprocess

def hostname_format():
    subprocess.call(["hostnamectl set-hostname test1"], shell=True)
    # configure hosts resolution
    ms = open("/k8s/profile/hosts")
    for line in ms.readlines():
        with open('/etc/hosts', 'a+') as mon:
            mon.write(line)
    ms.close()
    # the original sed expressions here were invalid; write the hostname directly instead
    subprocess.call(["echo test1 > /etc/hostname"], shell=True)
    subprocess.call(["sed -i 's/^HOSTNAME=.*/HOSTNAME=test1/' /etc/sysconfig/network"], shell=True)
    subprocess.call(["sed -i 's/enforcing/disabled/g' /etc/selinux/config"], shell=True)
    subprocess.call(["sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux"], shell=True)
    subprocess.call(["sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' /etc/fstab"], shell=True)
    subprocess.call(["systemctl stop firewalld && systemctl disable firewalld"], shell=True)
    subprocess.call(["reboot"], shell=True)

def func_list():
    hostname_format()

def main():
    func_list()

if __name__ == '__main__':
    main()
EOF

4. test2_hostname.py

cat >test2_hostname.py <<EOF
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from __future__ import print_function
import os
import shutil
import tarfile
import subprocess

def hostname_format():
    # was "test1" in the original -- this script runs on the test2 node
    subprocess.call(["hostnamectl set-hostname test2"], shell=True)
    # configure hosts resolution
    ms = open("/k8s/profile/hosts")
    for line in ms.readlines():
        with open('/etc/hosts', 'a+') as mon:
            mon.write(line)
    ms.close()
    # the original sed expressions here were invalid; write the hostname directly instead
    subprocess.call(["echo test2 > /etc/hostname"], shell=True)
    subprocess.call(["sed -i 's/^HOSTNAME=.*/HOSTNAME=test2/' /etc/sysconfig/network"], shell=True)
    subprocess.call(["sed -i 's/enforcing/disabled/g' /etc/selinux/config"], shell=True)
    subprocess.call(["sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux"], shell=True)
    subprocess.call(["sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' /etc/fstab"], shell=True)
    subprocess.call(["systemctl stop firewalld && systemctl disable firewalld"], shell=True)
    subprocess.call(["reboot"], shell=True)

def func_list():
    hostname_format()

def main():
    func_list()

if __name__ == '__main__':
    main()
EOF
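All four scripts append template files to system configs line by line, and the note in etcd_install warns that appending /etc/profile a second time leaves duplicate entries that make etcd report unhealthy. A small helper along these lines would make those appends safe to re-run. This is an illustrative sketch, not part of the original scripts; the `append_once` name is made up:

```python
# Illustrative helper (not in the original scripts): append a template file to a
# target file only if its content is not already there, so re-running a step
# does not duplicate entries in files like /etc/profile.
def append_once(template_path, target_path):
    with open(template_path) as src:
        content = src.read()
    try:
        with open(target_path) as dst:
            if content in dst.read():
                return False          # already appended on a previous run
    except IOError:                   # target file does not exist yet
        pass
    with open(target_path, 'a') as dst:
        dst.write(content)
    return True
```

etcd_install could then call `append_once('/k8s/profile/profile', '/etc/profile')` any number of times without producing the duplicate-variable problem noted above.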
