Installing Kubernetes: an HA 3-master install script
For each new cluster, remember to update the IP addresses of the three nodes (HOST_1, HOST_2, HOST_3 in the script below).
Step 1: run the root preparation script on every node to set up the OS environment and install the required software.
Step 2: on each node, install etcd running in Docker: sh script domain etcd
Step 3: on each node, install the master: sh script domain master
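The steps above can be sketched as the following invocation order. The domain is an illustrative placeholder for the load-balanced API endpoint fronting the three masters; the script name comes from the script's own error message.

```shell
# Illustrative invocation order for the HA install.
# k8s.example.com is a placeholder; substitute the real API domain.
DOMAIN=k8s.example.com
SCRIPT=k8s_ha_master.sh

# Step 1 (as root, on every node): prepare the OS and install the packages.
# Steps 2 and 3 (as the docker user, on every node), in order:
for role in etcd master; do
  echo "sh $SCRIPT $DOMAIN $role"
done
```

Run the etcd step on all three nodes before starting any master step, since kubeadm points the apiserver at the external etcd cluster.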
#!/bin/bash
# Version V0.09 2019-05-10-10:32

if [ `whoami` != "xxx" ];then echo "[error] You need to switch to docker user to execute this command" ; exit 1 ;fi

Domain_name=$1
Node_type=$2
K8S_VER=1.14.1

dir_path=$(cd `dirname $0`;cd ../;pwd)
cmd_path=$dir_path/cmd
cert_path=$dir_path/cert
rpm_path=$dir_path/rpm
software_path=$dir_path/software
yaml_path=$dir_path/yaml

# Must be changed for every new cluster
HOST_1=1.1.1.1
HOST_2=1.1.1.2
HOST_3=1.1.1.3

# Constants
THIS_HOST=$(hostname -i)
LOCAL_HOST=$(hostname)
LOCAL_HOST_L=${LOCAL_HOST,,}
pki_dir=/etc/kubernetes/pki
K8S_API_PORT=6443
K8S_JOIN_TOKEN=xxxxxx.xxxxxxxxxxxxxxxx
General_user=xxx
REGISTRY=harbor.xxx.cn/3rd_part/k8s.gcr.io
ETCD_VERSION=3.3.10
ETCD_CLI_PORT=2379
ETCD_CLU_PORT=2380
TOKEN=xxx-k8s-etcd-token
CLUSTER_STATE=new
CLUSTER=${HOST_1}=http://${HOST_1}:${ETCD_CLU_PORT},${HOST_2}=http://${HOST_2}:${ETCD_CLU_PORT},${HOST_3}=http://${HOST_3}:${ETCD_CLU_PORT}
etcd_data_dir=$HOME/etcd/etcd-data
cs=$software_path/cfssl
csj=$software_path/cfssljson

# Check whether this host's IP belongs to the cluster
function ip_in_cluster() {
    if [[ ${THIS_HOST} != ${HOST_1} && ${THIS_HOST} != ${HOST_2} && ${THIS_HOST} != ${HOST_3} ]]; then
        echo "Ip not in the k8s cluster host. please modify the HOST_1, HOST_2, HOST_3 at k8s_ha_master.sh file."
        exit 110
    fi
}

function if_file_exist_del() {
    if [ -e $1 ]; then
        rm -f $1
    fi
}

function kubeadmConf() {
    kubeadm_conf=kubeadm-config.yaml
    if_file_exist_del $kubeadm_conf
    cat << EOF >$kubeadm_conf
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- token: ${K8S_JOIN_TOKEN}
  ttl: 24h
  usages:
  - signing
  - authentication
  groups:
  - system:bootstrappers:kubeadm:default-node-token
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: ${REGISTRY}
kubernetesVersion: ${K8S_VER}
controlPlaneEndpoint: ${Domain_name}:${K8S_API_PORT}
etcd:
  external:
    endpoints:
    - https://${HOST_1}:${ETCD_CLI_PORT}
    - https://${HOST_2}:${ETCD_CLI_PORT}
    - https://${HOST_3}:${ETCD_CLI_PORT}
    caFile: ${pki_dir}/etcd/ca.crt
    certFile: ${pki_dir}/apiserver-etcd-client.crt
    keyFile: ${pki_dir}/apiserver-etcd-client.key
apiServer:
  extraArgs:
    service-node-port-range: 30000-50000
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
}

cert_ha_init() {
    mkdir -p k8s_cert_tmp
    cp $cert_path/* ./k8s_cert_tmp
    chmod +x $cs
    chmod +x $csj
    cd k8s_cert_tmp
    sed -i "s/LOCAL_HOST_L/${LOCAL_HOST_L}/g;s/HOST_1/${HOST_1}/g;s/HOST_2/${HOST_2}/g;s/HOST_3/${HOST_3}/g;s/Domain_name/${Domain_name}/g" ha-etcd-server.json
    sed -i "s/LOCAL_HOST_L/${LOCAL_HOST_L}/g;s/HOST_1/${HOST_1}/g;s/HOST_2/${HOST_2}/g;s/HOST_3/${HOST_3}/g;s/Domain_name/${Domain_name}/g" ha-etcd-peer.json
    sed -i "s/LOCAL_HOST_L/${LOCAL_HOST_L}/g;s/HOST_1/${HOST_1}/g;s/HOST_2/${HOST_2}/g;s/HOST_3/${HOST_3}/g;s/Domain_name/${Domain_name}/g" ha-apiserver.json
    $cs gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server ha-etcd-server.json|$csj -bare server
    $cs gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client etcd-client.json|$csj -bare client
    $cs gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=peer ha-etcd-peer.json|$csj -bare peer
    $cs gencert -ca=front-proxy-ca.crt -ca-key=front-proxy-ca.key -config=ca-config.json -profile=client front-proxy-client.json|$csj -bare front-proxy-client
    $cs gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server ha-apiserver.json|$csj -bare apiserver
    $cs gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client apiserver-kubelet-client.json|$csj -bare apiserver-kubelet-client
    mkdir -p $pki_dir/etcd
    cp server.pem $pki_dir/etcd/server.crt && cp server-key.pem $pki_dir/etcd/server.key
    cp client.pem $pki_dir/etcd/healthcheck-client.crt && cp client-key.pem $pki_dir/etcd/healthcheck-client.key
    cp client.pem $pki_dir/apiserver-etcd-client.crt && cp client-key.pem $pki_dir/apiserver-etcd-client.key
    cp peer.pem $pki_dir/etcd/peer.crt && cp peer-key.pem $pki_dir/etcd/peer.key
    cp ca.crt $pki_dir/etcd/ca.crt && cp ca.key $pki_dir/etcd/ca.key
    cp front-proxy-ca.crt $pki_dir/front-proxy-ca.crt && cp front-proxy-ca.key $pki_dir/front-proxy-ca.key
    cp front-proxy-client.pem $pki_dir/front-proxy-client.crt && cp front-proxy-client-key.pem $pki_dir/front-proxy-client.key
    cp ca.crt $pki_dir/ca.crt && cp ca.key $pki_dir/ca.key
    cp apiserver.pem $pki_dir/apiserver.crt && cp apiserver-key.pem $pki_dir/apiserver.key
    cp apiserver-kubelet-client.pem $pki_dir/apiserver-kubelet-client.crt && cp apiserver-kubelet-client-key.pem $pki_dir/apiserver-kubelet-client.key
    cp sa.pub $pki_dir/sa.pub && cp sa.key $pki_dir/sa.key
    cd ../
    rm -rf k8s_cert_tmp
}

function etcd_install() {
    # Clear any data left over from a previous run
    set +e
    sudo docker stop etcd && sudo docker rm etcd
    rm -rf ${etcd_data_dir}/*
    sudo systemctl restart docker
    set -e
    # Run etcd in docker
    docker run \
      -d \
      -p ${ETCD_CLI_PORT}:${ETCD_CLI_PORT} \
      -p ${ETCD_CLU_PORT}:${ETCD_CLU_PORT} \
      --volume=${etcd_data_dir}:${etcd_data_dir} \
      --volume=${pki_dir}:${pki_dir} \
      --name etcd ${REGISTRY}/etcd:${ETCD_VERSION} \
      /usr/local/bin/etcd \
      --data-dir=${etcd_data_dir} --name ${THIS_HOST} \
      --initial-advertise-peer-urls http://${THIS_HOST}:${ETCD_CLU_PORT} \
      --listen-peer-urls http://0.0.0.0:${ETCD_CLU_PORT} \
      --advertise-client-urls https://${THIS_HOST}:${ETCD_CLI_PORT} \
      --listen-client-urls https://0.0.0.0:${ETCD_CLI_PORT} \
      --initial-cluster ${CLUSTER} \
      --initial-cluster-state ${CLUSTER_STATE} \
      --initial-cluster-token ${TOKEN} \
      --cert-file=${pki_dir}/etcd/server.crt \
      --key-file=${pki_dir}/etcd/server.key \
      --trusted-ca-file=${pki_dir}/etcd/ca.crt
    echo "================================="
    echo "etcd start success"
}

function etcd_reset() {
    set +e
    docker stop etcd
    rm -rf ${etcd_data_dir}/*
    docker rm etcd
    set -e
}

function master_install(){
    sudo /usr/local/bin/kubeadm init --config $kubeadm_conf
    sudo chown -R docker /etc/kubernetes/
    mkdir -p $HOME/.kube
    \cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    General_user_HOME=`cat /etc/passwd |grep -e ^${General_user} |awk -F: '{print $6}'`
    mkdir -p ${General_user_HOME}/.kube
    \cp -f /etc/kubernetes/admin.conf ${General_user_HOME}/.kube/config
    chown -R $(id -u ${General_user}):$(id -g ${General_user}) ${General_user_HOME}/.kube
    kubectl apply -f $yaml_path/secret
    kubectl apply -f $yaml_path/auto_cert_server
    kubectl apply -f $yaml_path/flannel
}

function node_join(){
    system_init   # expected to be provided by the environment-prep step; not defined in this script
    kubeadm join ${Domain_name}:${K8S_API_PORT} --token ${K8S_JOIN_TOKEN} --discovery-token-unsafe-skip-ca-verification
    echo "================================="
    echo "node join success"
}

case ${Node_type} in
    "etcd")
        ip_in_cluster
        cert_ha_init
        etcd_install
        ;;
    "cert")
        cert_ha_init
        ;;
    "etcd_install")
        etcd_install
        ;;
    "master")
        ip_in_cluster
        kubeadmConf
        master_install
        ;;
    "node")
        node_join
        ;;
    *)
        echo "usage `basename $0` [Domain] [etcd|master|node]"
        ;;
esac
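The ip_in_cluster guard in the script compares the output of hostname -i against the three configured addresses and aborts with exit code 110 on a mismatch. A self-contained sketch of that check, using the placeholder IPs from the script, looks like this:

```shell
#!/usr/bin/env bash
# Standalone sketch of the script's ip_in_cluster guard.
# HOST_1..HOST_3 are the placeholder cluster IPs from the script.
HOST_1=1.1.1.1
HOST_2=1.1.1.2
HOST_3=1.1.1.3

ip_in_cluster() {
  local this_host=$1    # the real script uses $(hostname -i)
  case "$this_host" in
    "$HOST_1"|"$HOST_2"|"$HOST_3") return 0 ;;
    *) return 1 ;;      # the real script prints an error and exits 110 here
  esac
}

ip_in_cluster 1.1.1.2 && echo "1.1.1.2 is a cluster member"
ip_in_cluster 9.9.9.9 || echo "9.9.9.9 is not a cluster member"
```

Running this guard before the etcd and master branches keeps a copy-pasted script from joining a machine whose HOST_ variables were never updated for the new cluster.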