For reference, another user's write-up on building a Kubernetes HA cluster (1.17.3) on Alibaba Cloud:

https://www.cnblogs.com/gmmy/p/12372805.html

Prepare four CentOS 7 virtual machines for the k8s cluster.

master01 (192.168.1.202): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master02 (192.168.1.203): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master03 (192.168.1.204): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

node01 (192.168.1.205): 2 CPU cores, 1 GB RAM, 60 GB disk, bridged network

Base components to install on all master and node nodes

#This walkthrough is not one-shot: read it line by line and run each step on the server it names
#Most of it can be pasted straight into a shell; a little basic Linux knowledge is needed
#Adjust the hostnames below (master01, master02, master03, node01) to your own environment
#On master01
hostnamectl set-hostname master01
#On master02
hostnamectl set-hostname master02
#On master03
hostnamectl set-hostname master03
#On node01
hostnamectl set-hostname node01
#On master01, master02, master03 and node01, append the following lines to /etc/hosts:
cat >> /etc/hosts << EOF
192.168.1.202 master01
192.168.1.203 master02
192.168.1.204 master03
192.168.1.205 node01
EOF
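One caveat with the heredoc above: it appends unconditionally, so re-running the prep steps leaves duplicate entries in /etc/hosts. A minimal idempotent variant, shown as a sketch against a temp file so it is safe to try anywhere (swap `"$HOSTS"` for /etc/hosts on the real nodes):

```shell
# Guarded append: only add the block if the marker host is not present yet.
HOSTS=$(mktemp)
add_hosts() {
  grep -q 'master01' "$HOSTS" || cat >> "$HOSTS" << EOF
192.168.1.202 master01
192.168.1.203 master02
192.168.1.204 master03
192.168.1.205 node01
EOF
}
add_hosts
add_hosts   # second run is a no-op thanks to the grep guard
wc -l < "$HOSTS"   # 4 lines, not 8
```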
#Set up keys for passwordless SSH login
#By default the relevant files live under /root/.ssh/
#Run the following on all master nodes
mkdir -p /root/.ssh/
chmod 700 /root/.ssh/
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh-keygen -t rsa #press Enter three times to accept the defaults
#then distribute the public key to the other nodes, e.g. ssh-copy-id root@master02
#Install some required packages with yum
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils
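The ssh-keygen step above prompts interactively; it can also be scripted. A sketch, wrapped in a function so it can be exercised against any directory (the helper name `setup_ssh_dir` and the ssh-copy-id target are illustrative, not part of the original walkthrough):

```shell
# Non-interactive key setup: -N '' sets an empty passphrase, -f and -q
# suppress all prompts; the guard keeps existing keys from being overwritten.
setup_ssh_dir() {
  dir="$1"
  install -d -m 700 "$dir"
  [ -f "$dir/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$dir/id_rsa" -q
  touch "$dir/authorized_keys" && chmod 600 "$dir/authorized_keys"
}
# On each master: setup_ssh_dir /root/.ssh
# then distribute the key, e.g.: ssh-copy-id root@master02
```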
#Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
#Set the timezone and enable NTP time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * root /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
#Raise the max open file descriptor limits
grep -q ulimit /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q "root soft nofile" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q "root hard nofile" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q "^\* soft nofile" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q "^\* hard nofile" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
#Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
#Switch the yum repos to the Alibaba Cloud mirror
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#Configure the yum repo needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
#Configure the docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Refresh the cache
yum clean all
yum makecache fast
#Install docker-ce 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
#Adjust the docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
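A JSON typo in daemon.json stops the docker daemon from starting, so it is worth validating the file before the restart. A sketch, assuming python3 is available (the helper name `check_json` is illustrative):

```shell
# Validate a JSON file; prints "<file>: OK" on success, fails otherwise.
check_json() {
  python3 -c 'import json,sys; json.load(open(sys.argv[1])); print(sys.argv[1] + ": OK")' "$1"
}
if [ -f /etc/docker/daemon.json ]; then check_json /etc/docker/daemon.json; fi
```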
systemctl daemon-reload && systemctl restart docker
#Enable the kernel settings k8s needs for bridged network traffic
#Make bridged packets pass through iptables and persist the settings (load br_netfilter first so the bridge sysctls exist)
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
#Enable IPVS; without it kube-proxy falls back to iptables mode, which is less efficient, so loading the IPVS kernel modules is recommended
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
#Install kubeadm, kubelet and kubectl on master01, master02, master03 and node01
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
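Before moving on, the bridge and forwarding settings written earlier can be sanity checked by reading /proc directly; a small sketch (the bridge keys only exist once br_netfilter is loaded):

```shell
# Print the current value of each setting, or flag it as missing.
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  f="/proc/sys/$key"
  if [ -r "$f" ]; then echo "$key = $(cat "$f")"; else echo "$key: missing (br_netfilter not loaded?)"; fi
done
```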

Installing and configuring the master nodes

#Deploy keepalived+LVS on master01, master02 and master03 to make the apiserver highly available
yum install -y socat keepalived ipvsadm conntrack
#Edit /etc/keepalived/keepalived.conf on each master node
#On master01, keep priority 100; adjust the master IPs and the desired virtual IP below to your environment
#The virtual IP in this walkthrough is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#On master02, do the same but change the priority to 50; again adjust the IPs and virtual IP to your environment
#The virtual IP is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 50/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#On master03, change the priority to 30; adjust the IPs and virtual IP likewise
#The virtual IP is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 30/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#If your primary NIC is not eth0 (for example ens33), point the config at the right interface:
#sed -i 's/eth0/ens33/g' /etc/keepalived/keepalived.conf
#The downloaded keepalived.conf configures keepalived in BACKUP state with nopreempt (non-preemptive mode). Suppose master01 goes down: when it comes back up, the VIP will not automatically float back to it. This keeps the cluster healthy, because right after master01 boots the apiserver and other components are not yet running; if the VIP moved back immediately, the whole cluster would go down. That is why non-preemptive mode is required.
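The downloaded keepalived.conf is expected to contain a vrrp_instance along these lines (an illustrative fragment, not the exact file contents; note the BACKUP state plus nopreempt on all three masters):

```
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface eth0
    virtual_router_id 51
    priority 100        # 50 on master02, 30 on master03
    advert_int 1
    virtual_ipaddress {
        192.168.1.199
    }
}
```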
#The priority values give the intended startup order master01 -> master02 -> master03
#On master01, master02 and master03 in turn, run:
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
#Once keepalived is up, `ip addr` on master01 should show the VIP (192.168.1.199 in this walkthrough) bound to the NIC
#Run the following on master01
cd /usr/local/src
wget -O /usr/local/src/kubeadm-config.yaml http://download.zhufunin.com/k8s_1.18/kubeadm-config.yaml
#This file drives the master initialization; the seds below fill in the node IPs, so adjust them to your environment
sed -i 's/master01/192.168.1.202/g' kubeadm-config.yaml
sed -i 's/master02/192.168.1.203/g' kubeadm-config.yaml
sed -i 's/master03/192.168.1.204/g' kubeadm-config.yaml
sed -i 's/VIP_addr/192.168.1.199/g' kubeadm-config.yaml
#Initialize master01
kubeadm init --config kubeadm-config.yaml
kubeadm config images list
#10.244.0.0/16 is the flannel network plugin's default subnet; it comes up again later
#If this errors out: a variant kubeadm-config.yaml with one extra parameter, imageRepository: registry.aliyuncs.com/google_containers, pulls everything from the Alibaba Cloud mirror, which is directly reachable and simpler. Just know that it exists; this walkthrough does not use it, because it causes problems later when manually joining nodes to the cluster.
#The images this walkthrough needs are listed below
#If you hit errors, reset with `kubeadm reset` and clean up:
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd
rm -rf /var/lib/dockershim
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /etc/cni/net.d
#Download the images to this machine manually:
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-apiserver.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-scheduler.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-controller-manager.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-pause.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-cordns.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-etcd.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-proxy.tar.gz
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
#Notes:
#pause is version 3.2, image k8s.gcr.io/pause:3.2
#etcd is version 3.4.3, image k8s.gcr.io/etcd:3.4.3-0
#coredns is version 1.6.7, image k8s.gcr.io/coredns:1.6.7
#apiserver, scheduler, controller-manager and kube-proxy are version 1.18.2, images:
#k8s.gcr.io/kube-apiserver:v1.18.2
#k8s.gcr.io/kube-controller-manager:v1.18.2
#k8s.gcr.io/kube-scheduler:v1.18.2
#k8s.gcr.io/kube-proxy:v1.18.2
#With many machines, push these images to an internal private registry once; kubeadm can then pull from it at init time via "--image-repository=<private registry address>", with no need to copy the images to every machine by hand.
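Mirroring the images into a private registry boils down to retagging and pushing each one. A sketch that only prints the docker commands (a dry run; pipe the output to sh to execute it, and note that "registry.example.local:5000" is a hypothetical registry address):

```shell
# Generate docker tag/push commands for every image this walkthrough uses.
REGISTRY="registry.example.local:5000"
for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 \
           kube-scheduler:v1.18.2 kube-proxy:v1.18.2 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  echo "docker tag k8s.gcr.io/$img $REGISTRY/$img"
  echo "docker push $REGISTRY/$img"
done
```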
#Almost done now
#A successful init prints instructions like the following; do as it says:
#mkdir -p $HOME/.kube
#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
#The output also includes a `kubeadm join ...` command. Save it: joining master02, master03 and node01 to the cluster means running that command on those nodes. The token and hash differ on every init, so record your own output; it is used below.
#Check the state
kubectl get nodes
#Copy the certificates from the master01 node to master02 and master03
#On master02 and master03, create the directories to hold them:
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
#On master01, copy the certificates over; run the scp commands below one line at a time to avoid mistakes
scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/
#With the certificates in place, run your own saved join command on master02 and master03 to add them to the cluster
#It looks similar to:
#kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c --control-plane
#--control-plane marks the joining node as a master
#Then on master02 and master03:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
#Deploy calico.yaml from the master01 node; master01 is the primary control plane, master02 and master03 are standbys
wget http://download.zhufunin.com/k8s_1.18/calico.yaml #(original source: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml)
kubectl apply -f calico.yaml

Join the node01 node to the k8s cluster; run the following on node01.

First make sure everything from the section above, "Base components to install on all master and node nodes", has been completed,

especially yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

Then run your own saved join command, which looks similar to kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

If kubeadm reports an error, append -v 6 to the command to see more detail.

If you have lost the kubeadm join parameters, regenerate them on a master node with the following command:

kubeadm token create --print-join-command

Check the cluster node status from the master01 node:

kubectl get nodes  

The output looks like:

NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   3m36s   v1.18.2
master02   Ready    master   3m36s   v1.18.2
master03   Ready    master   3m36s   v1.18.2
node01     Ready    <none>   3m36s   v1.18.2


Generic node initialization

#Set the hostname, and add it to the hosts file on the master nodes
hostnamectl set-hostname xxxx
#Install the required packages with yum
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils
#Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
#Set the timezone and enable NTP time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * root /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
#Raise the max open file descriptor limits
grep -q ulimit /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q "root soft nofile" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q "root hard nofile" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q "^\* soft nofile" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q "^\* hard nofile" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
#Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
#Optionally switch the yum repos to the Alibaba Cloud mirror
#mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#Configure the yum repo needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
#Configure the docker yum repo; if the default is slow for you, use the commented Alibaba Cloud mirror instead
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#Refresh the cache
yum clean all
yum makecache fast
#Install docker-ce 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
#Adjust the docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload && systemctl restart docker
#Enable the kernel settings k8s needs for bridged network traffic
#Make bridged packets pass through iptables and persist the settings (load br_netfilter first so the bridge sysctls exist)
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
#Install kubeadm, kubelet and kubectl
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
#Then run your own saved join command; it looks similar to:
#kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c
#Enable IPVS; without it kube-proxy falls back to iptables mode, which is less efficient, so loading the IPVS kernel modules is recommended
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs

If you have lost the kubeadm join parameters, regenerate them on a master node with the following command:

kubeadm token create --print-join-command
