For reference, another user's guide to building a Kubernetes 1.17.3 HA cluster on Alibaba Cloud:

https://www.cnblogs.com/gmmy/p/12372805.html

Prepare four CentOS 7 virtual machines for the Kubernetes cluster:

master01 (192.168.1.202): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master02 (192.168.1.203): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master03 (192.168.1.204): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

node01 (192.168.1.205): 2 CPU cores, 1 GB RAM, 60 GB disk, bridged network

Base components to install on all master and node nodes

#This guide is not one-click; read it line by line and run each step on the server indicated
#In most cases the commands can be pasted straight into a shell; a little Linux experience is assumed
#Adjust the hostnames (master01, master02, master03, node01) to match your own environment
#On master01
hostnamectl set-hostname master01
#On master02
hostnamectl set-hostname master02
#On master03
hostnamectl set-hostname master03
#On node01
hostnamectl set-hostname node01
#On master01, master02, master03 and node01, append the following lines to /etc/hosts:
cat >> /etc/hosts << EOF
192.168.1.202 master01
192.168.1.203 master02
192.168.1.204 master03
192.168.1.205 node01
EOF
#Create the SSH keys for passwordless login
#By default the relevant files live under /root/.ssh/
#Run the following on all master nodes
mkdir /root/.ssh/
chmod 700 /root/.ssh/
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh-keygen -t rsa #press Enter three times to accept the defaults
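Generating the keys alone does not enable passwordless login; the public key still has to land in each node's authorized_keys. A minimal sketch, assuming the /etc/hosts entries above (you will be prompted once for each target's root password):

#Distribute the public key to every node (run on each master)
for host in master01 master02 master03 node01; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}
done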
#Install the required packages via yum
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
vim ncurses-devel autoconf automake zlib-devel python-devel \
epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
device-mapper-persistent-data lvm2 yum-utils
#Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
#Set the timezone and configure ntp time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
#Raise the maximum file descriptor limit
cat /etc/profile |grep ulimit || echo "ulimit -n 65536" >> /etc/profile
grep "root soft nofile" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep "root hard nofile" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep "^\* soft nofile" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep "^\* hard nofile" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
#Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
#Switch the yum repos to the Aliyun mirror
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#Configure the yum repo needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
#Configure the docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Clean the yum cache
yum clean all
yum makecache fast
#Install docker 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
#Edit the docker daemon config
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload && systemctl restart docker
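An optional sanity check, not in the original post, to confirm the cgroup driver change took effect:

#Verify docker now reports the systemd cgroup driver
docker info 2>/dev/null | grep -i 'cgroup driver'
#Expected output: Cgroup Driver: systemd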
#Enable the kernel settings k8s needs for bridged network traffic
#Make bridged packets traverse iptables and persist the settings
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
#Enable IPVS; without these modules kube-proxy falls back to iptables mode, which is less efficient, so the official docs recommend enabling IPVS in the kernel
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
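Loading the modules only makes IPVS available; kube-proxy does not use it until its mode is set. A hedged note for later, once the cluster is up (this step is not part of the original flow):

#After the cluster is running, switch kube-proxy to IPVS mode:
kubectl -n kube-system edit configmap kube-proxy    #set mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy    #recreate the kube-proxy pods to pick up the change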
#Install kubeadm, kubelet and kubectl on master01, master02, master03 and node01
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

Installing and configuring the master nodes

#Deploy keepalived+LVS on master01, master02 and master03 to make the apiserver highly available
yum install -y socat keepalived ipvsadm conntrack
#Edit /etc/keepalived/keepalived.conf on each master
#On master01 the config keeps priority 100; adjust the master IPs and the desired virtual IP to your own environment
#The virtual IP used in this guide is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#On master02, change priority to 50; again adjust the master IPs and desired virtual IP to your environment
#The virtual IP below is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 50/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#On master03, change priority to 30; again adjust the master IPs and desired virtual IP to your environment
#The virtual IP below is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 30/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
#If your primary NIC is named ens33 instead of eth0, also run: sed -i 's/eth0/ens33/g' /etc/keepalived/keepalived.conf
#The downloaded keepalived.conf puts keepalived in BACKUP state with nopreempt (non-preemptive mode). If master01 goes down and later reboots, the VIP will not automatically float back to it.
#This keeps the cluster healthy: right after master01 boots, the apiserver and other components are not yet running, so if the VIP moved back immediately the whole cluster would break. That is why non-preemptive mode is used.
#The priority values (100/50/30) make the intended takeover order master01 -> master02 -> master03
#Run the following on master01, master02 and master03 in turn
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
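For orientation, a minimal sketch of what the downloaded keepalived.conf presumably looks like; the actual file from download.zhufunin.com may differ in detail, and virtual_router_id is an assumed value:

vrrp_instance VI_1 {
    state BACKUP              #all three masters start as BACKUP
    nopreempt                 #a recovered node does not take the VIP back
    interface eth0            #change to your NIC name if it is not eth0
    virtual_router_id 51      #assumed value; must match on all three masters
    priority 100              #100 on master01, 50 on master02, 30 on master03
    advert_int 1
    virtual_ipaddress {
        192.168.1.199         #the VIP substituted in by sed above
    }
}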
#Once keepalived is up, `ip addr` on master01 should show the VIP 192.168.1.199 (the virtual IP of this guide) bound to the NIC
#Run the following on master01
cd /usr/local/src
wget -O /usr/local/src/kubeadm-config.yaml http://download.zhufunin.com/k8s_1.18/kubeadm-config.yaml
#This file drives the master initialization; the sed lines below fill in the node IPs, so adjust them to your own environment
sed -i 's/master01/192.168.1.202/g' kubeadm-config.yaml
sed -i 's/master02/192.168.1.203/g' kubeadm-config.yaml
sed -i 's/master03/192.168.1.204/g' kubeadm-config.yaml
sed -i 's/VIP_addr/192.168.1.199/g' kubeadm-config.yaml
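After the substitutions the file presumably looks roughly like the sketch below, assuming it follows the common kubeadm v1beta2 HA layout for 1.18; the actual downloaded file may differ:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: "192.168.1.199:6443"    #the keepalived VIP
apiServer:
  certSANs:
  - 192.168.1.202
  - 192.168.1.203
  - 192.168.1.204
  - 192.168.1.199
networking:
  podSubnet: "10.244.0.0/16"    #the default flannel subnet mentioned below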
#Initialize master01
kubeadm init --config kubeadm-config.yaml
kubeadm config images list
#10.244.0.0/16 is the default subnet of the flannel network plugin; it comes up again later
#If the init fails pulling images, an alternative kubeadm-config.yaml adds one extra line, imageRepository: registry.aliyuncs.com/google_containers, which pulls from the Aliyun mirror that is directly reachable.
#That route is simpler, but it is mentioned here only for reference; this guide does not use it, because it causes problems later when manually adding nodes to the cluster.
#The images this guide needs are loaded manually below
#If anything went wrong, reset and clean up first:
kubeadm reset
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd
rm -rf /var/lib/dockershim
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /etc/cni/net.d
#Download the image archives to the local machine
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-apiserver.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-scheduler.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-controller-manager.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-pause.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-cordns.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-etcd.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-proxy.tar.gz
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
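An optional check, not in the original post, that the archives landed in the local image store:

#List the loaded k8s images
docker images | grep 'k8s.gcr.io'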
echo """
说明:
pause版本是3.2,用到的镜像是k8s.gcr.io/pause:3.2
etcd版本是3.4.3,用到的镜像是k8s.gcr.io/etcd:3.4.3-0        
cordns版本是1.6.7,用到的镜像是k8s.gcr.io/coredns:1.6.7
apiserver、scheduler、controller-manager、kube-proxy版本是1.18.2,用到的镜像分别是
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
如果机器很多,我们只需要把这些镜像传到我们的内部私有镜像仓库即可,这样我们在kubeadm初始化kubernetes时可以通过"--image-repository=私有镜像仓库地址"的方式进行镜像拉取,这样不需要手动传到镜像到每个机器 """
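A hedged sketch of that push, with registry.example.com standing in as a hypothetical placeholder for a private registry:

#Retag the loaded control-plane images and push them to a private registry
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  docker tag k8s.gcr.io/${img}:v1.18.2 registry.example.com/${img}:v1.18.2
  docker push registry.example.com/${img}:v1.18.2
done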
#After this step the install is nearly done
#On success, the init output shows something like the following; follow it as printed
#mkdir -p $HOME/.kube
#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Also note the kubeadm join ... command in the output: it is what joins master02, master03 and node01 to the cluster. It is different on every run, so record your own copy; it is needed below
#Check the status
kubectl get nodes
#Copy the certificates from master01 to master02 and master03
#On master02 and master03, create the certificate directories
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
#On master01, copy the certificates over; the scp commands below are best run one at a time to avoid mistakes (or see the loop sketch after them)
scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/
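The same copy expressed as a loop, for reference (a convenience sketch, equivalent to the commands above):

#Copy all required pki files to both masters in one pass
for h in master02 master03; do
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} ${h}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} ${h}:/etc/kubernetes/pki/etcd/
done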
#Once the certificates are in place, run your own join command on master02 and master03 to add them to the cluster
#It looks something like:
#kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c --control-plane
#--control-plane tells kubeadm the joining node is a master
#Then, on master02 and master03:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes 
#Deploy calico from master01; master01 is the primary control plane and master02/master03 are standbys
wget http://download.zhufunin.com/k8s_1.18/calico.yaml #(original source: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml)
kubectl apply -f calico.yaml
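An optional check, not in the original post, that the network plugin is coming up:

#Watch the calico and coredns pods start, then confirm the nodes go Ready
kubectl get pods -n kube-system -o wide
kubectl get nodes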

Joining node01 to the cluster (run on node01)

Make sure everything from "Base components to install on all master and node nodes" above has been completed,

in particular: yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

Then run your own join command, which looks something like:

kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

If kubeadm reports an error, append -v=6 to the command for more verbose output.

If you lose the kubeadm join parameters, regenerate them on a master node with:

kubeadm token create --print-join-command
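The printed command is for worker nodes; for an additional master, --control-plane has to be appended and the pki files must already be copied over, as above. The output has this shape (token and hash are placeholders):

#kubeadm join 192.168.1.199:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>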

Check the cluster node status from master01:

kubectl get nodes  

The output looks like:

NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   3m36s   v1.18.2
master02   Ready    master   3m36s   v1.18.2
master03   Ready    master   3m36s   v1.18.2
node01     Ready    <none>   3m36s   v1.18.2


Generic node initialization

#Set the hostname, and add it to the hosts file on the masters
hostnamectl set-hostname xxxx
#Install the required packages via yum
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
vim ncurses-devel autoconf automake zlib-devel python-devel \
epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
device-mapper-persistent-data lvm2 yum-utils
#Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
#Set the timezone and configure ntp time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
#Raise the maximum file descriptor limit
cat /etc/profile |grep ulimit || echo "ulimit -n 65536" >> /etc/profile
grep "root soft nofile" /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep "root hard nofile" /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep "^\* soft nofile" /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep "^\* hard nofile" /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
#Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
#Optionally switch the yum repos to the Aliyun mirror (can be skipped)
#mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#Configure the yum repo needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
#Configure the docker yum repo; if the official one is slow, use the Aliyun mirror commented out below
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#Clean the yum cache
yum clean all
yum makecache fast
#Install docker 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
#Edit the docker daemon config
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload && systemctl restart docker
#Enable the kernel settings k8s needs for bridged network traffic
#Make bridged packets traverse iptables and persist the settings
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
#Install kubeadm, kubelet and kubectl
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
#Then join with your own command, which looks something like:
#kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c
#Enable IPVS; without these modules kube-proxy falls back to iptables mode, which is less efficient, so the official docs recommend enabling IPVS in the kernel
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs

If you lose the kubeadm join parameters, regenerate them on a master node with:

kubeadm token create --print-join-command
