For reference, another user's write-up on building a Kubernetes 1.17.3 high-availability cluster on Alibaba Cloud:

https://www.cnblogs.com/gmmy/p/12372805.html

Prepare four CentOS 7 virtual machines on which to install the Kubernetes cluster.

master01 (192.168.1.202): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master02 (192.168.1.203): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

master03 (192.168.1.204): 2 CPU cores, 2 GB RAM, 60 GB disk, bridged network

node01 (192.168.1.205): 2 CPU cores, 1 GB RAM, 60 GB disk, bridged network

Base components to install on every master and node

# This tutorial is not one-click: read it line by line and run each step on the server(s) indicated.
# In most cases you can paste the commands straight into a shell; a little basic Linux knowledge is required.
# Adjust the hostnames (master01, master02, master03, node01) to match your own environment.
# On master01
hostnamectl set-hostname master01
# On master02
hostnamectl set-hostname master02
# On master03
hostnamectl set-hostname master03
# On node01
hostnamectl set-hostname node01
# On master01, master02, master03 and node01, append the following lines to /etc/hosts:
cat >> /etc/hosts << EOF
192.168.1.202 master01
192.168.1.203 master02
192.168.1.204 master03
192.168.1.205 node01
EOF
# Set up keys for passwordless SSH login
# By default the relevant files live under /root/.ssh/
# Run the following on all master nodes
mkdir -p /root/.ssh/
chmod 700 /root/.ssh/
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh-keygen -t rsa   # press Enter three times to accept the defaults
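The keys above are only generated, never distributed, so passwordless login will not work yet. A minimal sketch of the missing distribution step (assumes root SSH logins are allowed and the /etc/hosts entries above are in place; adjust the host list per node):

# Push this node's public key to the other nodes (you will be prompted for each root password once)
for host in master01 master02 master03 node01; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}
done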
# Install some required packages via yum
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
# Set the timezone and schedule hourly NTP time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * root /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
# Raise the maximum file descriptor limits
# (each grep checks for its own specific entry, so every missing line gets appended)
grep -q ulimit /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q 'root soft nofile' /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q 'root hard nofile' /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q '\* soft nofile' /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q '\* hard nofile' /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
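A quick check that swap is really off (the Swap line should show all zeros):

free -m | grep -i swap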
# Switch the yum repos to the Aliyun mirrors
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Configure the yum repo needed to install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# Configure the Docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Clean and rebuild the yum cache
yum clean all
yum makecache fast
# Install Docker 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
# Adjust the Docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload && systemctl restart docker
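To confirm Docker picked up the new settings (it should report the systemd cgroup driver):

docker info 2>/dev/null | grep -i 'cgroup driver'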
# Enable the kernel settings Kubernetes needs for bridge networking
# Make bridged packets traverse iptables, and persist the settings across reboots
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # loads /etc/sysctl.d/*.conf as well as /etc/sysctl.conf
# Enable IPVS: without it kube-proxy falls back to iptables, which is less efficient, so the docs recommend loading the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
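Note that loading these modules does not by itself switch kube-proxy to IPVS; kube-proxy only uses IPVS when its proxy mode is set to ipvs (the kubeadm-config.yaml downloaded later presumably takes care of this — worth verifying). Once the cluster is up, you can check from a master which proxier is in use:

kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier | head -n 3
# look for "Using ipvs Proxier" (IPVS active) versus "Using iptables Proxier"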
# On master01, master02, master03 and node01, install kubeadm, kubelet and kubectl
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
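A quick sanity check that matching v1.18.2 binaries were installed:

kubeadm version -o short
kubelet --version
kubectl version --client --short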

Installing and configuring the master nodes

# On master01, master02 and master03, deploy keepalived + LVS to make the apiserver highly available
yum install -y socat keepalived ipvsadm conntrack
# Edit /etc/keepalived/keepalived.conf on each master
# On master01: the downloaded config uses priority 100; substitute your own master IPs and desired virtual IP
# The virtual IP in this tutorial is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
# On master02: same steps, but set priority to 50; substitute your own master IPs and desired virtual IP
# The virtual IP below is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 50/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
# On master03: same steps, but set priority to 30; substitute your own master IPs and desired virtual IP
# The virtual IP below is 192.168.1.199
wget -O /etc/keepalived/keepalived.conf http://download.zhufunin.com/k8s_1.18/keepalived.conf
sed -i 's/priority 100/priority 30/g' /etc/keepalived/keepalived.conf
sed -i 's/master01/192.168.1.202/g' /etc/keepalived/keepalived.conf
sed -i 's/master02/192.168.1.203/g' /etc/keepalived/keepalived.conf
sed -i 's/master03/192.168.1.204/g' /etc/keepalived/keepalived.conf
sed -i 's/VIP_addr/192.168.1.199/g' /etc/keepalived/keepalived.conf
# If your primary NIC is not eth0 (for example ens33), update the interface name in the config:
# sed -i 's/eth0/ens33/g' /etc/keepalived/keepalived.conf
# The downloaded keepalived.conf runs keepalived in BACKUP mode with nopreempt (non-preemptive).
# If master01 goes down and later reboots, the VIP does NOT automatically float back to it. This keeps
# the cluster healthy: right after master01 boots, the apiserver and the other components are not yet
# running, and if the VIP moved back immediately the whole cluster would go down. That is why
# non-preemptive mode is used.
# The priority values give the startup order master01 -> master02 -> master03.
# On master01, master02 and master03 in turn, run:
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
# Once keepalived is up, `ip addr` on master01 should show the VIP 192.168.1.199 (this tutorial's virtual IP) bound to the NIC
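For example, on master01 (assuming the NIC is eth0 as in the downloaded config):

ip addr show eth0 | grep 192.168.1.199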
# Run the following on master01
cd /usr/local/src
wget -O /usr/local/src/kubeadm-config.yaml http://download.zhufunin.com/k8s_1.18/kubeadm-config.yaml
# This file drives the master initialization; the sed commands below fill in the node IPs. Adjust them for your environment.
sed -i 's/master01/192.168.1.202/g' kubeadm-config.yaml
sed -i 's/master02/192.168.1.203/g' kubeadm-config.yaml
sed -i 's/master03/192.168.1.204/g' kubeadm-config.yaml
sed -i 's/VIP_addr/192.168.1.199/g' kubeadm-config.yaml
# Initialize master01
kubeadm init --config kubeadm-config.yaml
kubeadm config images list   # lists the images this kubeadm version needs
# 10.244.0.0/16 is the flannel network plugin's default pod CIDR; it comes up again later
# If image pulls fail, there is a variant kubeadm-config.yaml with one extra line, imageRepository: registry.aliyuncs.com/google_containers, which pulls from the directly reachable Aliyun mirror. That is simpler, but just know it exists for now; this tutorial does not use it, because it causes problems later when manually adding nodes to the cluster.
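For reference only, a rough sketch of what that Aliyun-mirror variant could look like — the field values below are assumptions pieced together from this tutorial (v1.18.2, VIP 192.168.1.199, pod CIDR 10.244.0.0/16); the downloaded kubeadm-config.yaml remains the authoritative file:

cat > kubeadm-config-aliyun.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
# pull control-plane images from the Aliyun mirror instead of k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.1.199:6443"
networking:
  podSubnet: "10.244.0.0/16"
EOF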
# The images this tutorial needs are listed further below
# If anything goes wrong, you can reset and clean up:
kubeadm reset
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd
rm -rf /var/lib/dockershim
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /etc/cni/net.d
# Next, download the images manually to this machine:
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-apiserver.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-scheduler.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-controller-manager.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-pause.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-cordns.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-etcd.tar.gz
wget http://download.zhufunin.com/k8s_1.18/1-18-kube-proxy.tar.gz
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
Note:
pause is version 3.2; the image is k8s.gcr.io/pause:3.2
etcd is version 3.4.3; the image is k8s.gcr.io/etcd:3.4.3-0
coredns is version 1.6.7; the image is k8s.gcr.io/coredns:1.6.7
apiserver, scheduler, controller-manager and kube-proxy are version 1.18.2; the images are
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
If you have many machines, just push these images to an internal private registry; then kubeadm can pull them during initialization via "--image-repository=<private registry address>", and you don't have to copy the images to every machine by hand.
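As a sketch of that private-registry workflow (registry.example.local:5000 is a hypothetical registry address; substitute your own):

# Retag a loaded image for the private registry and push it; repeat for each image
docker tag k8s.gcr.io/kube-apiserver:v1.18.2 registry.example.local:5000/kube-apiserver:v1.18.2
docker push registry.example.local:5000/kube-apiserver:v1.18.2
# Then set imageRepository: registry.example.local:5000 in kubeadm-config.yaml
# (or pass --image-repository to kubeadm init when not using --config)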
# After this step the setup is nearly done
# A successful init prints output like the following; do what it says
#mkdir -p $HOME/.kube
#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Also take note of the printed `kubeadm join ...` command: you will run it on master02, master03 and node01 to join them to the cluster. The token and hash are different on every run, so record the output of YOUR init; it is needed below.
# Check the status
kubectl get nodes
# Copy master01's certificates to master02 and master03
# On master02 and master03, create the certificate directories
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# On master01, copy the certificates over. Run the scp commands below one line at a time to avoid mistakes.
scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/
# With the certificates in place, run YOUR recorded join command on master02 and master03 to add them to the cluster. It looks like:
# kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#     --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c --control-plane
# --control-plane: this flag makes the joining node a master (control-plane) node
# Then on master02 and master03:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
# Deploy calico.yaml from master01 (master01 is the active control node; master02 and master03 are standbys)
wget http://download.zhufunin.com/k8s_1.18/calico.yaml
# (original source: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml)
kubectl apply -f calico.yaml
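The network plugin takes a minute or two to come up; watch the calico and coredns pods until they are Running, at which point the nodes flip from NotReady to Ready:

kubectl get pods -n kube-system -o wide
kubectl get nodes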

Joining node01 to the Kubernetes cluster (run these steps on node01)

Make sure you have completed all of the base-component steps above that apply to every master and node.

In particular: yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet

Then run your recorded join command, which looks like:

kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

If kubeadm reports an error, append -v 6 to the command for more verbose output.

If you have lost the kubeadm join parameters, run the following on a master node to regenerate them:

kubeadm token create --print-join-command
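The printed command has the same shape as the earlier one, just with a fresh token, e.g. (placeholders, not real values):

kubeadm join 192.168.1.199:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# append --control-plane only when the joining machine is meant to be a master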

Check the cluster node status from master01:

kubectl get nodes  

The output looks like:

NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   3m36s   v1.18.2
master02   Ready    master   3m36s   v1.18.2
master03   Ready    master   3m36s   v1.18.2
node01     Ready    <none>   3m36s   v1.18.2

Generic node initialization

# Set the hostname, and add it to the /etc/hosts file on the masters
hostnamectl set-hostname xxxx
# Install the required yum packages
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel \
  vim ncurses-devel autoconf automake zlib-devel python-devel \
  epel-release openssh-server socat ipvsadm conntrack bind-utils libffi-devel \
  device-mapper-persistent-data lvm2 yum-utils
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
iptables -F && service iptables save
service iptables stop && systemctl disable iptables
# Set the timezone and schedule hourly NTP time sync
mv -f /etc/localtime /etc/localtime.bak
/bin/cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'ZONE="Asia/Shanghai"' > /etc/sysconfig/clock
ntpdate cn.pool.ntp.org
echo "0 */1 * * * root /usr/sbin/ntpdate cn.pool.ntp.org" >> /etc/crontab
service crond restart
# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
# Raise the maximum file descriptor limits
grep -q ulimit /etc/profile || echo "ulimit -n 65536" >> /etc/profile
grep -q 'root soft nofile' /etc/security/limits.conf || echo "root soft nofile 65536" >> /etc/security/limits.conf
grep -q 'root hard nofile' /etc/security/limits.conf || echo "root hard nofile 65536" >> /etc/security/limits.conf
grep -q '\* soft nofile' /etc/security/limits.conf || echo "* soft nofile 65536" >> /etc/security/limits.conf
grep -q '\* hard nofile' /etc/security/limits.conf || echo "* hard nofile 65536" >> /etc/security/limits.conf
# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# Optionally switch the yum repos to the Aliyun mirrors
#mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Configure the yum repo needed to install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# Configure the Docker yum repo; if it is slow for you, use the Aliyun mirror below instead
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Clean and rebuild the yum cache
yum clean all
yum makecache fast
# Install Docker 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
systemctl status docker
# Adjust the Docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl daemon-reload && systemctl restart docker
# Enable the kernel settings Kubernetes needs for bridge networking
# Make bridged packets traverse iptables, and persist the settings across reboots
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # loads /etc/sysctl.d/*.conf as well as /etc/sysctl.conf
# Install kubeadm, kubelet and kubectl
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y && systemctl enable kubelet && systemctl start kubelet
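Note that kubelet will restart in a loop at this point; until the node is actually joined to a cluster it has no configuration yet, and that is expected. You can confirm it is just waiting:

systemctl status kubelet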
# Later, once this node is ready, you will run your recorded join command here; it looks like:
# kubeadm join 192.168.1.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
#     --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c
# Enable IPVS: without it kube-proxy falls back to iptables, which is less efficient, so the docs recommend loading the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs

If you have lost the kubeadm join parameters, run the following on a master node to regenerate them:

kubeadm token create --print-join-command
