K8s Cluster Environment Setup
1. Environment Planning
1.1 Cluster Types
Kubernetes clusters broadly fall into two categories: single-master and multi-master.
- Single-master: one master node and multiple worker nodes. Simple to set up, but the master is a single point of failure; suitable for test environments.
- Multi-master: multiple master nodes and multiple worker nodes. More work to set up, but highly available; suitable for production environments.
1.2 Installation Methods
Kubernetes can be deployed in several ways; the mainstream options are kubeadm, minikube, and binary packages.
- Minikube: a tool for quickly standing up a single-node Kubernetes instance.
- Kubeadm: a tool for quickly bootstrapping a Kubernetes cluster, https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- Binary packages: download each component's binaries from the official site and install them one by one. This approach is more instructive for understanding the Kubernetes components, https://github.com/kubernetes/kubernetes
Note: we need a Kubernetes cluster but do not want too much hassle, so we choose the kubeadm approach.
1.3 Environment Preparation
Role | IP address | Components
---|---|---
master | 192.168.111.100 | docker, kubectl, kubeadm, kubelet
node1 | 192.168.111.101 | docker, kubectl, kubeadm, kubelet
node2 | 192.168.111.102 | docker, kubectl, kubeadm, kubelet
2. Environment Setup
Note:
This setup requires three Linux machines (one master, two workers) running CentOS (CentOS Stream 8 in this walkthrough). Each machine then gets Docker, kubeadm (1.25.4), kubelet (1.25.4), and kubectl (1.25.4).
2.1 Host Installation
When installing the virtual machines, pay attention to the following settings:
Operating system environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 7+
Language: Simplified Chinese / English
Software selection: Infrastructure Server
Partitioning: automatic / manual
Network configuration: set the network addresses as follows
Network address: 192.168.111.(100, 101, 102)
Subnet mask: 255.255.255.0
Default gateway: 192.168.111.254
DNS: 8.8.8.8
Hostname settings:
Master node: master
Worker node: node1
Worker node: node2
2.2 Environment Initialization
Check the operating system version
# Installing a Kubernetes cluster this way requires CentOS 7.5 or later
[root@master ~]#cat /etc/redhat-release
CentOS Stream release 8
Hostname resolution (on all three nodes)
# To make it easy for cluster nodes to reach each other by name, configure hostname resolution here; in production an internal DNS server is recommended
[root@master ~]#cat >> /etc/hosts << EOF
> 192.168.111.100 master.example.com master
> 192.168.111.101 node1.example.com node1
> 192.168.111.102 node2.example.com node2
> EOF
[root@master ~]#scp /etc/hosts root@192.168.111.101:/etc/hosts
The authenticity of host '192.168.111.101 (192.168.111.101)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.111.101' (ECDSA) to the list of known hosts.
root@192.168.111.101's password:
hosts 100% 280 196.1KB/s 00:00
[root@master ~]#scp /etc/hosts root@192.168.111.102:/etc/hosts
The authenticity of host '192.168.111.102 (192.168.111.102)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.111.102' (ECDSA) to the list of known hosts.
root@192.168.111.102's password:
hosts
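To confirm that name resolution works on every node, a quick loop like the following can be run from the master (a minimal sketch; the short names assume the /etc/hosts entries above):

# ping each peer once by short name
for h in master node1 node2; do
  ping -c 1 "$h" > /dev/null && echo "$h: OK" || echo "$h: FAIL"
done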
Time synchronization
# Kubernetes requires the clocks of all cluster nodes to be closely in sync. Here the chronyd service syncs time over the network; in production an internal time server is recommended
- On master
[root@master ~]#vim /etc/chrony.conf
local stratum 10
[root@master ~]#systemctl restart chronyd.service
[root@master ~]#systemctl enable chronyd.service
[root@master ~]#hwclock -w
- On node1
[root@node1 ~]#vim /etc/chrony.conf
server master.example.com iburst
...
[root@node1 ~]#systemctl restart chronyd.service
[root@node1 ~]#systemctl enable chronyd.service
[root@node1 ~]#hwclock -w
- On node2
[root@node2 ~]#vim /etc/chrony.conf
server master.example.com iburst
...
[root@node2 ~]#systemctl restart chronyd.service
[root@node2 ~]#systemctl enable chronyd.service
[root@node2 ~]#hwclock -w
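To verify the nodes are actually syncing from the master, chrony's client tool can be queried on node1/node2. Note that for the master to serve time to clients, its /etc/chrony.conf usually also needs an allow directive for the subnet (e.g. allow 192.168.111.0/24); that line is an assumption, not shown in the config above:

[root@node1 ~]#chronyc sources -v    # the master should appear, marked ^* once selected
[root@node1 ~]#chronyc tracking      # overall sync status and offset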
Disable firewalld, SELinux, and postfix (on all three nodes)
# Turn off the firewall, SELinux, and postfix on all 3 hosts
- On master
[root@master ~]#systemctl disable --now firewalld
[root@master ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@master ~]#setenforce 0
[root@master ~]#systemctl stop postfix
[root@master ~]#systemctl disable postfix
- On node1
[root@node1 ~]#systemctl disable --now firewalld
[root@node1 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@node1 ~]#setenforce 0
[root@node1 ~]#systemctl stop postfix
[root@node1 ~]#systemctl disable postfix
- On node2
[root@node2 ~]#systemctl disable --now firewalld
[root@node2 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@node2 ~]#setenforce 0
[root@node2 ~]#systemctl stop postfix
[root@node2 ~]#systemctl disable postfix
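A quick sanity check after these changes (run on each node; expected results noted in the comments):

getenforce                      # Permissive now, Disabled after the next reboot
systemctl is-active firewalld   # inactive
systemctl is-enabled firewalld  # disabled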
Disable the swap partition (on all three nodes)
- On master
[root@master ~]#vim /etc/fstab # comment out the swap line
[root@master ~]#swapoff -a
- On node1
[root@node1 ~]#vim /etc/fstab # comment out the swap line
[root@node1 ~]#swapoff -a
- On node2
[root@node2 ~]#vim /etc/fstab # comment out the swap line
[root@node2 ~]#swapoff -a
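Editing fstab by hand works, but the swap line can also be commented out non-interactively; afterwards, verify that no swap remains active. A sketch using the common sed one-liner (it blindly comments any line containing "swap", which is fine for a default CentOS fstab):

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry
swapoff -a
swapon --show                         # no output means swap is fully off
free -h                               # the Swap row should read 0B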
Enable IP forwarding and adjust kernel parameters (all three nodes need this)
- On master
[root@master ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]#modprobe br_netfilter
[root@master ~]#sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
- On node1
[root@node1 ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node1 ~]#modprobe br_netfilter
[root@node1 ~]#sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
- On node2
[root@node2 ~]#vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node2 ~]#modprobe br_netfilter
[root@node2 ~]#sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
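Note that modprobe br_netfilter does not persist across reboots, so the bridge sysctls above would fail to apply after a restart. A modules-load entry keeps the module loaded at boot (the filename k8s.conf is arbitrary):

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# re-check that the settings are in effect
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables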
Configure IPVS (on all three nodes)
- On master
[root@master ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@master ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]#lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 1 ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@master ~]#reboot
- On node1
[root@node1 ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@node1 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]#lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 1 ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@node1 ~]#reboot
- On node2
[root@node2 ~]#vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@node2 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]#bash /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]#lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 1 ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@node2 ~]#reboot
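Loading the ip_vs modules only makes IPVS available; kube-proxy still runs in iptables mode by default. After the cluster is initialized (see below), it can be switched to IPVS mode. A sketch, assuming the standard kubeadm-created kube-proxy ConfigMap and pod label:

kubectl -n kube-system edit configmap kube-proxy           # set mode: "ipvs"
kubectl -n kube-system delete pods -l k8s-app=kube-proxy   # restart kube-proxy to pick it up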
Passwordless SSH authentication
[root@master ~]#ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VcZ6m+gceBJxwysFWwM08526KiBoSt9qdbDQoMSx3kU root@master
The key's randomart image is:
+---[RSA 3072]----+
|... E .*+o.o |
| o... .*==.. |
|... o. .+o+o |
|.....o o.o.. |
| o .. o S+.o o |
|.o. .o .o +.o |
|+ ..o.. =.. |
|. o .. .o |
| ... .. |
+----[SHA256]-----+
[root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.111.101)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.111.102)' can't be established.
ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.
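With key-based login in place, the repeated per-node checks from the sections above can also be driven from the master in one loop; a small sketch (assumes the hostnames resolve via /etc/hosts):

for h in node1 node2; do
  ssh "root@$h" "hostname; getenforce; swapon --show"
done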
2.3 Installing Docker
Switch the package mirror
- On master
[root@master /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master /etc/yum.repos.d]# dnf -y install epel-release
[root@master /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- On node1
[root@node1 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node1 /etc/yum.repos.d]# dnf -y install epel-release
[root@node1 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- On node2
[root@node2 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@node2 /etc/yum.repos.d]# dnf -y install epel-release
[root@node2 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker-ce
- On master
[root@master ~]# dnf -y install docker-ce --allowerasing
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
- On node1
[root@node1 ~]# dnf -y install docker-ce --allowerasing
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# systemctl enable docker
- On node2
[root@node2 ~]# dnf -y install docker-ce --allowerasing
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# systemctl enable docker
Add a configuration file to set up a Docker registry mirror (accelerator)
- On master
[root@master ~]#cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
[root@master ~]#systemctl daemon-reload
[root@master ~]#systemctl restart docker
- On node1
[root@node1 ~]#cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
[root@node1 ~]#systemctl daemon-reload
[root@node1 ~]#systemctl restart docker
- On node2
[root@node2 ~]#cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
[root@node2 ~]#systemctl daemon-reload
[root@node2 ~]#systemctl restart docker
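To confirm Docker picked up the new daemon.json, check docker info on each node:

docker info | grep -i 'cgroup driver'        # should report: Cgroup Driver: systemd
docker info | grep -A 1 'Registry Mirrors'   # should list the aliyuncs mirror configured above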
2.4 Installing Kubernetes Components
The Kubernetes packages are hosted overseas and download slowly, so switch to a domestic mirror here
- On master
[root@master ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- On node1
[root@node1 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- On node2
[root@node2 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the kubeadm, kubelet, and kubectl tools
- On master
[root@master ~]#dnf -y install kubeadm kubelet kubectl
[root@master ~]#systemctl restart kubelet
[root@master ~]#systemctl enable kubelet
- On node1
[root@node1 ~]#dnf -y install kubeadm kubelet kubectl
[root@node1 ~]#systemctl restart kubelet
[root@node1 ~]#systemctl enable kubelet
- On node2
[root@node2 ~]#dnf -y install kubeadm kubelet kubectl
[root@node2 ~]#systemctl restart kubelet
[root@node2 ~]#systemctl enable kubelet
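A plain dnf install pulls whatever version the mirror currently serves, while the init step below targets v1.25.4. If the versions drift, they can be pinned explicitly (a sketch; assumes the repo still carries the 1.25.4 packages):

dnf -y install kubeadm-1.25.4 kubelet-1.25.4 kubectl-1.25.4
# confirm the installed versions before initializing
kubeadm version -o short
kubelet --version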
Configure containerd
# To make sure that cluster initialization and joining the cluster succeed later, containerd's configuration file /etc/containerd/config.toml must be adjusted; do this on every node
- On master
[root@master ~]#containerd config default > /etc/containerd/config.toml
# In /etc/containerd/config.toml, change the k8s image registry to registry.aliyuncs.com/google_containers
[root@master ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Then restart containerd and enable it at boot
[root@master ~]#systemctl restart containerd
[root@master ~]#systemctl enable containerd
- On node1
-node1节点
[root@node1 ~]#containerd config default > /etc/containerd/config.toml
# In /etc/containerd/config.toml, change the k8s image registry to registry.aliyuncs.com/google_containers
[root@node1 ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Then restart containerd and enable it at boot
[root@node1 ~]#systemctl restart containerd
[root@node1 ~]#systemctl enable containerd
- On node2
-node2节点
[root@node2 ~]#containerd config default > /etc/containerd/config.toml
# In /etc/containerd/config.toml, change the k8s image registry to registry.aliyuncs.com/google_containers
[root@node2 ~]#vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Then restart containerd and enable it at boot
[root@node2 ~]#systemctl restart containerd
[root@node2 ~]#systemctl enable containerd
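A quick check that the sandbox image edit took effect, using containerd's own CLI:

containerd config dump | grep sandbox_image    # should print the registry.aliyuncs.com pause image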
Deploy the k8s master node
- On master
[root@master ~]#kubeadm init \
--apiserver-advertise-address=192.168.111.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
# It is recommended to save the initialization output to a file
[root@master ~]#vim k8s
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
	--discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09

[root@master ~]#vim /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]#source /etc/profile.d/k8s.sh
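The join token printed by kubeadm init is only valid for 24 hours by default. If it expires, or the saved output is lost before the nodes join, a fresh join command can be generated on the master:

[root@master ~]#kubeadm token create --print-join-command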
Install the pod network plugin
- On master
[root@master ~]#wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master ~]#kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 6m41s v1.25.4
[root@master ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 7m10s v1.25.4
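The master flips from NotReady to Ready once the flannel DaemonSet pods come up; their progress can be watched directly (the kube-flannel namespace is created by the manifest applied above):

[root@master ~]#kubectl get pods -n kube-flannel -o wide
[root@master ~]#kubectl get pods -n kube-system    # coredns pods should also reach Running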
Join the worker nodes to the k8s cluster
- On node1
[root@node1 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
> --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
- On node2
[root@node2 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
> --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
Check node status with kubectl get nodes
- On master
[root@master ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 9m37s v1.25.4
node1 NotReady <none> 51s v1.25.4
node2 NotReady <none> 31s v1.25.4
[root@master ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 9m57s v1.25.4
node1 Ready <none> 71s v1.25.4
node2 Ready <none> 51s v1.25.4
Create a pod in the k8s cluster that runs an nginx container, then test it
[root@master ~]#kubectl create deployment nginx --image nginx
deployment.apps/nginx created
[root@master ~]#kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-76d6c9b8c-z7p4l 1/1 Running 0 35s
[root@master ~]#kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed
[root@master ~]#kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-76d6c9b8c-z7p4l 1/1 Running 0 119s 10.244.1.2 node1 <none> <none>
[root@master ~]#kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m
nginx NodePort 10.109.37.202 <none> 80:31125/TCP 17s
Test access
Modify the default web page
[root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
root@nginx-76d6c9b8c-z7p4l:/usr/share/nginx/html# echo "zhaoshulin" > index.html
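The NodePort service maps container port 80 to port 31125 on every node (the port shown in the kubectl get services output above), so the page can be fetched from any node's IP:

curl http://192.168.111.100:31125    # should return the modified page: zhaoshulin
curl http://192.168.111.101:31125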