K8s Cluster Environment Setup

1. Environment Planning

1.1 Cluster Types

Kubernetes clusters broadly fall into two types: single-master multi-node and multi-master multi-node.

  • Single master, multiple nodes: one master and several worker nodes. Simple to set up, but the master is a single point of failure; suited to test environments.
  • Multiple masters, multiple nodes: several masters and several worker nodes. More work to set up, but highly available; suited to production environments.

1.2 Installation Methods

Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages.

Note: we need a Kubernetes cluster without too much hassle, so this walkthrough uses kubeadm.

1.3 Environment

Role      IP address         Components
master    192.168.111.100    docker, kubectl, kubeadm, kubelet
node1     192.168.111.101    docker, kubectl, kubeadm, kubelet
node2     192.168.111.102    docker, kubectl, kubeadm, kubelet

2. Environment Setup

Note:

This setup uses three Linux hosts (one master, two workers); the walkthrough below runs CentOS Stream 8 (this install method requires CentOS 7.5 or later). On each host we then install docker, kubeadm (1.25.4), kubelet (1.25.4), and kubectl (1.25.4).

2.1 Host Installation

  • Pay attention to the following settings when installing the virtual machines:

  • OS environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 7+

  • Language: Simplified Chinese / English

  • Software selection: Infrastructure Server

  • Partitioning: automatic / manual

  • Network configuration: use the address information below

      • IP addresses: 192.168.111.(100, 101, 102)
      • Netmask: 255.255.255.0
      • Default gateway: 192.168.111.254
      • DNS: 8.8.8.8

  • Hostnames:

      • Master node: master
      • Node: node1
      • Node: node2

2.2 Environment Initialization

  1. Check the OS version

    # This kubeadm-based install requires CentOS 7.5 or later
    [root@master ~]#cat /etc/redhat-release
    CentOS Stream release 8
  2. Host name resolution (on all three nodes)

    # To let cluster nodes reach each other by name, configure name resolution here; in production an internal DNS server is recommended
    [root@master ~]#cat >> /etc/hosts << EOF
    > 192.168.111.100 master.example.com master
    > 192.168.111.101 node1.example.com node1
    > 192.168.111.102 node2.example.com node2
    > EOF
    [root@master ~]#scp /etc/hosts root@192.168.111.101:/etc/hosts
    The authenticity of host '192.168.111.101 (192.168.111.101)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.111.101' (ECDSA) to the list of known hosts.
    root@192.168.111.101's password:
    hosts                                        100%  280   196.1KB/s   00:00
    [root@master ~]#scp /etc/hosts root@192.168.111.102:/etc/hosts
    The authenticity of host '192.168.111.102 (192.168.111.102)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.111.102' (ECDSA) to the list of known hosts.
    root@192.168.111.102's password:
    hosts
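    # Optional sanity check (a minimal sketch; assumes ping is available): every name should resolve and answer
    [root@master ~]#for h in master node1 node2; do ping -c1 -W1 $h > /dev/null && echo "$h ok"; done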
  3. Time synchronization

    # Kubernetes requires the node clocks to be closely in sync. Here the chronyd service syncs time over the network; in production an internal time server is recommended
    - master node
    [root@master ~]#vim /etc/chrony.conf
    local stratum 10
    allow 192.168.111.0/24    # permit the nodes' subnet to sync from this server
    [root@master ~]#systemctl restart chronyd.service
    [root@master ~]#systemctl enable chronyd.service
    [root@master ~]#hwclock -w
    - node1 node
    [root@node1 ~]#vim /etc/chrony.conf
    server master.example.com iburst
    ...
    [root@node1 ~]#systemctl restart chronyd.service
    [root@node1 ~]#systemctl enable chronyd.service
    [root@node1 ~]#hwclock -w
    - node2 node
    [root@node2 ~]#vim /etc/chrony.conf
    server master.example.com iburst
    ...
    [root@node2 ~]#systemctl restart chronyd.service
    [root@node2 ~]#systemctl enable chronyd.service
    [root@node2 ~]#hwclock -w
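    # To verify a node is really syncing from the master, chronyc can list its sources; the master.example.com line should gain a '^*' marker once selected
    [root@node1 ~]#chronyc sources -v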
  4. Disable firewalld, SELinux, and postfix (on all three nodes)

    # Turn off the firewall, SELinux, and postfix on all three hosts
    - master node
    [root@master ~]#systemctl disable --now firewalld
    [root@master ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@master ~]#setenforce 0
    [root@master ~]#systemctl stop postfix
    [root@master ~]#systemctl disable postfix
    - node1 node
    [root@node1 ~]#systemctl disable --now firewalld
    [root@node1 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@node1 ~]#setenforce 0
    [root@node1 ~]#systemctl stop postfix
    [root@node1 ~]#systemctl disable postfix
    - node2 node
    [root@node2 ~]#systemctl disable --now firewalld
    [root@node2 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@node2 ~]#setenforce 0
    [root@node2 ~]#systemctl stop postfix
    [root@node2 ~]#systemctl disable postfix
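    # Quick verification sketch: setenforce 0 leaves SELinux Permissive until the reboot later in this section makes it Disabled
    [root@master ~]#getenforce
    Permissive
    [root@master ~]#systemctl is-enabled firewalld
    disabled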
  5. Disable the swap partition (on all three nodes)

    - master node
    [root@master ~]#vim /etc/fstab    # comment out the swap line
    [root@master ~]#swapoff -a
    - node1 node
    [root@node1 ~]#vim /etc/fstab    # comment out the swap line
    [root@node1 ~]#swapoff -a
    - node2 node
    [root@node2 ~]#vim /etc/fstab    # comment out the swap line
    [root@node2 ~]#swapoff -a
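    # Worth confirming, since kubelet refuses to start with swap enabled; no output from swapon means swap is off
    [root@master ~]#swapon --show
    [root@master ~]#free -h | grep -i swap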
  6. Enable IP forwarding and adjust kernel parameters (on all three nodes)

    - master node
    [root@master ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@master ~]#modprobe br_netfilter
    [root@master ~]#sysctl -p /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    - node1 node
    [root@node1 ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@node1 ~]#modprobe br_netfilter
    [root@node1 ~]#sysctl -p /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    - node2 node
    [root@node2 ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@node2 ~]#modprobe br_netfilter
    [root@node2 ~]#sysctl -p /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
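    # modprobe does not persist across reboots (and the hosts are rebooted in the next step); a minimal sketch using systemd's modules-load mechanism, available on CentOS Stream 8 — run on every node
    [root@master ~]#cat > /etc/modules-load.d/k8s.conf << EOF
    > br_netfilter
    > EOF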
  7. Configure IPVS support (on all three nodes)

    - master node
    [root@master ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    [root@master ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@master ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@master ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@master ~]#reboot
    - node1 node
    [root@node1 ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    [root@node1 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@node1 ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@node1 ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@node1 ~]#reboot
    - node2 node
    [root@node2 ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    [root@node2 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@node2 ~]#reboot
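    # Two optional follow-ups (hedged sketches, per node): persist the ip_vs modules across reboots via the modules-load.d file created earlier, and install ipvsadm to inspect the IPVS table later (it stays empty until kube-proxy actually runs in IPVS mode)
    [root@master ~]#cat >> /etc/modules-load.d/k8s.conf << EOF
    > ip_vs
    > ip_vs_rr
    > ip_vs_wrr
    > ip_vs_sh
    > EOF
    [root@master ~]#dnf -y install ipvsadm
    [root@master ~]#ipvsadm -Ln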
  8. Passwordless SSH

    [root@master ~]#ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:VcZ6m+gceBJxwysFWwM08526KiBoSt9qdbDQoMSx3kU root@master
    The key's randomart image is:
    +---[RSA 3072]----+
    |... E .*+o.o |
    | o... .*==.. |
    |... o. .+o+o |
    |.....o o.o.. |
    | o .. o S+.o o |
    |.o. .o .o +.o |
    |+ ..o.. =.. |
    |. o .. .o |
    | ... .. |
    +----[SHA256]-----+
    [root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node1 (192.168.111.101)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password:

    Number of key(s) added: 1

    Now try logging into the machine, with:   "ssh 'root@node1'"
    and check to make sure that only the key(s) you wanted were added.

    [root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node2 (192.168.111.102)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node2's password:

    Number of key(s) added: 1

    Now try logging into the machine, with:   "ssh 'root@node2'"
    and check to make sure that only the key(s) you wanted were added.
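    # With key-based login in place, the per-node steps below can also be driven from the master; a minimal sketch
    [root@master ~]#for h in node1 node2; do ssh root@$h 'hostname; date'; done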

2.3 Installing Docker

  1. Switch the package mirrors

    - master node
    [root@master /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@master /etc/yum.repos.d]#dnf -y install epel-release
    [root@master /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    - node1 node
    [root@node1 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@node1 /etc/yum.repos.d]#dnf -y install epel-release
    [root@node1 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    - node2 node
    [root@node2 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@node2 /etc/yum.repos.d]#dnf -y install epel-release
    [root@node2 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  2. Install docker-ce

    - master node
    [root@master ~]#dnf -y install docker-ce --allowerasing
    [root@master ~]#systemctl restart docker
    [root@master ~]#systemctl enable docker
    - node1 node
    [root@node1 ~]#dnf -y install docker-ce --allowerasing
    [root@node1 ~]#systemctl restart docker
    [root@node1 ~]#systemctl enable docker
    - node2 node
    [root@node2 ~]#dnf -y install docker-ce --allowerasing
    [root@node2 ~]#systemctl restart docker
    [root@node2 ~]#systemctl enable docker
  3. Add a config file to set a registry mirror for Docker

    - master node
    [root@master ~]#cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    [root@master ~]#systemctl daemon-reload
    [root@master ~]#systemctl restart docker
    - node1 node
    [root@node1 ~]#cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    [root@node1 ~]#systemctl daemon-reload
    [root@node1 ~]#systemctl restart docker
    - node2 node
    [root@node2 ~]#cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    [root@node2 ~]#systemctl daemon-reload
    [root@node2 ~]#systemctl restart docker
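    # kubelet 1.25 expects the systemd cgroup driver, so it's worth confirming Docker picked up the exec-opts setting above
    [root@master ~]#docker info 2>/dev/null | grep -i 'cgroup driver'
    Cgroup Driver: systemd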

2.4 Installing the Kubernetes Components

  1. The Kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror

    - master node
    [root@master ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    - node1 node
    [root@node1 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    - node2 node
    [root@node2 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
  2. Install the kubeadm, kubelet, and kubectl tools

    - master node
    [root@master ~]#dnf -y install kubeadm kubelet kubectl
    [root@master ~]#systemctl restart kubelet
    [root@master ~]#systemctl enable kubelet
    - node1 node
    [root@node1 ~]#dnf -y install kubeadm kubelet kubectl
    [root@node1 ~]#systemctl restart kubelet
    [root@node1 ~]#systemctl enable kubelet
    - node2 node
    [root@node2 ~]#dnf -y install kubeadm kubelet kubectl
    [root@node2 ~]#systemctl restart kubelet
    [root@node2 ~]#systemctl enable kubelet
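    # To reproduce this walkthrough exactly, the versions can be pinned instead (a hedged sketch; the exact package builds available on the mirror may differ)
    [root@master ~]#dnf -y install kubeadm-1.25.4 kubelet-1.25.4 kubectl-1.25.4
    [root@master ~]#kubeadm version -o short
    v1.25.4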
  3. Configure containerd

    # To make sure cluster init and join succeed later, containerd's config file /etc/containerd/config.toml must be adjusted; do this on every node
    - master node
    [root@master ~]#containerd config default > /etc/containerd/config.toml
    # In /etc/containerd/config.toml, point the k8s sandbox image at registry.aliyuncs.com/google_containers
    [root@master ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@master ~]#systemctl restart containerd
    [root@master ~]#systemctl enable containerd
    - node1 node
    [root@node1 ~]#containerd config default > /etc/containerd/config.toml
    # In /etc/containerd/config.toml, point the k8s sandbox image at registry.aliyuncs.com/google_containers
    [root@node1 ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@node1 ~]#systemctl restart containerd
    [root@node1 ~]#systemctl enable containerd
    - node2 node
    [root@node2 ~]#containerd config default > /etc/containerd/config.toml
    # In /etc/containerd/config.toml, point the k8s sandbox image at registry.aliyuncs.com/google_containers
    [root@node2 ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@node2 ~]#systemctl restart containerd
    [root@node2 ~]#systemctl enable containerd
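    # A further tweak kubeadm 1.25 clusters commonly need (hedged: the default config generated above sets SystemdCgroup = false for the runc runtime, while the kubelet expects the systemd cgroup driver); run on every node
    [root@master ~]#sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    [root@master ~]#systemctl restart containerd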
  4. Deploy the k8s master node

    - master node
    [root@master ~]#kubeadm init \
    --apiserver-advertise-address=192.168.111.100 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.25.4 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
    # It is worth saving the tail of the init output to a file for later reference
    [root@master ~]#vim k8s
    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
        --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09

    [root@master ~]#vim /etc/profile.d/k8s.sh
    export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@master ~]#source /etc/profile.d/k8s.sh
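    # Before joining workers, a quick check that the control-plane pods came up; coredns stays Pending until the network plugin is installed in the next step
    [root@master ~]#kubectl get pods -n kube-system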
  5. Install a pod network plugin

    - master node
    [root@master ~]#wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    [root@master ~]#kubectl apply -f kube-flannel.yml
    namespace/kube-flannel created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    [root@master ~]#kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    master NotReady control-plane 6m41s v1.25.4
    # a moment later, once the flannel pod is running
    [root@master ~]#kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    master Ready control-plane 7m10s v1.25.4
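    # The flannel daemonset runs one pod per node; the kube-flannel namespace comes from the apply output above
    [root@master ~]#kubectl get pods -n kube-flannel -o wide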
  6. Join the worker nodes to the cluster

    - node1 node
    [root@node1 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
    - node2 node
    [root@node2 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
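    # Join tokens expire after 24 hours by default; if the one above has expired, print a fresh join command on the master
    [root@master ~]#kubeadm token create --print-join-command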
  7. Check node status with kubectl get nodes

    - master node
    [root@master ~]#kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    master Ready control-plane 9m37s v1.25.4
    node1 NotReady <none> 51s v1.25.4
    node2 NotReady <none> 31s v1.25.4
    # a short while later, once flannel is running on the new nodes
    [root@master ~]#kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    master Ready control-plane 9m57s v1.25.4
    node1 Ready <none> 71s v1.25.4
    node2 Ready <none> 51s v1.25.4
  8. Create a pod on the cluster running an nginx container, then test it

    [root@master ~]#kubectl create deployment nginx --image nginx
    deployment.apps/nginx created
    [root@master ~]#kubectl get pods
    NAME READY STATUS RESTARTS AGE
    nginx-76d6c9b8c-z7p4l 1/1 Running 0 35s
    [root@master ~]#kubectl expose deployment nginx --port 80 --type NodePort
    service/nginx exposed
    [root@master ~]#kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-76d6c9b8c-z7p4l 1/1 Running 0 119s 10.244.1.2 node1 <none> <none>
    [root@master ~]#kubectl get services
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m
    nginx NodePort 10.109.37.202 <none> 80:31125/TCP 17s
  9. Test access: browse to any node's IP on the NodePort shown above (31125 here), or use curl as sketched below
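
    # A minimal check from the master; any node IP works, since a NodePort listens on every node
    [root@master ~]#curl -s http://192.168.111.100:31125 | grep -i title
    <title>Welcome to nginx!</title>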

  10. Change the default page

    [root@master ~]#kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
    root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
    root@nginx-76d6c9b8c-z7p4l:/usr/share/nginx/html# echo "zhaoshulin" > index.html
