k8s notes: deploying k8s with kubeadm
Reference: https://blog.csdn.net/networken/article/details/84991940
# k8s deployment plan
# 1. Cluster planning
| **Server** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Count** | >1 (allocate modules according to the servers actually provided) |
| **Spec** | 16 cores / 32 GB memory / 300 GB disk / 50 Mbps bandwidth |
| **OS** | CentOS Linux 7.2 |
| **File system** | the 300 GB disk is mounted under /data |
| **Other** | the master node must have Internet access |
| Node | Hostname | IP address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | centos01 | 192.168.0.1 | CentOS 7.2 |
| node1 | centos02 | 192.168.0.2 | CentOS 7.2 |
| node2 | centos03 | 192.168.0.3 | CentOS 7.2 |
# 2. Base environment configuration
## 2.1 Hostname configuration (optional)
**1) Set the hostnames**
**Run as root on 192.168.0.1:**
hostnamectl set-hostname VM_0_1_centos
**Run as root on 192.168.0.2:**
hostnamectl set-hostname VM_0_2_centos
**Run as root on 192.168.0.3:**
hostnamectl set-hostname VM_0_3_centos
**2) Add host mappings**
**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
vim /etc/hosts
192.168.0.1 VM_0_1_centos
192.168.0.2 VM_0_2_centos
192.168.0.3 VM_0_3_centos
## 2.2 Disable SELinux (optional)
**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config
setenforce 0
## 2.3 Raise the maximum number of open files
**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
## 2.4 Disable the firewall (optional)
**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service
## 2.5 Software environment initialization
**1) Initialize the servers**
groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-devel
**2) Configure sudo**
**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
vim /etc/sudoers.d/app
app ALL=(ALL) ALL
app ALL=(ALL) NOPASSWD: ALL
Defaults !env_reset
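**Optional check:** a quick syntax validation of the new sudoers fragment (visudo ships with the sudo package):
visudo -cf /etc/sudoers.d/app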
**3) Configure passwordless SSH login**
**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
su app
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
**b. Merge the id_rsa.pub files**
**Run as the app user on 192.168.0.1:**
scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh
Enter the app user's password when prompted.
**Run as the app user on 192.168.0.2:**
cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh
**Run as the app user on 192.168.0.3:**
cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh
scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh
**c. Test SSH as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**
ssh app@192.168.0.1
ssh app@192.168.0.2
ssh app@192.168.0.3
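**Note:** the merge-and-copy steps above can also be done with ssh-copy-id, which appends the local public key to the remote authorized_keys and fixes its permissions. A minimal sketch, run as the app user on each node:
for host in 192.168.0.1 192.168.0.2 192.168.0.3; do
ssh-copy-id -i ~/.ssh/id_rsa.pub app@$host
done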
## 2.6 sysctl parameters
**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3:**
**vim /etc/sysctl.conf**
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
**# Apply**
sysctl -p
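**Note:** the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p reports them as unknown keys, load the module first (a sketch; the modules-load.d entry makes it persistent across reboots):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf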
## 2.7 ntpd configuration
**1) Server configuration**
**Run as root on 192.168.0.1:**
yum install -y ntp ntpdate
**Edit /etc/ntp.conf**
**Comment out all existing server and restrict lines**
**Add:**
server 0.cn.pool.ntp.org
server 0.asia.pool.ntp.org
server 3.asia.pool.ntp.org
restrict 0.cn.pool.ntp.org nomodify notrap noquery
restrict 0.asia.pool.ntp.org nomodify notrap noquery
restrict 3.asia.pool.ntp.org nomodify notrap noquery
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
systemctl enable ntpd
systemctl disable chronyd
systemctl restart ntpd
**Check the NTP peers**
ntpq -p
**2) Client configuration**
**Run as root on 192.168.0.2 and 192.168.0.3:**
yum install -y ntp ntpdate
**Add to /etc/ntp.conf:**
server 192.168.0.1 prefer
systemctl enable ntpd
systemctl disable chronyd
systemctl restart ntpd
**Synchronize**
ntpdate -u 192.168.0.1
Run hwclock --systohc to write the system time to the hardware (BIOS) clock.
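A hedged check that the clients actually sync against the master (ntpstat is part of the ntp package on CentOS 7):
ntpq -p # 192.168.0.1 should be marked with * once selected as the sync source
ntpstat # prints "synchronised to NTP server" when sync is complete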
# 3. Configure a CentOS package mirror
**Run as root on 192.168.0.1; Internet access required**
**1) Install plugins**
yum install -y yum-plugin-downloadonly createrepo rsync
**2) Create the directory**
mkdir -p /data/mirrors/centos
**3) Download or upload packages**
yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos
You can also download RPM packages to /data/mirrors/centos yourself.
**4) Create the repo metadata**
createrepo /data/mirrors/centos
**5) Install nginx**
yum -y install nginx
cd /etc/nginx/conf.d
**vim mirrors.conf**
server {
    listen 88;
    server_name localhost;
    root /data/mirrors/;
    location / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
**Start the service**
nginx -t
systemctl enable nginx
systemctl start nginx
nginx -s reload # reload after later config changes
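A quick smoke test that the mirror is being served (run from any node):
curl http://192.168.0.1:88/centos/ # should return the autoindex directory listing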
**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3)**
**vim /etc/yum.repos.d/mirrors.repo**
[yumbase]
name=yum-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
# Verify
yum clean all && yum makecache
yum repoinfo yumbase
**7) Verify (on any machine)**
yum -y install <package-name>-<version>
**8) Sync the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
**9) Sync the Aliyun k8s packages (run as root on 192.168.0.1)**
Manual download is required: fetch the RPMs from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and copy them to /data/mirrors/centos.
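Alternatively, the same RPMs can be pulled through yum's downloadonly plugin. A sketch, assuming a temporary repo file pointing at the Aliyun mirror (the repo id and file name here are arbitrary):
cat > /etc/yum.repos.d/kubernetes-aliyun.repo <<'EOF'
[kubernetes-aliyun]
name=Kubernetes (Aliyun mirror)
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --downloadonly --downloaddir=/data/mirrors/centos
createrepo --update /data/mirrors/centos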
# 4. Install Docker
**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm
rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm
rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm
rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm
rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm
systemctl enable docker
usermod -G docker app
systemctl start docker
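**Note:** kubeadm's preflight checks warn when Docker runs with the cgroupfs cgroup driver; switching to systemd is the usual fix. A hedged /etc/docker/daemon.json fragment (merge it with the registry settings added in section 5, then run systemctl restart docker):
{
"exec-opts": ["native.cgroupdriver=systemd"]
}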
# 5. Private registry configuration
**1) Configure the private registry**
**Run as the app user on 192.168.0.1 (Internet access required):**
docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1
docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1
docker pull registry
docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest
**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3:**
Add to /etc/docker/daemon.json:
{
"registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],
"insecure-registries":["192.168.0.1:5000"]
}
systemctl daemon-reload
systemctl restart docker
docker login 192.168.0.1:5000 # enter username wb and password 123
cat ~/.docker/config.json # view the stored credentials
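**Note:** the registry container started above has no authentication configured, so the wb/123 login is not actually enforced. A sketch of restarting it with htpasswd auth (user wb and password 123 are the values assumed above; registry:2.7.0 is pinned because the htpasswd tool was removed from later images):
mkdir -p /data/registry-auth
docker run --rm --entrypoint htpasswd registry:2.7.0 -Bbn wb 123 > /data/registry-auth/htpasswd
docker rm -f registry
docker run -d -p 5000:5000 --restart=always --name registry -v /data/registry:/var/lib/registry -v /data/registry-auth:/auth -e REGISTRY_AUTH=htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2.7.0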
**Create the secret**
/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123
**View the created dockercfg-192**
/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
**2) Push images to the private registry**
**Run as the app user on 192.168.0.1:**
**a. Retag**
docker tag f32a97de94e1 192.168.0.1:5000/registry:latest
docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1
**b. Push**
docker push 192.168.0.1:5000/registry:latest
docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1
**c. Pull**
**Run as the app user on 192.168.0.2 and 192.168.0.3:**
docker pull 192.168.0.1:5000/registry:latest
docker tag f32a97de94e1 registry:latest
docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest
docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
# 6. Install the k8s management tools
**Install as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**
yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl enable kubelet
# 7. Deploy the k8s components
**1) List the required images (run as root on the master node, 192.168.0.1)**
#kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
**2) Download the images (run as root on the master node, 192.168.0.1)**
**cat kubeadm.sh**
#!/bin/bash
set -e
KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
**Run the script**
bash kubeadm.sh
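A quick check that all seven images now carry the k8s.gcr.io name:
docker images | grep k8s.gcr.io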
**3) Initialize (master node)**
kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16
If the cluster is re-initialized, rerun nohup /usr/local/bin/tiller & afterwards (see section 8).
**Initialization succeeded if output like the following is returned:**
kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b
**# Join the nodes (run on all worker nodes)**
kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all
**# Remove the master taint so pods can also be scheduled on the master node**
kubectl taint nodes --all node-role.kubernetes.io/master-
**# Export the kubeconfig**
Add to /etc/profile, then run source /etc/profile:
export KUBECONFIG=/etc/kubernetes/admin.conf # on the master node
export KUBECONFIG=/etc/kubernetes/kubelet.conf # on worker nodes, which only have kubelet.conf
**4) Install the flannel plugin (all nodes)**
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
**# Download the images**
**vim flanneld.sh**
#!/bin/bash
set -e
FLANNEL_VERSION=v0.11.0
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos
images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)
for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done
**Run the script**
bash flanneld.sh
**# Create**
git clone https://github.com/coreos/flannel.git
cd flannel/Documentation
kubectl apply -f kube-flannel.yml
**# Verify node status**
kubectl get componentstatus
kubectl get node
**Push the k8s component images to the private registry**
**# On the master node**
docker tag k8s.gcr.io/kube-proxy:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1
docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag k8s.gcr.io/kube-apiserver:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1
docker tag k8s.gcr.io/kube-scheduler:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1
docker tag k8s.gcr.io/coredns:1.3.1 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1
docker tag k8s.gcr.io/etcd:3.3.10 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10
docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1
docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x
docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64
docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm
docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
docker push 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1
docker push 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1
docker push 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1
docker push 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1
docker push 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1
docker push 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10
docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1
docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x
docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le
docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64
docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm
docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
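The repetitive tag/push pairs above (and the pull/tag pairs below) can be collapsed into a loop. A sketch over the same image list, shown for the amd64 flannel image only; the other architectures follow the same pattern:
REGISTRY=192.168.0.1:5000
for img in k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/pause:3.1 quay.io/coreos/flannel:v0.11.0-amd64; do
docker tag $img $REGISTRY/$img
docker push $REGISTRY/$img
done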
**On the worker nodes**
docker pull 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1
docker pull 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1
docker pull 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1
docker pull 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1
docker pull 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1
docker pull 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10
docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1
docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x
docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le
docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64
docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm
docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# 8. Install helm
**Install on all nodes**
wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar xvf helm-v2.14.3-linux-amd64.tar.gz
sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin
sudo yum install -y socat
sudo yum install -y *rhsm*
sudo yum -y install bridge*
sudo nohup /usr/local/bin/tiller &
sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile
source /etc/profile
helm version