Prepare the Servers

Install Ubuntu 18.04 Server on ESXi 6.5, using three hosts with the planned hostnames kube01, kube02, kube03, each configured with 2 cores / 4 GB RAM / 160 GB disk. Kubernetes requires at least two CPU cores.

ESXi 6.5 has a bug that crashes Ubuntu VMs during remote SSH sessions. Following the workaround in https://kb.vmware.com/s/article/2151480, SSH into the ESXi host and modify the VM configuration. The relevant files are under the /vmfs/volumes/584f7xxx-7xx749b4-3461-x0... / directory; shut the VM down, locate its file directory, find the .vmx file inside, and append at the end:

vmxnet3.rev.30 = FALSE
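
A minimal sketch of applying this from the ESXi shell; the datastore path and VM name here are placeholders, substitute your own:

# Append the workaround to the VM's .vmx file (VM must be powered off)
echo 'vmxnet3.rev.30 = FALSE' >> /vmfs/volumes/datastore1/kube01/kube01.vmx
# Make ESXi re-read the configuration
vim-cmd vmsvc/getallvms      # find the VM's numeric ID
vim-cmd vmsvc/reload <vmid>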

Update the Servers

Point Ubuntu's apt sources at a domestic (China) mirror, then update:

kube02:~$ more /etc/apt/sources.list
deb https://mirrors.ustc.edu.cn/ubuntu bionic main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-security main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-updates main

sudo apt update
sudo apt upgrade

Change the Hostname

Edit cloud.cfg

sudo vi /etc/cloud/cloud.cfg
# Change
preserve_hostname: false
# to
preserve_hostname: true

Otherwise the hostname set with hostnamectl set-hostname gets reverted after a reboot.

Set the hostname

sudo hostnamectl set-hostname kube01

Disable the Swap Partition

1. Turn off swap immediately

sudo swapoff -a

2. Disable swap in fstab

sudo vi /etc/fstab

Comment out the swap line with #

3. Mask the swap unit in systemd. If you skip this step, the swap partition will reappear after a reboot.

# The device may also be sdb, sdc, etc., depending on your disks; check which partition is swap. Assume it is /dev/sda2
sudo fdisk -lu /dev/sda
# Based on the result of the previous step, run
sudo systemctl mask dev-sda2.swap
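
A quick check that swap is really gone (works after a reboot as well); swapon --show prints nothing when no swap is active:

free -h          # the Swap line should read 0B
swapon --show    # empty output means no active swap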

Install and Configure Docker

Pick the Docker version matching the k8s version you plan to install; the steps below simply install the latest.

# Prerequisites
sudo apt install apt-transport-https ca-certificates curl software-properties-common
# Add the GPG key; note the sudo after the pipe
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the apt source for the current release
lsb_release -cs
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
sudo apt install docker-ce
# Check the version; this install was 19.03.5
docker version
# Add the current user to the docker group; re-login for it to take effect, check with the id command
sudo usermod -aG docker milton
# Configure docker: add a registry mirror and other options
sudo vi /etc/docker/daemon.json

Contents of daemon.json:

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}

The exec-opts entry above switches the cgroup driver to systemd.

# Restart the docker service, then check that Cgroup Driver and Registry Mirrors are correct
sudo systemctl restart docker
docker info
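
To filter for just those two fields, a quick sketch:

docker info | grep -i -E -A 1 'cgroup driver|registry mirrors'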

To install a specific Docker version instead, use the following commands

# List the available versions
apt-cache madison docker-ce
# Install a specific version
sudo apt install docker-ce=18.06.3~ce~3-0~ubuntu
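
To keep a later apt upgrade from replacing the pinned version, the package can optionally be held (apt-mark is standard apt tooling):

sudo apt-mark hold docker-ce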

Install Kubernetes

# Add the GPG key; note the sudo after the pipe
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the apt source; there is no bionic entry, so use xenial
cd /etc/apt/sources.list.d/
sudo vi kubernetes.list

Contents of kubernetes.list:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

Update and install

sudo apt update
sudo apt install kubelet kubeadm kubectl

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

Likewise, a specific version can be chosen and installed here with the following commands

apt-cache madison kubelet
sudo apt install kubelet=1.14.10-00 kubeadm=1.14.10-00 kubectl=1.14.10-00
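
As with Docker, holding the packages prevents an accidental apt upgrade from breaking the version alignment between the kubelet and the control plane (optional):

sudo apt-mark hold kubelet kubeadm kubectl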

Pull the k8s Container Images That Cannot Be Downloaded Directly

List the required images; this returns a set of names prefixed with k8s.gcr.io/

kubeadm config images list

Write a script that switches the source to registry.aliyuncs.com/google_containers/, pulls each image, and re-tags it back. The script below must be adjusted to the list obtained in the previous step, then executed.

#!/bin/bash
# Strip the "k8s.gcr.io/" prefix from the images below, and use the versions reported by kubeadm config images list
images=(
kube-apiserver:v1.17.0
kube-controller-manager:v1.17.0
kube-scheduler:v1.17.0
kube-proxy:v1.17.0
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
for imageName in ${images[@]} ; do
docker pull registry.aliyuncs.com/google_containers/$imageName
docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.aliyuncs.com/google_containers/$imageName
done
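
A variant that derives the list at run time instead of hardcoding it; this is a sketch, and it assumes every image kubeadm reports has a mirror under registry.aliyuncs.com/google_containers:

#!/bin/bash
# Pull each required image from the Aliyun mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
for image in $(kubeadm config images list 2>/dev/null); do
    mirror=$(echo "$image" | sed 's#^k8s.gcr.io#registry.aliyuncs.com/google_containers#')
    docker pull "$mirror"
    docker tag "$mirror" "$image"
    docker rmi "$mirror"
done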

Worker nodes need the images too: after join they must also download and start the corresponding containers, and without pre-downloading, the node stays NotReady in the nodes list after joining. So pre-download there as well; a trimmed list for workers is sketched below. For a worker node, the setup ends at this step; for the master node, continue on.
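
A trimmed image array for workers, as an assumption rather than an official requirement: kube-proxy and pause are the minimum every node runs, and coredns replicas may also be scheduled onto a worker:

images=(
kube-proxy:v1.17.0
pause:3.1
coredns:1.6.5
)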

Initialize the Master Host with kubeadm init

Once all the preparation above is done, the master host can be initialized

sudo kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=172.16.0.0/16 --service-cidr=10.1.0.0/16

Parameter descriptions:

  • --apiserver-advertise-address which IP (interface) serves the API; use the host's own IP, or 0.0.0.0 to leave it unspecified
  • --pod-network-cidr the IP range of the pod network; it must match the setting in the kube-flannel.yml configured later
  • --service-cidr the IP range of the service network; these are virtual IPs that never appear in routing tables, they only need to be distinct from the ranges above

The output:

W1231 08:57:05.495224   11297 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1231 08:57:05.495416 11297 version.go:102] falling back to the local client version: v1.17.0
W1231 08:57:05.495703 11297 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 08:57:05.495735 11297 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.11.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1231 08:57:14.315543 11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1231 08:57:14.318419 11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 37.004860 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: f3jgn2.5w8152dpifacihnj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj \
--discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3

As the prompt says, create the .kube directory, copy the config file, and change its ownership; the commands are repeated below. The kubeadm join command stays valid for 24 hours (including across reboots).
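
For convenience, the commands from the prompt:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config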

Verification

# List the pods
kubectl get pods -n kube-system
# Output
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-7dnqv 1/1 Running 0 71m
coredns-6955765f44-pvlcp 1/1 Running 0 71m
etcd-kube01 1/1 Running 0 71m
kube-apiserver-kube01 1/1 Running 0 71m
kube-controller-manager-kube01 1/1 Running 0 71m
kube-proxy-7c8f5 1/1 Running 0 71m
kube-scheduler-kube01 1/1 Running 0 71m


Install Flannel

# Download kube-flannel.yml
wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
# Edit the Network parameter in net-conf.json so it matches the --pod-network-cidr passed to kubeadm init; here it is 172.16.0.0/16
vi kube-flannel.yml
# Install
kubectl apply -f kube-flannel.yml

Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
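
Instead of editing with vi above, the Network value can be patched non-interactively; a one-liner sketch, assuming the file still ships with flannel's default 10.244.0.0/16:

sed -i 's#10.244.0.0/16#172.16.0.0/16#' kube-flannel.yml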

The flannel container image is then downloaded and started by a background process. After a short wait, check the flannel network information

more /run/flannel/subnet.env
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

View the flannel CNI configuration

more /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

Listing the pods again shows the newly added flannel pod

kube-flannel-ds-amd64-kkxlm      1/1     Running   0          3m5s

View a pod's logs

kubectl logs coredns-6955765f44-7dnqv -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

List the nodes; at this point only the master host is present

kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube01 Ready master 78m v1.17.0

Join Node Hosts to the Cluster

Use the command printed earlier by kubeadm init (sudo required). Unlike many tutorials found online, there is no need to copy config files over from the master; in testing, running the following command alone joined the node to the cluster

sudo kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3

Output:

W1231 10:42:36.665020    6229 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the master host, check the newly joined node

kubectl get nodes
# NotReady at first
NAME STATUS ROLES AGE VERSION
kube01 Ready master 105m v1.17.0
kube02 NotReady <none> 10s v1.17.0
# After a while it becomes Ready
NAME STATUS ROLES AGE VERSION
kube01 Ready master 107m v1.17.0
kube02 Ready <none> 109s v1.17.0

Deploy a Test Container

On the master host, create a file nginx-deployment.yaml with the following content

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

If you are in China, nginx:1.7.9 can be replaced with registry.cn-shanghai.aliyuncs.com/jovi/nginx:alpine, which pulls much faster.

Run the deploy command

kubectl apply -f nginx-deployment.yaml

Check the deployment and its pods. If a pod stays not ready for a long time, use describe to read the pod's event list and see which step it has reached; on some networks pulling the image takes quite a while.

$ kubectl describe deployment nginx-deployment

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6dd86d77d-qwlmz 0/1 ContainerCreating 0 78s
nginx-deployment-6dd86d77d-xk294 0/1 ContainerCreating 0 78s

$ kubectl describe pod nginx-deployment-6dd86d77d-qwlmz

Once describe shows the pod's IP, curl http://IP from the master node displays the nginx welcome page.
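
The pod IP can also be read directly from the pod list; -o wide adds an IP column:

kubectl get pods -o wide
# then, using the address from the IP column:
curl http://<pod-ip>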

Pod Network Connectivity Between Cluster Nodes

Under K8s 1.17, right after a default installation the master node can already ping pod IPs on the worker nodes, so no connectivity problem exists.

The problem affects K8s 1.14. Most solutions found on Baidu use the first method below, which is actually not the best option.

Before k8s 1.12, the following was the default way to enable pod-to-pod access between nodes; see the discussion at https://github.com/coreos/flannel/issues/699

1. On the node, edit /etc/sysctl.conf to uncomment/set net.ipv4.ip_forward=1, then run sudo sysctl -p to apply it
2. On the node, run sudo iptables --policy FORWARD ACCEPT

After that, pinging a pod IP on this node from the master or from another node succeeds.

From 1.13 on, k8s changed the network policy because this approach brings security problems; the relevant discussions are at
https://github.com/kubernetes/kubernetes/issues/40182 and https://github.com/moby/moby/pull/28257

To fix connectivity, use rules targeted at the cni0 interface instead; run

sudo iptables -A FORWARD -i cni0 -j ACCEPT
sudo iptables -A FORWARD -o cni0 -j ACCEPT

After that, pod IPs on this node can be pinged. Under 1.14 there is no built-in way to make this persistent; add the rules to a boot-time script, for example as sketched below.
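
One way to persist the rules, sketched as a systemd oneshot unit; the unit name and path are invented for this example:

# /etc/systemd/system/cni0-forward.service (hypothetical unit name)
[Unit]
Description=Allow forwarding on the cni0 interface
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables -A FORWARD -i cni0 -j ACCEPT
ExecStart=/sbin/iptables -A FORWARD -o cni0 -j ACCEPT

[Install]
WantedBy=multi-user.target

Enable it once with sudo systemctl enable cni0-forward and the rules are applied at every boot.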

Node Maintenance

Node maintenance covers joining/deleting nodes, cordoning/uncordoning nodes, and so on.

Add a New Node

To add a node, run the kubeadm join command on the node host. If the token has expired, derive the hash on the master host via the following steps

# List the currently available tokens
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
pn2yvw.h2ffrw5goe0y8hoy 3h 2020-01-08T11:41:49Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

# Run the following to get the sha256 hash; take the part after the equals sign
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
| openssl rsa -pubin -outform der 2>/dev/null \
| openssl dgst -sha256 -hex
(stdin)= 165d0f5e60f569e8fbf558f61b8b3e823023cdba4e3d95aa55cc5b6e7a082841

Assemble the kubeadm join command from the results above (the token from the first command, the hash from the second) and run it on the node host

sudo kubeadm join 192.168.11.129:6443 --token pn2yvw.h2ffrw5goe0y8hoy --discovery-token-ca-cert-hash sha256:165d0f5e60f569e8fbf558f61b8b3e823023cdba4e3d95aa55cc5b6e7a082841

If no usable token exists, or they have all expired, create one with

kubeadm token create --print-join-command

Cordon / Uncordon a Node

Run the following on the master host

# Cordon and drain a node, making it unschedulable
kubectl drain <node-name>
# Uncordon a node, making it schedulable again
kubectl uncordon <node-name>
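
In practice drain usually refuses to evict DaemonSet pods such as flannel and kube-proxy, so extra flags are commonly needed (flag names per the v1.17 CLI; treat as a sketch):

kubectl drain kube02 --ignore-daemonsets --delete-local-data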

Delete a Node

On the master host, drain the node first, then delete it with kubectl delete node <node-name>; the full sequence is sketched below.
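
The whole removal, sketched with an assumed node name of kube02; the kubeadm reset step on the removed node wipes its local cluster state so it can rejoin cleanly later:

# On the master
kubectl drain kube02 --ignore-daemonsets --delete-local-data
kubectl delete node kube02
# On the removed node
sudo kubeadm reset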

Cluster Shutdown and Restart

To shut the cluster down, simply running halt -p on every node works; if you want to follow a strict order (per-host commands sketched after the list):

  1. Remove the pods,
  2. On the master, drain all the nodes,
  3. On each node, stop the kubelet service, stop the docker service, and power off
  4. On the master, stop the kubelet service, stop the docker service, and power off

Because K8s treats every pod as ephemeral, the whole cluster should be regarded as a group of services decoupled from persistent data; at shutdown, all (non-built-in) pods should be removed, and after the next cluster start the deployment scripts recreate them all.
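
The per-host commands for steps 2-4, as a sketch using the host names from this article:

# On the master, drain each worker
kubectl drain kube02 --ignore-daemonsets --delete-local-data
kubectl drain kube03 --ignore-daemonsets --delete-local-data
# On each worker, then finally on the master
sudo systemctl stop kubelet
sudo systemctl stop docker
sudo halt -p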

References

https://kubernetes.io/docs/setup/production-environment/container-runtimes/
http://pwittrock.github.io/docs/admin/kubeadm/
https://github.com/coreos/flannel
https://www.latelee.org/kubernetes/k8s-deploy-1.17.0-detail.html
https://blog.csdn.net/liukuan73/article/details/83116271
