Upgrading Kubernetes with kubeadm: 1.23.17 -> 1.24.17
Check the current version
[root@k8s-master31 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master31 Ready control-plane,master 47h v1.23.17 10.0.0.31 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 docker://26.1.4
k8s-node34 Ready <none> 47h v1.23.17 10.0.0.34 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 docker://26.1.4
k8s-node35 Ready <none> 47h v1.23.17 10.0.0.35 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 docker://26.1.4
On the master node
Upgrade kubeadm
[root@k8s-master31 ~]# yum install -y kubeadm-1.24.17-0 --disableexcludes=kubernetes
[root@k8s-master31 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.17", GitCommit:"22a9682c8fe855c321be75c5faacde343f909b04", GitTreeState:"clean", BuildDate:"2023-08-23T23:43:11Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master31 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0822 10:35:30.450616 51429 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
I0822 10:35:38.878235 51429 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.17
[upgrade/versions] Latest version in the v1.23 series: v1.23.17
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.23.17 v1.24.17
Upgrade to the latest stable version:
COMPONENT CURRENT TARGET
kube-apiserver v1.23.17 v1.24.17
kube-controller-manager v1.23.17 v1.24.17
kube-scheduler v1.23.17 v1.24.17
kube-proxy v1.23.17 v1.24.17
CoreDNS v1.8.6 v1.8.6
etcd 3.5.6-0 3.5.6-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.24.17
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
Switch the node container runtime
PS: Installing Docker also installs containerd by default, so containerd is already present on every node.
[root@k8s-master31 ~]# kubectl edit nodes k8s-master31
apiVersion: v1
kind: Node
metadata:
annotations:
csi.volume.kubernetes.io/nodeid: '{"csi.tigera.io":"k8s-master31"}'
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
# Before v1.24 the runtime socket here was the dockershim one: /var/run/dockershim.sock
# Change it to unix:///var/run/containerd/containerd.sock
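The interactive kubectl edit above works, but the cri-socket annotation can also be set non-interactively with kubectl annotate. A minimal sketch that prints the command for each node in this cluster (the loop only echoes; remove the echo to apply it for real):

```shell
# Print the annotate command for each node; drop the leading 'echo' to run for real.
# Node names are the ones from this cluster; --overwrite replaces the old dockershim value.
for node in k8s-master31 k8s-node34 k8s-node35; do
  echo kubectl annotate node "$node" --overwrite \
    kubeadm.alpha.kubernetes.io/cri-socket=unix:///var/run/containerd/containerd.sock
done
```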
Configure containerd: switch the default cgroup driver to systemd
[root@k8s-master31 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-master31 ~]# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
[root@k8s-master31 ~]# vim /var/lib/kubelet/kubeadm-flags.env
# Remove --network-plugin=cni
# Add --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
systemctl daemon-reload
# Restart containerd
systemctl restart containerd
# Restart the kubelet
systemctl restart kubelet
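The kubeadm-flags.env edit above can be scripted. A sketch of the same rewrite, run here against a throwaway copy so it is safe to try anywhere; point env_file at /var/lib/kubelet/kubeadm-flags.env to apply it on a real node:

```shell
# Rewrite a sample kubeadm-flags.env: drop --network-plugin=cni and append the containerd flags.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
EOF
sed -i -e 's/--network-plugin=cni //' \
       -e 's#"$# --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"#' \
       "$env_file"
cat "$env_file"   # the resulting line matches the one shown above
```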
In earlier Kubernetes versions, the --network-plugin flag told the kubelet which network plugin to use (cni, kubenet, and so on). Starting with v1.24, dockershim (including kubenet) has been removed entirely, and many dockershim-related kubelet flags are no longer supported, which is why the flag has to be removed here.
Configure how crictl connects to the container runtime.
[root@k8s-master31 ~]# cat >/etc/crictl.yaml<<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Apply the changes
systemctl daemon-reload
systemctl restart containerd
Run the upgrade
[root@k8s-master31 ~]# kubeadm upgrade apply v1.24.17
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0822 10:54:49.815166 15201 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.17"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.17" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3701995785"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-22-10-55-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.17". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
PS: Per the message above, upgrade the master's own kubelet and kubectl as well (same commands as on the workers below): yum install -y kubelet-1.24.17-0 kubectl-1.24.17-0 --disableexcludes=kubernetes, then systemctl restart kubelet.
Upgrade Calico
# Check the current cluster configuration
[root@k8s-master31 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml
# Download the Tigera Calico operator manifest and the custom resources definition.
# Calico is then installed by creating the necessary custom resources; see the installation reference for the configuration options available in this manifest.
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml
Reference: https://docs.tigera.io/archive/v3.24/getting-started/kubernetes/quickstart
Change the IP pool in custom-resources.yaml to match this cluster's pod CIDR (see the kubeadm-config output above), then create the resources with kubectl create -f tigera-operator.yaml followed by kubectl create -f custom-resources.yaml.
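custom-resources.yaml ships with a default IPPool cidr of 192.168.0.0/16, which must be replaced with this cluster's pod CIDR. A sketch of the edit against a sample of the manifest — 172.244.0.0/16 here is an illustrative assumption based on the Pod IPs seen later; read the real podSubnet from the kubeadm-config ConfigMap:

```shell
# Sketch: change the default Calico IPPool cidr in (a sample of) custom-resources.yaml.
# 172.244.0.0/16 is illustrative — use the podSubnet from the kubeadm-config ConfigMap.
f=$(mktemp)
cat > "$f" <<'EOF'
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
EOF
sed -i 's#cidr: 192.168.0.0/16#cidr: 172.244.0.0/16#' "$f"
grep 'cidr:' "$f"
```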
Upgrade the worker nodes
Drain the worker node from the master
[root@k8s-master31 ~]# kubectl drain k8s-node34 --ignore-daemonsets --delete-emptydir-data
node/k8s-node34 cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-5v6tj, calico-system/csi-node-driver-jhkkt, kube-system/kube-proxy-r5fz4
evicting pod tigera-operator/tigera-operator-6c49dc8ddf-99nr7
evicting pod calico-system/calico-kube-controllers-68995875fb-bpgfz
evicting pod calico-apiserver/calico-apiserver-86c46fdb85-hmglw
evicting pod calico-system/calico-typha-88d5b6455-4d9df
evicting pod kube-system/upgrade-health-check-w2jkr
pod/calico-typha-88d5b6455-4d9df evicted
pod/calico-apiserver-86c46fdb85-hmglw evicted
pod/tigera-operator-6c49dc8ddf-99nr7 evicted
pod/calico-kube-controllers-68995875fb-bpgfz evicted
pod/upgrade-health-check-w2jkr evicted
On the worker node
[root@k8s-node34 ~]# yum install -y kubeadm-1.24.17-0 --disableexcludes=kubernetes
[root@k8s-node34 ~]# vim /var/lib/kubelet/kubeadm-flags.env
[root@k8s-node34 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-node34 ~]# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
[root@k8s-node34 ~]# vim /etc/containerd/config.toml
[root@k8s-node34 ~]# cat >/etc/crictl.yaml<<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@k8s-node34 ~]# systemctl daemon-reload
# Restart containerd
systemctl restart containerd
# Restart the kubelet
systemctl restart kubelet
[root@k8s-node34 ~]# yum install -y kubelet-1.24.17-0 kubectl-1.24.17-0 --disableexcludes=kubernetes
Last metadata expiration check: 0:24:58 ago on Thu 22 Aug 2024 14:02:20.
Dependencies resolved.
=========================================================================================================================================
Package Architecture Version Repository Size
=========================================================================================================================================
Upgrading:
kubectl x86_64 1.24.17-0 k8s 10 M
kubelet x86_64 1.24.17-0 k8s 21 M
Transaction Summary
=========================================================================================================================================
Upgrade 2 Packages
Total download size: 31 M
Downloading Packages:
(1/2): c3dc5ffa817d2c69bdd77494b5b9240568c4eb0d06b7b1bf3546bdab971741f5-kubectl-1.24.17-0.x86_64.rpm 405 kB/s | 10 MB 00:25
(2/2): f46e0356e279308a525195d1ae939268faaea772a119cb752480be2b998bec54-kubelet-1.24.17-0.x86_64.rpm 401 kB/s | 21 MB 00:53
-----------------------------------------------------------------------------------------------------------------------------------------
Total 595 kB/s | 31 MB 00:53
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: kubelet-1.24.17-0.x86_64 1/1
Upgrading : kubelet-1.24.17-0.x86_64 1/4
Upgrading : kubectl-1.24.17-0.x86_64 2/4
Cleanup : kubectl-1.23.17-0.x86_64 3/4
Cleanup : kubelet-1.23.17-0.x86_64 4/4
Running scriptlet: kubelet-1.23.17-0.x86_64 4/4
Verifying : kubectl-1.24.17-0.x86_64 1/4
Verifying : kubectl-1.23.17-0.x86_64 2/4
Verifying : kubelet-1.24.17-0.x86_64 3/4
Verifying : kubelet-1.23.17-0.x86_64 4/4
Upgraded:
kubectl-1.24.17-0.x86_64 kubelet-1.24.17-0.x86_64
Complete!
[root@k8s-node34 ~]# echo "KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd" >/etc/sysconfig/kubelet
[root@k8s-node34 ~]# systemctl restart kubelet
Back on the master node
PS: kubeadm upgrade node also needs to be run on each worker itself, where it refreshes that worker's local kubelet configuration; the run captured below is from the master.
[root@k8s-master31 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.24.17"...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2793172026"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Check the cluster status
[root@k8s-master31 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master31 Ready control-plane 2d3h v1.24.17 10.0.0.31 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 containerd://1.6.33
k8s-node34 Ready,SchedulingDisabled <none> 2d3h v1.24.17 10.0.0.34 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 containerd://1.6.33
k8s-node35 Ready,SchedulingDisabled <none> 2d3h v1.24.17 10.0.0.35 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.oe2203sp1.x86_64 containerd://1.6.33
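Note that both workers still show SchedulingDisabled from the earlier drain and will not accept new Pods until they are uncordoned. A sketch that prints the commands for this cluster's workers (remove the echo to execute them):

```shell
# Print the uncordon command for each drained worker; drop 'echo' to run for real.
for node in k8s-node34 k8s-node35; do
  echo kubectl uncordon "$node"
done
```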
Testing
# Create a temporary Nginx Pod
[root@k8s-master31 ~]# kubectl run temp-nginx --image=nginx --restart=Never
pod/temp-nginx created
# Create a Service to expose the Nginx Pod
[root@k8s-master31 ~]# kubectl expose pod temp-nginx --port=80 --target-port=80 --name=temp-nginx-svc
service/temp-nginx-svc exposed
[root@k8s-master31 ~]# kubectl get svc,pod -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 172.96.0.1 <none> 443/TCP 2d3h <none>
service/temp-nginx-svc ClusterIP 172.101.81.80 <none> 80/TCP 31s run=temp-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/temp-nginx 1/1 Running 0 34s 172.244.221.136 k8s-master31 <none> <none>
[root@k8s-master31 ~]# curl 172.244.221.136
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node34 ~]# curl 172.101.81.80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Delete the Pod and the Service
[root@k8s-master31 ~]# kubectl delete pod temp-nginx
pod "temp-nginx" deleted
[root@k8s-master31 ~]# kubectl delete svc temp-nginx-svc
service "temp-nginx-svc" deleted