Long story short: I want to set up a once-and-for-all environment that I can reuse later, so I don't have to keep digging up tutorials and reinstalling from scratch.

1. Install Miniconda with Python 3.10

cd /root
wget -q https://repo.anaconda.com/miniconda/Miniconda3-py310_24.4.0-0-Linux-x86_64.sh
bash ./Miniconda3-py310_24.4.0-0-Linux-x86_64.sh -b -f -p /root/miniconda3
rm -f ./Miniconda3-py310_24.4.0-0-Linux-x86_64.sh
echo "PATH=/root/miniconda3/bin:/usr/local/bin:$PATH" >> /etc/profile
echo "source /etc/profile" >> /root/.bashrc
# 初始化miniconda
conda init
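
As a quick sanity check (a minimal sketch; open a new shell first so the updated PATH takes effect), run the following in Python to confirm the Miniconda interpreter is the one being picked up:

import sys
print(sys.version)     # expect Python 3.10.x
print(sys.executable)  # expect /root/miniconda3/bin/python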

2. Install the system-level CUDA Toolkit 12.1

This part follows: https://docs.infini-ai.com/posts/install-cuda-on-devmachine.html

First, update the system package lists:

sudo apt update

Install system-level CUDA 12.1.1 using the runfile installer:

# Download the CUDA Toolkit installer
wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
# Install the CUDA Toolkit
sudo sh cuda_12.1.1_530.30.02_linux.run

After a moment you will be prompted with the EULA. Type accept to agree.

┌──────────────────────────────────────────────────────────────────────────────┐
│ End User License Agreement │
│ -------------------------- │
│ │
│ NVIDIA Software License Agreement and CUDA Supplement to │
│ Software License Agreement. Last updated: October 8, 2021 │
│ │
│ The CUDA Toolkit End User License Agreement applies to the │
│ NVIDIA CUDA Toolkit, the NVIDIA CUDA Samples, the NVIDIA │
│ Display Driver, NVIDIA Nsight tools (Visual Studio Edition), │
│ and the associated documentation on CUDA APIs, programming │
│ model and development tools. If you do not agree with the │
│ terms and conditions of the license agreement, then do not │
│ download or use the software. │
│ │
│ Last updated: October 8, 2021. │
│ │
│ │
│ Preface │
│ ------- │
│ │
│──────────────────────────────────────────────────────────────────────────────│
│ Do you accept the above EULA? (accept/decline/quit): │
│ │
└──────────────────────────────────────────────────────────────────────────────┘

After typing accept to agree to the license, follow the prompts, choose a custom installation, and select only the CUDA Toolkit and its libraries (leave the driver unchecked).

┌──────────────────────────────────────────────────────────────────────────────┐
│ CUDA Installer │
│ - [ ] Driver │
│ [ ] 520.61.05 │
│ + [X] CUDA Toolkit 12.1 │
│ [X] CUDA Demo Suite 12.1 │
│ [X] CUDA Documentation 12.1 │
│ - [ ] Kernel Objects │
│ [ ] nvidia-fs │
│ Options │
│ Install │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ │
│ Up/Down: Move | Left/Right: Expand | 'Enter': Select | 'A': Advanced options │
└──────────────────────────────────────────────────────────────────────────────┘

When the installation completes, the output looks like this:

===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-12.1/

Please make sure that
 -   PATH includes /usr/local/cuda-12.1/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-12.1/lib64, or, add /usr/local/cuda-12.1/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-12.1/bin

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 520.00 is required for CUDA 12.1 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
sudo <CudaInstaller>.run --silent --driver

Logfile is /var/log/cuda-installer.log

Next, configure the CUDA environment variables.

Set PATH, LD_LIBRARY_PATH, and CUDA_HOME (covering both the generic /usr/local/cuda path and the CUDA 12.1-specific path):

echo 'export PATH=/usr/local/cuda/bin:/usr/local/cuda-12.1/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc

Add the CUDA library paths to /etc/ld.so.conf:

echo '/usr/local/cuda/lib64' | sudo tee -a /etc/ld.so.conf
echo '/usr/local/cuda-12.1/lib64' | sudo tee -a /etc/ld.so.conf

Run ldconfig:

sudo ldconfig

Apply the changes to the current session:

source ~/.bashrc

Verify the path settings:

echo $PATH | grep -E "cuda|cuda-12.1"
echo $LD_LIBRARY_PATH | grep -E "cuda|cuda-12.1"
echo $CUDA_HOME
ldconfig -p | grep "libcudart"
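
Beyond grep, here is a minimal Python sketch that confirms the CUDA runtime actually resolves through the library paths just configured; cudaRuntimeGetVersion is part of the standard CUDA runtime API, and 12010 corresponds to CUDA 12.1:

import ctypes
# Load libcudart through the dynamic linker search path and ask for its version
cudart = ctypes.CDLL("libcudart.so.12")
version = ctypes.c_int()
assert cudart.cudaRuntimeGetVersion(ctypes.byref(version)) == 0
print(version.value)  # expect 12010 for CUDA 12.1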

Verify that nvcc works:

nvcc --version

Output:

(base) root@autodl-container-39eb4a843f-12e69afc:~/autodl-tmp/code# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
(base) root@autodl-container-39eb4a843f-12e69afc:~/autodl-tmp/code#

3. Install system-level cuDNN 8.9.2.26

We install from the tarball here. First, download the cuDNN archive with wget:

wget -O cudnn-linux-x86_64-8.9.2.26_cuda12-archive.tar.xz https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.2.26_cuda12-archive.tar.xz

Then extract and install it:

# Extract the archive
tar -xvf cudnn-linux-x86_64-8.9.2.26_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-8.9.2.26_cuda12-archive
# Copy the extracted files into the CUDA install path
sudo cp -r include/* /usr/local/cuda/include
sudo cp -r lib/* /usr/local/cuda/lib64
# Alternatively, copy them into /usr/include/ and /usr/lib/x86_64-linux-gnu/
# Fix file permissions
sudo chmod a+r /usr/local/cuda/include/cudnn*.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
# Add the environment variable
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Refresh the linker cache
sudo ldconfig
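
Before moving on, a minimal Python sketch (using ctypes; cudnnGetVersion is part of the public cuDNN API) to confirm the dynamic linker can find the copied libraries:

import ctypes
# Load libcudnn through the linker and query its version;
# the value is major*1000 + minor*100 + patch, so 8.9.2 prints 8902
cudnn = ctypes.CDLL("libcudnn.so.8")
cudnn.cudnnGetVersion.restype = ctypes.c_size_t
print(cudnn.cudnnGetVersion())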

Other installation methods, such as via the package manager, are covered here: https://docs.nvidia.com/deeplearning/cudnn/latest/installation/linux.html#tarball-installation

4. Install PyTorch 2.3.0+cu121, torchvision 0.18.0, etc.

Install PyTorch and torchvision with pip. Note: PyTorch installed this way bundles its own CUDA 12.1 and cuDNN 8.9 runtime libraries, but these are for PyTorch's use only; they are not the same set as the system-level CUDA Toolkit and cuDNN installed earlier, which every environment can use. See:

https://docs.infini-ai.com/posts/where-is-cuda.html
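
To see this separation for yourself, here is a hedged, Linux-only sketch: it forces CUDA initialization in PyTorch and then inspects /proc/self/maps to show which libcudart the process actually loaded (typically torch's bundled copy, not the one under /usr/local/cuda):

import torch
# Force CUDA initialization so the runtime library gets mapped
torch.zeros(1, device='cuda')
# List every mapped libcudart; on a pip install this usually points into
# the torch/nvidia wheel directories rather than /usr/local/cuda
with open('/proc/self/maps') as f:
    paths = {line.split()[-1] for line in f if 'libcudart' in line}
print('\n'.join(sorted(paths)))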

First, create a new environment named pytorch on top of base:

conda create -n pytorch
conda activate pytorch

Then install torch 2.3.0 inside the pytorch environment:

pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu121

To verify the installation, create and run a script (python test_torch.py) with the following content:

import torch
print('PyTorch version: ' + str(torch.__version__))
print('CUDA available: ' + str(torch.cuda.is_available()))
print('cuDNN version: ' + str(torch.backends.cudnn.version()))
a = torch.tensor([0., 0.], dtype=torch.float32, device='cuda')
print('Tensor a =', a)
b = torch.randn(2, device='cuda')
print('Tensor b =', b)
c = a + b
print('Tensor c =', c)

import torchvision
print(torchvision.__version__)

The output is as follows:

PyTorch version: 2.3.0+cu121
CUDA available: True
cuDNN version: 8902
Tensor a = tensor([0., 0.], device='cuda:0')
Tensor b = tensor([-0.4807, -0.8651], device='cuda:0')
Tensor c = tensor([-0.4807, -0.8651], device='cuda:0')
0.18.0+cu121

5. Install onnxruntime-gpu

On PyPI, only the official onnxruntime-gpu packages from 1.19.0 onward support CUDA 12.x, but 1.19.0+ no longer supports cuDNN 8.x. So to get CUDA 12.x together with cuDNN 8.x you would normally have to build onnxruntime-gpu yourself; fortunately someone has already built it, and the wheels can be used directly.

# Install onnxruntime-gpu 1.18.0 with CUDA 12 + cuDNN 8 support:
pip install -U onnxruntime-gpu==1.18.0 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

To verify the installation, create and run a script (python test_onnxruntime.py) with the following content:

import cv2
import numpy as np
import onnxruntime as ort
import torch

use_gpu = True

# Check the available execution providers
available_providers = ort.get_available_providers()
print("Available providers:", available_providers)
# Choose the execution providers
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if use_gpu and 'CUDAExecutionProvider' in available_providers else ['CPUExecutionProvider']
print("Using providers:", providers)
# Load the model
session = ort.InferenceSession('./yolov8s.onnx', providers=providers)
# Get the input and output names
input_name = session.get_inputs()[0].name
output_names = [output.name for output in session.get_outputs()]
print(f"input_name: {input_name}")
print(f"output_names: {output_names}")

# Bind a second session to PyTorch's current CUDA stream
providers = [("CUDAExecutionProvider", {"device_id": torch.cuda.current_device(),
                                        "user_compute_stream": str(torch.cuda.current_stream().cuda_stream)})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("./yolov8s.onnx", sess_options=sess_options, providers=providers)

The output is as follows:

Available providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Using providers: ['CUDAExecutionProvider', 'CPUExecutionProvider']
2024-12-15 22:59:47.961114142 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.2/m.0/cv2/ReduceMean'
2024-12-15 22:59:47.962762629 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.4/m.0/cv2/ReduceMean'
2024-12-15 22:59:47.963180939 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.6/m.0/cv2/ReduceMean'
2024-12-15 22:59:47.966209107 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.2/m.0/cv2/ReduceMean'
2024-12-15 22:59:47.966248419 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.4/m.0/cv2/ReduceMean'
2024-12-15 22:59:47.966477201 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.6/m.0/cv2/ReduceMean'
input_name: images
output_names: ['output0']
2024-12-15 22:59:48.085628154 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.2/m.0/cv2/ReduceMean'
2024-12-15 22:59:48.085909892 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.4/m.0/cv2/ReduceMean'
2024-12-15 22:59:48.086174739 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.6/m.0/cv2/ReduceMean'
2024-12-15 22:59:48.091461531 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.2/m.0/cv2/ReduceMean'
2024-12-15 22:59:48.091661505 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.4/m.0/cv2/ReduceMean'
2024-12-15 22:59:48.091925856 [W:onnxruntime:, constant_folding.cc:269 ApplyImpl] Could not find a CPU kernel and hence can't constant fold ReduceMean node '/model.6/m.0/cv2/ReduceMean'
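
To additionally confirm that the CUDA provider actually executes, here is a minimal sketch that runs one inference pass with a dummy input; it assumes a stock yolov8s export with a 1x3x640x640 float32 input:

import numpy as np
import onnxruntime as ort
# One dummy forward pass through the CUDA provider
sess = ort.InferenceSession('./yolov8s.onnx',
                            providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print(outputs[0].shape)  # (1, 84, 8400) for the stock yolov8s export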

6. Install TensorRT 8.6.1

Download the [Tar] package for the TensorRT release matching your CUDA version, for Linux x86_64.

For whatever reason, the TensorRT 9.x installers have all vanished from the official site, so I downloaded TensorRT 8.6.1.8, which supports CUDA 12.1.

Download "TensorRT 8.6 GA for Linux x86_64 and CUDA 12.0 and 12.1 TAR Package" and upload it into the container.

Install following the official guide: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar

# Extract the archive
tar -xvf TensorRT-8.6.1.8.Linux.x86_64-gnu.cuda-12.1.cudnn8.9.tar.gz
# Add the absolute path of the TensorRT lib directory to LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/path/to/TensorRT-8.6.1.8/lib:$LD_LIBRARY_PATH
# Install the TensorRT Python wheel (cp310 matches Python 3.10; pick the wheel for your Python version)
cd TensorRT-8.6.1.8/python
python3 -m pip install tensorrt-*-cp310-none-linux_x86_64.whl
# (Optional) Install the TensorRT lean and dispatch runtime wheels:
python3 -m pip install tensorrt_lean-*-cp310-none-linux_x86_64.whl
python3 -m pip install tensorrt_dispatch-*-cp310-none-linux_x86_64.whl

To verify the installation, create and run a script (python test_tensorrt.py) with the following content:

import os

import onnx
from onnx import shape_inference
import tensorrt as trt

print(f"TensorRT version: {trt.__version__}")

# Load the ONNX model
onnx_model_path = "./yolov8s.onnx"
onnx_model = onnx.load(onnx_model_path)

# Run shape inference
inferred_model = shape_inference.infer_shapes(onnx_model)
onnx.save(inferred_model, "inferred_" + os.path.basename(onnx_model_path))

# Create the TensorRT builder and network definition
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Parse the ONNX model
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("inferred_" + os.path.basename(onnx_model_path), 'rb') as model:
    if not parser.parse(model.read()):
        print('Failed to parse the ONNX file.')
        for error in range(parser.num_errors):
            print(parser.get_error(error))
        exit()

# Build the TensorRT engine
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GB
serialized_engine = builder.build_serialized_network(network, config)

# Save the TensorRT engine file
engine_file_path = "yolov8s.trt"
with open(engine_file_path, "wb") as f:
    f.write(serialized_engine)

# Loading the engine and running inference is omitted here

If the script runs without errors and converts the ONNX model to a TensorRT engine, the installation succeeded.
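
As an extra check, a minimal sketch that deserializes the engine we just wrote to make sure it is valid:

import tensorrt as trt
# Deserialize the engine file produced by the script above
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)
with open('yolov8s.trt', 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print('engine ok:', engine is not None)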

7. Build and install OpenCV 4.9.0 with CUDA support

First install the build tools; these packages are also prerequisites for OpenCV to run properly.

sudo apt install cmake
sudo apt install python3-numpy
sudo apt install libavcodec-dev libavformat-dev libswscale-dev
sudo apt install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
sudo apt install libgtk-3-dev
sudo apt install libpng-dev libjpeg-dev libopenexr-dev libtiff-dev libwebp-dev

Then download the OpenCV 4.9.0 sources:

sudo apt install git
cd ~/Downloads
git clone --branch=4.9.0 --single-branch https://github.com/opencv/opencv.git
git clone --branch=4.9.0 --single-branch https://github.com/opencv/opencv_contrib.git

Build and install it into the current conda environment:

cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON \
      -D WITH_CUDNN=ON \
      -D WITH_CUBLAS=ON \
      -D WITH_TBB=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D OPENCV_ENABLE_NONFREE=ON \
      -D CUDA_ARCH_BIN=8.9 \
      -D OPENCV_EXTRA_MODULES_PATH=$HOME/Downloads/opencv_contrib/modules \
      -D BUILD_EXAMPLES=OFF \
      -D HAVE_opencv_python3=ON \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.1 \
      -D CUDNN_INCLUDE_DIR=/usr/include/ \
      -D CUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so.8 \
      -D PYTHON_DEFAULT_EXECUTABLE=$(python3 -c "import sys; print(sys.executable)") \
      -D PYTHON3_EXECUTABLE=$(python3 -c "import sys; print(sys.executable)") \
      -D PYTHON3_NUMPY_INCLUDE_DIRS=$(python3 -c "import numpy; print(numpy.get_include())") \
      -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
      ..

Note: change CUDA_ARCH_BIN to your GPU's compute capability (e.g. CUDA_ARCH_BIN=8.9); change CUDNN_INCLUDE_DIR and CUDNN_LIBRARY to your cuDNN install paths (e.g. CUDNN_INCLUDE_DIR=/usr/include/ and CUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so.8); and change OPENCV_EXTRA_MODULES_PATH to the path of your opencv_contrib checkout (e.g. OPENCV_EXTRA_MODULES_PATH=$HOME/Downloads/opencv_contrib/modules). Finally, run:

make -j$(nproc)

Wait for the compilation to finish, then run:

sudo make install
sudo ldconfig

To verify the installation, create and run a script (python test_cv.py) with the following content:

import cv2
print(cv2.__version__)
print(cv2.cuda.getCudaEnabledDeviceCount())
print(cv2.getBuildInformation())

If no errors are reported, the installation succeeded.
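
getCudaEnabledDeviceCount only counts devices; to confirm the cv2.cuda module really executes on the GPU, here is a minimal sketch that pushes a frame through a CUDA Gaussian filter (createGaussianFilter comes from the cudafilters contrib module built above):

import cv2
import numpy as np
# Upload a random frame, blur it on the GPU, download the result
img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
gpu = cv2.cuda_GpuMat()
gpu.upload(img)
blur = cv2.cuda.createGaussianFilter(cv2.CV_8UC3, cv2.CV_8UC3, (5, 5), 0)
out = blur.apply(gpu).download()
print(out.shape)  # (480, 640, 3)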

Note: since my pytorch environment uses the same Python 3.10.14 as the base environment, this CUDA-enabled OpenCV 4.9.0 is directly usable from the pytorch environment. The Python package of the CUDA-enabled OpenCV 4.9.0 lives in /usr/local/lib/python3.10/dist-packages/cv2. If you use a different Python version, create a symlink into that Python's site-packages yourself: sudo ln -s /usr/local/lib/python3.10/dist-packages/cv2 /root/miniconda3/lib/python3.xx/site-packages/cv2
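
A quick way to check which cv2 a given interpreter actually resolves (handy after creating the symlink):

import cv2
# Should point into the CUDA-enabled build's package directory
print(cv2.__file__)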

Also, do not uninstall the packages installed earlier with apt; otherwise importing OpenCV will fail and it won't run.
