While timing the individual layers of a neural network, I ran into a very strange problem: switching between Caffe's own GPU implementation and the cuDNN implementation made a huge difference for convolution, but essentially none for the pooling layer. After finding time to inspect the code, I traced it to the layer_factory (factory-pattern) code. This post covers the following aspects:

1. The factory pattern

2. layer_factory in detail

3. The pitfall in layer_factory

4. Impact analysis

1. The factory pattern

The factory pattern is one of the classic design patterns. It addresses the situation where, at coding time, you cannot foresee which class will need to be instantiated, and where the rest of the system should not depend on how product objects are created, composed, or represented. Its drawback is the extra machinery, which is hard to justify in projects that need little extension.

The factory pattern has three roles (a minimal sketch follows the list):

Factory: produces concrete products according to program logic.

Abstract product: the parent class of all concrete products, typically realized as an interface in Java or an abstract class in C++.

Concrete product: the product instances themselves.
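
Here is a minimal C++ sketch of those three roles, using a simple string-keyed factory in the same spirit as Caffe's; all names here are invented for illustration:

    #include <iostream>
    #include <memory>
    #include <stdexcept>
    #include <string>

    // Abstract product role: the interface all concrete products share.
    struct Layer {
      virtual ~Layer() {}
      virtual std::string Type() const = 0;
    };

    // Concrete product roles: the instances the factory hands out.
    struct ConvLayer : Layer {
      std::string Type() const override { return "Convolution"; }
    };
    struct PoolLayer : Layer {
      std::string Type() const override { return "Pooling"; }
    };

    // Factory role: decides which concrete product to build from a type string.
    std::shared_ptr<Layer> CreateLayer(const std::string& type) {
      if (type == "Convolution") return std::make_shared<ConvLayer>();
      if (type == "Pooling") return std::make_shared<PoolLayer>();
      throw std::runtime_error("unknown layer type: " + type);
    }

    int main() {
      // The caller never names a concrete class; it only supplies a type string.
      std::shared_ptr<Layer> layer = CreateLayer("Pooling");
      std::cout << layer->Type() << std::endl;  // prints "Pooling"
      return 0;
    }

The point of the indirection is that callers name a product only by a string, so new concrete products can be added without touching any call site.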

2. layer_factory in detail

As is well known, Caffe 1.0 currently ships three major families of operators: CPU implementations, Caffe's own CUDA implementations, and cuDNN implementations. The layer_factory files are what assemble Caffe's operators: in factory-pattern terms, the factory selects at run time whichever operator version matches the user's settings.

The following is adapted from http://zhuanlan.zhihu.com/hacker-and-painter/20456649

layer_factory.hpp is layer_factory's header file:

    /**
     * @brief A layer factory that allows one to register layers.
     * During runtime, registered layers could be called by passing a LayerParameter
     * protobuffer to the CreateLayer function:
     *
     *     LayerRegistry<Dtype>::CreateLayer(param);
     *
     * There are two ways to register a layer. Assuming that we have a layer like:
     *
     *   template <typename Dtype>
     *   class MyAwesomeLayer : public Layer<Dtype> {
     *     // your implementations
     *   };
     *
     * and its type is its C++ class name, but without the "Layer" at the end
     * ("MyAwesomeLayer" -> "MyAwesome").
     *
     * If the layer is going to be created simply by its constructor, in your c++
     * file, add the following line:
     *
     *    REGISTER_LAYER_CLASS(MyAwesome);
     *
     * Or, if the layer is going to be created by another creator function, in the
     * format of:
     *
     *    template <typename Dtype>
     *    Layer<Dtype*> GetMyAwesomeLayer(const LayerParameter& param) {
     *      // your implementation
     *    }
     *
     * (for example, when your layer has multiple backends, see GetConvolutionLayer
     * for a use case), then you can register the creator function instead, like
     *
     * REGISTER_LAYER_CREATOR(MyAwesome, GetMyAwesomeLayer)
     *
     * Note that each layer type should only be registered once.
     */

    #ifndef CAFFE_LAYER_FACTORY_H_
    #define CAFFE_LAYER_FACTORY_H_

    #include <map>
    #include <string>

    #include "caffe/common.hpp"
    #include "caffe/proto/caffe.pb.h"

    namespace caffe {

    template <typename Dtype>
    class Layer;

    // LayerRegistry is simple: it puts each layer class and its type string into
    // a map so layers can be created flexibly by name. Its job is registration.
    template <typename Dtype>
    class LayerRegistry {
     public:
      // Creator is a function pointer returning a Layer<Dtype> pointer
      typedef shared_ptr<Layer<Dtype> > (*Creator)(const LayerParameter&);
      // CreatorRegistry maps type strings to their Creators
      typedef std::map<string, Creator> CreatorRegistry;

      static CreatorRegistry& Registry() {
        static CreatorRegistry* g_registry_ = new CreatorRegistry();
        return *g_registry_;
      }

      // Adds a creator.
      // Insert the (type string, function pointer) pair into the table
      static void AddCreator(const string& type, Creator creator) {
        CreatorRegistry& registry = Registry();
        CHECK_EQ(registry.count(type), 0)
            << "Layer type " << type << " already registered.";
        registry[type] = creator;
      }

      // Get a layer using a LayerParameter.
      // Given the layer type, create the layer
      static shared_ptr<Layer<Dtype> > CreateLayer(const LayerParameter& param) {
        LOG(INFO) << "Creating layer " << param.name();
        // Read the type string from the parameter
        const string& type = param.type();
        // Check that a Creator is registered for the given type
        CreatorRegistry& registry = Registry();
        CHECK_EQ(registry.count(type), 1) << "Unknown layer type: " << type
            << " (known types: " << LayerTypeList() << ")";
        // Invoke the Creator registered for this layer type
        return registry[type](param);
      }

     private:
      // Layer registry should never be instantiated - everything is done with its
      // static variables.
      // The constructor is private to forbid instantiation; every member is static
      LayerRegistry() {}

      // Return the list of registered layer types
      static string LayerTypeList() {
        // Get the registry
        CreatorRegistry& registry = Registry();
        string layer_types;
        // Walk the registry and append each type to the layer_types string
        for (typename CreatorRegistry::iterator iter = registry.begin();
             iter != registry.end(); ++iter) {
          if (iter != registry.begin()) {
            layer_types += ", ";
          }
          layer_types += iter->first;
        }
        return layer_types;
      }
    };

    // LayerRegisterer
    // The registerer for user-defined layers,
    // used by the macros below
    template <typename Dtype>
    class LayerRegisterer {
     public:
      // The registerer's constructor
      LayerRegisterer(const string& type,
                      shared_ptr<Layer<Dtype> > (*creator)(const LayerParameter&)) {
        // LOG(INFO) << "Registering layer type: " << type;
        // It simply calls the registry's AddCreator to add the Creator
        LayerRegistry<Dtype>::AddCreator(type, creator);
      }
    };

    // For convenience, macros are provided for registering your own layer class.
    // This one generates the two registerer objects g_creator_f_##type and
    // g_creator_d_##type (float and double versions)
    #define REGISTER_LAYER_CREATOR(type, creator)                                  \
      static LayerRegisterer<float> g_creator_f_##type(#type, creator<float>);     \
      static LayerRegisterer<double> g_creator_d_##type(#type, creator<double>)

    /* Register your own class whose name is <type>Layer.
       Suppose type = bias; then the macro generates the following code.
       First, a function that directly calls your class's constructor to create
       an instance and return it:
         Creator_biasLayer(const LayerParameter& param)
       Next, a static LayerRegisterer<float> variable g_creator_f_biasLayer
       (float version; it binds your class's type string to its creator in the
       registry):
         static LayerRegisterer<float> g_creator_f_biasLayer("bias", Creator_biasLayer)
       and a static LayerRegisterer<double> variable g_creator_d_biasLayer
       (double version):
         static LayerRegisterer<double> g_creator_d_biasLayer("bias", Creator_biasLayer)
    */
    #define REGISTER_LAYER_CLASS(type)                                             \
      template <typename Dtype>                                                    \
      shared_ptr<Layer<Dtype> > Creator_##type##Layer(const LayerParameter& param) \
      {                                                                            \
        return shared_ptr<Layer<Dtype> >(new type##Layer<Dtype>(param));           \
      }                                                                            \
      REGISTER_LAYER_CREATOR(type, Creator_##type##Layer)

    }  // namespace caffe

    #endif  // CAFFE_LAYER_FACTORY_H_
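
As a quick, hedged illustration of how this registry is consumed (in the real code path, Net<Dtype>::Init creates every layer of the parsed network this way):

    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"

    void CreateByName() {
      caffe::LayerParameter param;
      param.set_name("relu1");
      param.set_type("ReLU");  // registered type string: class name minus "Layer"
      // Look up "ReLU" in the registry and invoke the Creator registered for it.
      boost::shared_ptr<caffe::Layer<float> > layer =
          caffe::LayerRegistry<float>::CreateLayer(param);
    }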

With the header laid out above, here is the implementation (this part has diverged a bit from the 1.0 release, but not in any way that matters here).

layer_factory.cpp:

    // Make sure we include Python.h before any system header
    // to avoid _POSIX_C_SOURCE redefinition
    #ifdef WITH_PYTHON_LAYER
    #include <boost/python.hpp>
    #endif
    #include <string>

    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"
    #include "caffe/vision_layers.hpp"

    #ifdef WITH_PYTHON_LAYER
    #include "caffe/python_layer.hpp"
    #endif

    namespace caffe {

    // A creator function that returns a convolution layer instance
    // Get convolution layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetConvolutionLayer(
        const LayerParameter& param) {
      // Read which engine to use from the parameter: CUDNN, CAFFE, or DEFAULT.
      // As caffe.proto shows, engine is an enum type.
      ConvolutionParameter_Engine engine = param.convolution_param().engine();
      if (engine == ConvolutionParameter_Engine_DEFAULT) {
        engine = ConvolutionParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = ConvolutionParameter_Engine_CUDNN;
    #endif
      }
      if (engine == ConvolutionParameter_Engine_CAFFE) {
        // Construct Caffe's own convolution layer directly
        return shared_ptr<Layer<Dtype> >(new ConvolutionLayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == ConvolutionParameter_Engine_CUDNN) {
        // Construct the CUDNN convolution layer
        return shared_ptr<Layer<Dtype> >(new CuDNNConvolutionLayer<Dtype>(param));
    #endif
      } else {  // otherwise the engine is invalid
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }
    // Register the convolution layer: type name "Convolution",
    // instances produced by the GetConvolutionLayer function
    REGISTER_LAYER_CREATOR(Convolution, GetConvolutionLayer);

    // Get a pooling layer instance; same logic as the convolution layer.
    // Get pooling layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetPoolingLayer(const LayerParameter& param) {
      PoolingParameter_Engine engine = param.pooling_param().engine();
      if (engine == PoolingParameter_Engine_DEFAULT) {
        engine = PoolingParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = PoolingParameter_Engine_CUDNN;
    #endif
      }
      if (engine == PoolingParameter_Engine_CAFFE) {
        return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == PoolingParameter_Engine_CUDNN) {
        PoolingParameter p_param = param.pooling_param();
        if (p_param.pad() || p_param.pad_h() || p_param.pad_w() ||
            param.top_size() > 1) {
          LOG(INFO) << "CUDNN does not support padding or multiple tops. "
                    << "Using Caffe's own pooling layer.";
          return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));
        }
        return shared_ptr<Layer<Dtype> >(new CuDNNPoolingLayer<Dtype>(param));
    #endif
      } else {
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }

    // Register the pooling layer
    REGISTER_LAYER_CREATOR(Pooling, GetPoolingLayer);

    // Register the ReLU layer
    // Get relu layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetReLULayer(const LayerParameter& param) {
      ReLUParameter_Engine engine = param.relu_param().engine();
      if (engine == ReLUParameter_Engine_DEFAULT) {
        engine = ReLUParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = ReLUParameter_Engine_CUDNN;
    #endif
      }
      if (engine == ReLUParameter_Engine_CAFFE) {
        return shared_ptr<Layer<Dtype> >(new ReLULayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == ReLUParameter_Engine_CUDNN) {
        return shared_ptr<Layer<Dtype> >(new CuDNNReLULayer<Dtype>(param));
    #endif
      } else {
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }

    REGISTER_LAYER_CREATOR(ReLU, GetReLULayer);

    // Register the sigmoid layer
    // Get sigmoid layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetSigmoidLayer(const LayerParameter& param) {
      SigmoidParameter_Engine engine = param.sigmoid_param().engine();
      if (engine == SigmoidParameter_Engine_DEFAULT) {
        engine = SigmoidParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = SigmoidParameter_Engine_CUDNN;
    #endif
      }
      if (engine == SigmoidParameter_Engine_CAFFE) {
        return shared_ptr<Layer<Dtype> >(new SigmoidLayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == SigmoidParameter_Engine_CUDNN) {
        return shared_ptr<Layer<Dtype> >(new CuDNNSigmoidLayer<Dtype>(param));
    #endif
      } else {
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }

    REGISTER_LAYER_CREATOR(Sigmoid, GetSigmoidLayer);

    // Register the softmax layer
    // Get softmax layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetSoftmaxLayer(const LayerParameter& param) {
      SoftmaxParameter_Engine engine = param.softmax_param().engine();
      if (engine == SoftmaxParameter_Engine_DEFAULT) {
        engine = SoftmaxParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = SoftmaxParameter_Engine_CUDNN;
    #endif
      }
      if (engine == SoftmaxParameter_Engine_CAFFE) {
        return shared_ptr<Layer<Dtype> >(new SoftmaxLayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == SoftmaxParameter_Engine_CUDNN) {
        return shared_ptr<Layer<Dtype> >(new CuDNNSoftmaxLayer<Dtype>(param));
    #endif
      } else {
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }

    REGISTER_LAYER_CREATOR(Softmax, GetSoftmaxLayer);

    // Register the tanh layer
    // Get tanh layer according to engine.
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetTanHLayer(const LayerParameter& param) {
      TanHParameter_Engine engine = param.tanh_param().engine();
      if (engine == TanHParameter_Engine_DEFAULT) {
        engine = TanHParameter_Engine_CAFFE;
    #ifdef USE_CUDNN
        engine = TanHParameter_Engine_CUDNN;
    #endif
      }
      if (engine == TanHParameter_Engine_CAFFE) {
        return shared_ptr<Layer<Dtype> >(new TanHLayer<Dtype>(param));
    #ifdef USE_CUDNN
      } else if (engine == TanHParameter_Engine_CUDNN) {
        return shared_ptr<Layer<Dtype> >(new CuDNNTanHLayer<Dtype>(param));
    #endif
      } else {
        LOG(FATAL) << "Layer " << param.name() << " has unknown engine.";
      }
    }

    REGISTER_LAYER_CREATOR(TanH, GetTanHLayer);

    // Register the Python layer
    #ifdef WITH_PYTHON_LAYER
    template <typename Dtype>
    shared_ptr<Layer<Dtype> > GetPythonLayer(const LayerParameter& param) {
      Py_Initialize();
      try {
        bp::object module = bp::import(param.python_param().module().c_str());
        bp::object layer = module.attr(param.python_param().layer().c_str())(param);
        return bp::extract<shared_ptr<PythonLayer<Dtype> > >(layer)();
      } catch (bp::error_already_set) {
        PyErr_Print();
        throw;
      }
    }

    REGISTER_LAYER_CREATOR(Python, GetPythonLayer);
    #endif

    // Layers that use their constructor as their default creator should be
    // registered in their corresponding cpp files. Do not register them here.
    }  // namespace caffe
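
For completeness, the engine these creator functions switch on is normally set in the model definition. A sketch of pinning it explicitly in a prototxt (field names per caffe.proto; the kernel settings are an arbitrary example):

    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "conv1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 2
        stride: 2
        engine: CUDNN  # or CAFFE; DEFAULT falls through to CUDNN when built with USE_CUDNN
      }
    }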

3. The pitfall in layer_factory

In the current code, the registration of the Pooling layer contains the following:

    // CuDNN assumes layers are not being modified in place, thus
    // breaking our index tracking for updates in some cases in Caffe.
    // Until there is a workaround in Caffe (index management) or
    // cuDNN, use Caffe layer to max pooling, or don't use in place
    // layers after max pooling layers
    if (param.pooling_param().pool() == PoolingParameter_PoolMethod_MAX) {
      return shared_ptr<Layer<Dtype> >(new PoolingLayer<Dtype>(param));
    } else {
      return shared_ptr<Layer<Dtype> >(new CuDNNPoolingLayer<Dtype>(param));
    }

This directly means that as long as you use MaxPool, you always get Caffe's own .cu implementation and can never reach the cuDNN version, which explains why our MaxPool timings never changed in the earlier tests.
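
A hypothetical way to observe this from the outside (assuming a build with USE_CUDNN; the RTTI names are compiler-mangled but still tell the two classes apart):

    #include <typeinfo>

    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"

    void CheckWhichPooling() {
      caffe::LayerParameter param;
      param.set_name("pool1");
      param.set_type("Pooling");
      param.mutable_pooling_param()->set_pool(
          caffe::PoolingParameter_PoolMethod_MAX);
      param.mutable_pooling_param()->set_engine(
          caffe::PoolingParameter_Engine_CUDNN);

      boost::shared_ptr<caffe::Layer<float> > layer =
          caffe::LayerRegistry<float>::CreateLayer(param);
      // With the guard above in place, this prints the RTTI name of
      // caffe::PoolingLayer<float>, not caffe::CuDNNPoolingLayer<float>,
      // even though CUDNN was requested explicitly.
      LOG(INFO) << "Instantiated: " << typeid(*layer).name();
    }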

4. Impact analysis

But why don't the Caffe authors use cuDNN's MaxPool? Consulting the NVIDIA cuDNN User Manual, we find:

4.144. cudnnPoolingForward

    cudnnStatus_t cudnnPoolingForward(
        cudnnHandle_t                    handle,
        const cudnnPoolingDescriptor_t   poolingDesc,
        const void                      *alpha,
        const cudnnTensorDescriptor_t    xDesc,
        const void                      *x,
        const void                      *beta,
        const cudnnTensorDescriptor_t    yDesc,
        void                            *y)

This function computes pooling of input values (i.e., the maximum or average of several adjacent values) to produce an output with smaller height and/or width.

Note: All tensor formats are supported, best performance is expected when using HW-packed tensors. Only 2 and 3 spatial dimensions are allowed.
Note: The dimensions of the output tensor yDesc can be smaller or bigger than the dimensions advised by the routine cudnnGetPooling2dForwardOutputDim or cudnnGetPoolingNdForwardOutputDim.

Parameters

handle

Input. Handle to a previously created cuDNN context.

poolingDesc

Input. Handle to a previously initialized pooling descriptor.

alpha, beta

Input. Pointers to scaling factors (in host memory) used to blend the computation result with prior value in the output layer as follows: dstValue = alpha[0]*result + beta[0]*priorDstValue. Refer to this section for additional details.

xDesc

Input. Handle to the previously initialized input tensor descriptor. Must be of type FLOAT, or DOUBLE, or HALF, or INT8. See cudnnDataType_t.

x

Input. Data pointer to GPU memory associated with the tensor descriptor xDesc.

yDesc

Input. Handle to the previously initialized output tensor descriptor. Must be of type FLOAT, or DOUBLE, or HALF, or INT8. See cudnnDataType_t.

y

Output. Data pointer to GPU memory associated with the output tensor descriptor yDesc.

The possible error values returned by this function and their meanings are listed below.

Returns

CUDNN_STATUS_SUCCESS

The function launched successfully.

CUDNN_STATUS_BAD_PARAM

At least one of the following conditions are met:

  • The dimensions n, c of the input tensor and output tensors differ.
  • The datatype of the input tensor and output tensors differs.
CUDNN_STATUS_NOT_SUPPORTED

The function does not support the provided configuration. See the following for some examples of non-supported configurations:

  • The wStride of input tensor or output tensor is not 1.
CUDNN_STATUS_EXECUTION_FAILED

The function failed to launch on the GPU.
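
To make the interface concrete, here is a hedged sketch of calling it directly (assuming cuDNN v5 or later for the NaN-propagation argument; allocation and error checking omitted, and the 2x2/stride-2 max-pooling window is an arbitrary example):

    #include <cudnn.h>

    void PoolForwardSketch(cudnnHandle_t handle,
                           const float* d_x, float* d_y,
                           int n, int c, int h, int w,
                           int out_h, int out_w) {
      cudnnTensorDescriptor_t x_desc, y_desc;
      cudnnCreateTensorDescriptor(&x_desc);
      cudnnCreateTensorDescriptor(&y_desc);
      cudnnSetTensor4dDescriptor(x_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                 n, c, h, w);
      cudnnSetTensor4dDescriptor(y_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                 n, c, out_h, out_w);

      cudnnPoolingDescriptor_t pool_desc;
      cudnnCreatePoolingDescriptor(&pool_desc);
      cudnnSetPooling2dDescriptor(pool_desc, CUDNN_POOLING_MAX,
                                  CUDNN_PROPAGATE_NAN,
                                  2, 2,   // window
                                  0, 0,   // padding
                                  2, 2);  // stride

      // Note: the only data buffers that cross this interface are x and y;
      // there is no argument through which an argmax mask could come back.
      const float alpha = 1.0f, beta = 0.0f;
      cudnnPoolingForward(handle, pool_desc, &alpha, x_desc, d_x,
                          &beta, y_desc, d_y);

      cudnnDestroyPoolingDescriptor(pool_desc);
      cudnnDestroyTensorDescriptor(x_desc);
      cudnnDestroyTensorDescriptor(y_desc);
    }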

What is curious here is that only the input and output tensors can be passed in, which makes it impossible to maintain the max-pooling mask. I don't quite follow the cuDNN designers' reasoning; as things stand, preserving correctness means Caffe cannot use cuDNN's PoolingForward for this case for now.
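
To see what is missing, here is a simplified CPU sketch of the mask bookkeeping Caffe's own max pooling performs (modeled on, but not copied from, src/caffe/layers/pooling_layer.cpp):

    #include <algorithm>
    #include <cfloat>

    // Each output records the flat index of its argmax; the backward pass
    // routes the gradient back through exactly those indices.
    void MaxPoolWithMask(const float* x, float* y, int* mask,
                         int h, int w, int k, int stride,
                         int out_h, int out_w) {
      for (int ph = 0; ph < out_h; ++ph) {
        for (int pw = 0; pw < out_w; ++pw) {
          float best_val = -FLT_MAX;
          int best_idx = -1;
          const int h_end = std::min(ph * stride + k, h);
          const int w_end = std::min(pw * stride + k, w);
          for (int i = ph * stride; i < h_end; ++i) {
            for (int j = pw * stride; j < w_end; ++j) {
              if (x[i * w + j] > best_val) {
                best_val = x[i * w + j];
                best_idx = i * w + j;  // the "mask": flat argmax position
              }
            }
          }
          y[ph * out_w + pw] = best_val;
          // cudnnPoolingForward exposes no output through which to return this.
          mask[ph * out_w + pw] = best_idx;
        }
      }
    }

Since y is the only output cudnnPoolingForward exposes, a mask like this has nowhere to go, which matches the in-place caveat quoted from the Caffe source above.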
