Implement C++ Class

The C++ class of the layer implements the initialization, forward, and backward parts of the layer. It needs to derive from the base class paddle::Layer, and it needs to override the following functions:

  • constructor and destructor.
  • init function. It is used to initialize the parameters and settings.
  • forward. It implements the forward part of the layer.
  • backward. It implements the backward part of the layer.
  • prefetch. It is used to determine which rows of the parameter matrix to prefetch from the parameter server. You do not need to override this function if your layer does not require remote sparse update (most layers do not).

The header file:

namespace paddle {
/**
 * A layer has full connections to all neurons in the previous layer.
 * It computes an inner product with a set of learned weights, and
 * (optionally) adds biases.
 *
 * The config file api is fc_layer.
 */
class FullyConnectedLayer : public Layer {
protected:
  WeightList weights_;
  std::unique_ptr<Weight> biases_;

public:
  explicit FullyConnectedLayer(const LayerConfig& config)
      : Layer(config) {}
  ~FullyConnectedLayer() {}

  bool init(const LayerMap& layerMap, const ParameterMap& parameterMap);

  Weight& getWeight(int idx) { return *weights_[idx]; }

  void prefetch();
  void forward(PassType passType);
  void backward(const UpdateCallback& callback = nullptr);
};
}  // namespace paddle

It defines the parameters as class variables. We use the Weight class as an abstraction of a parameter; it supports multi-threaded updates. The details of this class will be described in the implementation below.

  • weights_ is a list of weights for the transformation matrices. The current implementation can have more than one input; thus, it keeps a list of weights, one weight per input.
  • biases_ is a weight for the bias vector.

The fully connected layer does not have layer configuration hyper-parameters. If a layer does have hyper-parameters, a common practice is to read them from the LayerConfig& config passed to the constructor and store them in class variables there, as in the sketch below.
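If, for example, a hypothetical layer had a scalar scale hyper-parameter, the constructor could cache it as sketched here. This is an illustration only: ScaledLayer and the accessor config.scale() are hypothetical names, not part of FullyConnectedLayer or the real LayerConfig.

class ScaledLayer : public Layer {
protected:
  // hypothetical hyper-parameter cached from the layer configuration
  real scale_;

public:
  // config.scale() is a hypothetical accessor; a real layer would read
  // whichever field its LayerConfig actually defines.
  explicit ScaledLayer(const LayerConfig& config)
      : Layer(config), scale_(config.scale()) {}
};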

The following code snippet implements the init function.

  • First, every init function must call the init function of the base class Layer::init(layerMap, parameterMap);. This statement will initialize the required variables and connections for each layer.
  • Then it initializes all the weight matrices $W$. The current implementation can have more than one input; thus, it keeps a list of weights. (The inputs of the current layer may come from multiple layers, and each input layer corresponds to one weight.)
  • Finally, it initializes the bias.
bool FullyConnectedLayer::init(const LayerMap& layerMap,
                               const ParameterMap& parameterMap) {
  /* Initialize the basic parent class */
  Layer::init(layerMap, parameterMap);

  /* initialize the weightList */
  CHECK(inputLayers_.size() == parameters_.size());
  for (size_t i = 0; i < inputLayers_.size(); i++) {
    // Obtain the dimensions of the parameter.
    // height: the number of neurons in the input layer
    size_t height = inputLayers_[i]->getSize();
    // width: the number of neurons in the current layer
    size_t width = getSize();

    // create a new weight
    if (parameters_[i]->isSparse()) {
      CHECK_LE(parameters_[i]->getSize(), width * height);
    } else {
      CHECK_EQ(parameters_[i]->getSize(), width * height);
    }
    Weight* w = new Weight(height, width, parameters_[i]);

    // append the new weight to the list
    weights_.emplace_back(w);
  }

  /* initialize biases_ */
  if (biasParameter_.get() != NULL) {
    biases_ = std::unique_ptr<Weight>(new Weight(1, getSize(), biasParameter_));
  }

  return true;
}
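As a concrete example of the dimensions involved: if the input layer has 256 neurons and the current layer has 10, the Weight created for that input is a $256×10$ matrix, and the bias is a $1×10$ row vector (hence Weight(1, getSize(), biasParameter_)).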

The implementation of the forward part has the following steps.

  • Every layer must call Layer::forward(passType); at the beginning of its forward function.
  • Then it allocates memory for the output using reserveOutput(batchSize, size);. This step is necessary because consecutive batches may have different sizes, and reserveOutput will change the size of the output accordingly. For the sake of efficiency, new memory is allocated when the matrix needs to grow, but the existing memory block is reused when it shrinks.
  • Then it computes $\sum_i x_i W_i + b$ using Matrix operations. getInput(i).value retrieves the matrix of the i-th input. Each input is a $batchSize×dim$ matrix, where each row represents a single sample in the batch. For a complete list of supported matrix operations, please refer to paddle/math/Matrix.h and paddle/math/BaseMatrix.h.
  • Finally it applies the activation function using forwardActivation();, which automatically applies the activation function specified in the network configuration.
void FullyConnectedLayer::forward(PassType passType) {
  Layer::forward(passType);

  /* malloc memory for the output_ if necessary */
  // batchSize is the number of samples; size is the number of neurons
  int batchSize = getInput(0).getBatchSize();
  int size = getSize();

  {
    // Set up the size of the output.
    reserveOutput(batchSize, size);
  }

  MatrixPtr outV = getOutputValue();

  // Apply the transformation matrix to each input.
  for (size_t i = 0; i != inputLayers_.size(); ++i) {
    auto input = getInput(i);
    CHECK(input.value) << "The input of 'fc' layer must be matrix";
    i == 0 ? outV->mul(input.value, weights_[i]->getW(), 1, 0)
           : outV->mul(input.value, weights_[i]->getW(), 1, 1);
  }

  /* add the bias-vector */
  if (biases_.get() != NULL) {
    outV->addBias(*(biases_->getW()), 1);
  }

  /* activation */ {
    forwardActivation();
  }
}
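To make the scale arguments concrete (assuming the usual convention of mul(a, b, scaleAB, scaleT), which computes this = scaleAB·a·b + scaleT·this): the first input uses (1, 0) to overwrite outV with $x_0 W_0$, and each later input uses (1, 1) to accumulate $x_i W_i$ onto the existing output. For example, with batchSize = 128, one input of dimension 256, and size = 10, input.value is $128×256$, weights_[0]->getW() is $256×10$, and outV is $128×10$.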

The implementation of the backward part has the following steps.

  • backwardActivation() computes the gradient of the activation. The gradient is multiplied in place into the gradient of the output, which can be retrieved using getOutputGrad().
  • Compute the gradient of the bias. Notice that we can use biases_->getWGrad() to get the gradient matrix of the corresponding parameter. After the gradient of a parameter is updated, the code must call getParameterPtr()->incUpdate(callback);, which supports parameter updates over multiple threads or multiple machines.
  • Then it computes the gradients of the transformation matrices and of the inputs, and it calls incUpdate for the corresponding parameter. This lets the framework know whether it has gathered all the gradients for a parameter, so that it can overlap work (e.g., network communication) with computation.
void FullyConnectedLayer::backward(const UpdateCallback& callback) {
  /* Do derivation for activations.*/ {
    // back-propagate through the activation function: multiply the output
    // gradient in place by the derivative of the activation
    backwardActivation();
  }

  if (biases_ && biases_->getWGrad()) {
    // compute the gradient of the loss with respect to the bias of this layer
    biases_->getWGrad()->collectBias(*getOutputGrad(), 1);

    biases_->getParameterPtr()->incUpdate(callback);
  }

  bool syncFlag = hl_get_sync_flag();

  for (size_t i = 0; i != inputLayers_.size(); ++i) {
    /* Calculate the W-gradient for the current layer */
    if (weights_[i]->getWGrad()) {
      MatrixPtr input_T = getInputValue(i)->getTranspose();
      MatrixPtr oGrad = getOutputGrad();
      {
        weights_[i]->getWGrad()->mul(input_T, oGrad, 1, 1);
      }
    }

    /* Calculate the input layers error */
    MatrixPtr preGrad = getInputGrad(i);
    if (NULL != preGrad) {
      MatrixPtr weights_T = weights_[i]->getW()->getTranspose();
      preGrad->mul(getOutputGrad(), weights_T, 1, 1);
    }

    {
      weights_[i]->getParameterPtr()->incUpdate(callback);
    }
  }
}
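In matrix form (a standard derivation, restated here for clarity), writing $y = \sum_i x_i W_i + b$ for the pre-activation output, the code above computes $\frac{\partial L}{\partial W_i} = x_i^T \frac{\partial L}{\partial y}$ and $\frac{\partial L}{\partial x_i} = \frac{\partial L}{\partial y} W_i^T$, while the bias gradient is the column-wise sum of $\frac{\partial L}{\partial y}$ over the batch, which is exactly what collectBias computes.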

The prefetch function specifies the rows that need to be fetched from the parameter server during training. It is only useful for remote sparse training. In remote sparse training, the full parameter matrix is stored in a distributed fashion on the parameter servers. When the layer processes a batch, only a subset of the input locations is non-zero in this batch, so the layer only needs the rows of the transformation matrix corresponding to these non-zero entries. The prefetch function specifies the ids of these rows.

Most layers do not need remote sparse training, and in that case you do not need to override this function.

void FullyConnectedLayer::prefetch() {
  for (size_t i = 0; i != inputLayers_.size(); ++i) {
    auto* sparseParam =
        dynamic_cast<SparsePrefetchRowCpuMatrix*>(weights_[i]->getW().get());
    if (sparseParam) {
      MatrixPtr input = getInputValue(i);
      sparseParam->addRows(input);
    }
  }
}
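For example, if the sparse inputs of a batch have non-zero entries only in columns 3, 17, and 42, addRows records these ids, and only rows 3, 17, and 42 of the transformation matrix need to be fetched from the parameter server for this batch.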

Finally, you can use REGISTER_LAYER(fc, FullyConnectedLayer); to register the layer. fc is the identifier of the layer, and FullyConnectedLayer is the class name of the layer.

namespace paddle {
REGISTER_LAYER(fc, FullyConnectedLayer);
}

If the .cpp file is put into paddle/gserver/layers, it will be automatically added to the compilation list.

Implement Python Wrapper

Implementing a Python wrapper allows us to use the new layer in configuration files. All the Python wrappers are in the file python/paddle/trainer/config_parser.py. An example of the Python wrapper for the fully connected layer is listed below. It has the following steps:

  • Use @config_layer('fc') as the decorator of the Python wrapper class. fc is the identifier of the layer.
  • Implement the __init__ constructor.
    • It first calls the base constructor, super(FCLayer, self).__init__(name, 'fc', size, inputs=inputs, **xargs). FCLayer is the Python wrapper class name, and fc is the layer identifier; both must be correct in order for the wrapper to work.
    • Then it computes the size and format (dense or sparse) of each transformation matrix and creates the corresponding parameters.
@config_layer('fc')
class FCLayer(LayerBase):
    def __init__(
            self,
            name,
            size,
            inputs,
            bias=True,
            **xargs):
        super(FCLayer, self).__init__(name, 'fc', size, inputs=inputs, **xargs)
        for input_index in xrange(len(self.inputs)):
            input_layer = self.get_input_layer(input_index)
            psize = self.config.size * input_layer.size
            dims = [input_layer.size, self.config.size]
            format = self.inputs[input_index].format
            sparse = format == "csr" or format == "csc"
            if sparse:
                psize = self.inputs[input_index].nnz
            self.create_input_parameter(input_index, psize, dims, sparse, format)
        self.create_bias_parameter(bias, self.config.size)

In a network configuration, the layer can be specified using the following code snippet. The arguments of this class are:

  • name is the name identifier of the layer instance.
  • type is the type of the layer, specified using layer identifier.
  • size is the output size of the layer.
  • bias specifies whether this layer instance has bias.
  • inputs specifies a list of layer instance names as inputs.
Layer(
    name = "fc1",
    type = "fc",
    size = 32,
    bias = True,
    inputs = [Input("pool3")]
)

We also recommend implementing a helper for the Python wrapper, which makes it easier to write models. You can refer to python/paddle/trainer_config_helpers/layers.py for examples.

http://doc.paddlepaddle.org/doc/howto/dev/new_layer_en.html

Paddle source code analysis:

http://wiki.babel.baidu.com/twiki/bin/view/Main/Paddle%E6%BA%90%E7%A0%81%E5%89%96%E6%9E%90--Layer#2.2 backward函数

http://wiki.baidu.com/pages/viewpage.action?pageId=353372756
