Sorting Through the Caffe Code: common (Part 8)
I wanted to work through how data_layer operates, and halfway through I realized that a few very important header files are involved — the ones named in the title.
To trace things back to their roots, let's start from the foundation. So what exactly lives in here?
The common class
Namespaces in use: google, cv, and caffe (which in turn pulls in boost and std).
With those in place the project can freely use Google's libraries, OpenCV, the C++ standard library, and the higher-level Boost libraries.
caffe uses the singleton pattern and wraps Boost's smart pointer (the soul of Caffe), a handful of common std utilities, and the important initialization work (the random number generator, plus the setup of Google's gflags and glog).
This gives a single unified interface, which makes porting and development easier. Why use random numbers at all? I was not entirely sure myself; here is an explanation from Zhihu:
Random numbers matter a great deal in Caffe. The most important use is weight initialization — Gaussian, Xavier, and so on — and how well the weights are initialized directly affects the final training result. Other uses include the random crop and mirror of training images and the choice of which neurons the dropout layer keeps. The RNG class wraps the random number facilities of Boost and the STL for convenient use. If you want to produce the same random numbers every run, just set a fixed seed; see the definition of random_seed in caffe.proto:
// If non-negative, the seed with which the Solver will initialize the Caffe
// random number generator -- useful for reproducible results. Otherwise,
// (and by default) initialize using a seed derived from the system clock.
optional int64 random_seed = 20 [default = -1];
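As a minimal sketch (my own example, not part of caffe.proto or common.hpp), the same effect is available in code: a non-negative value in the solver prototxt such as random_seed: 1701 corresponds to calling Caffe::set_random_seed(), which is declared in the header shown below.

#include "caffe/common.hpp"

// Hypothetical helper: fix the seed so that weight initialization, random
// crop/mirror and dropout masks are identical across runs.
void make_run_reproducible() {
  // Equivalent to "random_seed: 1701" in the solver prototxt; this seeds the
  // boost RNG and, in a GPU build, the curand generator as well.
  caffe::Caffe::set_random_seed(1701);
}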
The header file (common.hpp):
#ifndef CAFFE_COMMON_HPP_
#define CAFFE_COMMON_HPP_

#include <boost/shared_ptr.hpp>
#include <gflags/gflags.h>
#include <glog/logging.h>

#include <climits>
#include <cmath>
#include <fstream>  // NOLINT(readability/streams)
#include <iostream>  // NOLINT(readability/streams)
#include <map>
#include <set>
#include <sstream>
#include <string>
#include <utility>  // pair
#include <vector>

#include "caffe/util/device_alternate.hpp"

// Convert macro to string
#define STRINGIFY(m) #m
#define AS_STRING(m) STRINGIFY(m)

// gflags 2.1 issue: namespace google was changed to gflags without warning.
// Luckily we will be able to use GFLAGS_GFLAGS_H_ to detect if it is version
// 2.1. If yes, we will add a temporary solution to redirect the namespace.
// TODO(Yangqing): Once gflags solves the problem in a more elegant way, let's
// remove the following hack.
// Detect gflags 2.1
#ifndef GFLAGS_GFLAGS_H_
namespace gflags = google;
#endif  // GFLAGS_GFLAGS_H_

// Disable the copy and assignment operator for a class.
// Declaring (but not defining) a private copy constructor and assignment
// operator prevents an instance of the class from being copy-constructed
// or assigned from another instance.
#define DISABLE_COPY_AND_ASSIGN(classname) \
private:\
  classname(const classname&);\
  classname& operator=(const classname&)

// Instantiate a class with float and double specifications.
#define INSTANTIATE_CLASS(classname) \
  char gInstantiationGuard##classname; \
  template class classname<float>; \
  template class classname<double>

// Explicitly instantiate the GPU forward pass for float and double
#define INSTANTIATE_LAYER_GPU_FORWARD(classname) \
  template void classname<float>::Forward_gpu( \
      const std::vector<Blob<float>*>& bottom, \
      const std::vector<Blob<float>*>& top); \
  template void classname<double>::Forward_gpu( \
      const std::vector<Blob<double>*>& bottom, \
      const std::vector<Blob<double>*>& top);

// Explicitly instantiate the GPU backward pass for float and double
#define INSTANTIATE_LAYER_GPU_BACKWARD(classname) \
  template void classname<float>::Backward_gpu( \
      const std::vector<Blob<float>*>& top, \
      const std::vector<bool>& propagate_down, \
      const std::vector<Blob<float>*>& bottom); \
  template void classname<double>::Backward_gpu( \
      const std::vector<Blob<double>*>& top, \
      const std::vector<bool>& propagate_down, \
      const std::vector<Blob<double>*>& bottom)

// Explicitly instantiate both the GPU forward and backward passes
#define INSTANTIATE_LAYER_GPU_FUNCS(classname) \
  INSTANTIATE_LAYER_GPU_FORWARD(classname); \
  INSTANTIATE_LAYER_GPU_BACKWARD(classname)

// A simple macro to mark codes that are not implemented, so that when the code
// is executed we will see a fatal log.
// NOT_IMPLEMENTED simply expands to LOG(FATAL) << "Not Implemented Yet"
#define NOT_IMPLEMENTED LOG(FATAL) << "Not Implemented Yet"

// See PR #1236
namespace cv { class Mat; }

/*
 The Caffe class contains an RNG member; the RNG class in turn contains a
 Generator class. RNG relies on Caffe's Get() to obtain the thread-local
 Caffe instance, and delegates to Generator, which is what actually produces
 the random numbers.
*/
namespace caffe {

// We will use the boost shared_ptr instead of the new C++11 one mainly
// because cuda does not work (at least now) well with C++11 features.
using boost::shared_ptr;

// Common functions and classes from std that caffe often uses.
using std::fstream;
using std::ios;
// using std::isnan;  // the VC++ compiler does not provide these two
// using std::isinf;
using std::iterator;
using std::make_pair;
using std::map;
using std::ostringstream;
using std::pair;
using std::set;
using std::string;
using std::stringstream;
using std::vector;

// A global initialization function that you should call in your main function.
// Currently it initializes google flags and google logging.
void GlobalInit(int* pargc, char*** pargv);

// A singleton class to hold common caffe stuff, such as the handler that
// caffe is going to use for cublas, curand, etc.
class Caffe {
public:
  ~Caffe();

  // Thread local context for Caffe. Moved to common.cpp instead of
  // including boost/thread.hpp to avoid a boost/NVCC issues (#1009, #1010)
  // on OSX. Also fails on Linux with CUDA 7.0.18.
  // Get() is implemented with boost's thread-local storage.
  static Caffe& Get();

  // Brew is the CPU/GPU enum. Does the name come from Homebrew, the macOS
  // package manager? Just a guess on my part.
  enum Brew { CPU, GPU };

  // This random number generator facade hides boost and CUDA rng
  // implementation from one another (for cross-platform compatibility).
  class RNG {
   public:
    RNG();  // seeds the internal generator_ from the system entropy pool or the time
    explicit RNG(unsigned int seed);
    explicit RNG(const RNG&);
    RNG& operator=(const RNG&);
    void* generator();
   private:
    class Generator;
    shared_ptr<Generator> generator_;
  };

  // Getters for boost rng, curand, and cublas handles
  inline static RNG& rng_stream() {
    if (!Get().random_generator_) {
      Get().random_generator_.reset(new RNG());
    }
    return *(Get().random_generator_);
  }
#ifndef CPU_ONLY  // GPU build
  inline static cublasHandle_t cublas_handle() { return Get().cublas_handle_; }  // cublas handle
  inline static curandGenerator_t curand_generator() {  // curand generator handle
    return Get().curand_generator_;
  }
#endif

  // The block below sets CPU/GPU mode and, I suppose, the number of parallel
  // solvers used during training.
  // Returns the mode: running on CPU or GPU.
  inline static Brew mode() { return Get().mode_; }
  // The setters for the variables
  // Sets the mode. It is recommended that you don't change the mode halfway
  // into the program since that may cause allocation of pinned memory being
  // freed in a non-pinned way, which may cause problems - I haven't verified
  // it personally but better to note it here in the header file.
  inline static void set_mode(Brew mode) { Get().mode_ = mode; }
  // Sets the random seed of both boost and curand
  static void set_random_seed(const unsigned int seed);
  // Sets the device. Since we have cublas and curand stuff, set device also
  // requires us to reset those values.
  static void SetDevice(const int device_id);
  // Prints the current GPU status.
  static void DeviceQuery();
  // Parallel training info
  inline static int solver_count() { return Get().solver_count_; }
  inline static void set_solver_count(int val) { Get().solver_count_ = val; }
  inline static bool root_solver() { return Get().root_solver_; }
  inline static void set_root_solver(bool val) { Get().root_solver_ = val; }

 protected:
#ifndef CPU_ONLY
  cublasHandle_t cublas_handle_;        // cublas handle
  curandGenerator_t curand_generator_;  // curand generator handle
#endif
  shared_ptr<RNG> random_generator_;

  Brew mode_;
  int solver_count_;
  bool root_solver_;

 private:
  // The private constructor to avoid duplicate instantiation.
  Caffe();
  // Forbid copy construction and assignment of the Caffe class.
  DISABLE_COPY_AND_ASSIGN(Caffe);
};

}  // namespace caffe

#endif  // CAFFE_COMMON_HPP_
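Before moving on to the implementation, here is a quick usage sketch of the RNG facade (my own example, not from the Caffe sources, assuming a reasonably recent Boost): rng_stream() returns the thread-local RNG, and generator() exposes the underlying boost::mt19937 (typedef'd as caffe::rng_t in caffe/util/rng.hpp), which can be plugged into any Boost distribution.

#include <boost/random/uniform_real_distribution.hpp>

#include "caffe/common.hpp"
#include "caffe/util/rng.hpp"  // caffe::rng_t is a typedef for boost::mt19937

// Draw one uniform sample in [0, 1) from Caffe's thread-local generator.
float sample_uniform() {
  caffe::rng_t* rng =
      static_cast<caffe::rng_t*>(caffe::Caffe::rng_stream().generator());
  boost::random::uniform_real_distribution<float> dist(0.0f, 1.0f);
  return dist(*rng);  // each thread has its own Caffe instance, hence its own RNG
}

Because Get() is backed by thread-local storage (see the .cpp below), two threads calling this function never share generator state.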
The source file (common.cpp):
#include <boost/thread.hpp>
#include <glog/logging.h>
#include <cmath>
#include <cstdio>
#include <ctime>

#include "caffe/common.hpp"
#include "caffe/util/rng.hpp"

namespace caffe {

// Make sure each thread can have different values.
// boost::thread_specific_ptr provides thread-local storage;
// its initial value is NULL.
static boost::thread_specific_ptr<Caffe> thread_instance_;

Caffe& Caffe::Get() {
  if (!thread_instance_.get()) {          // if the current thread has no Caffe instance yet
    thread_instance_.reset(new Caffe());  // create one and return it
  }
  return *(thread_instance_.get());
}

// random seeding
// Obtain a random seed from the Linux entropy pool (/dev/urandom).
int64_t cluster_seedgen(void) {
  int64_t s, seed, pid;
  FILE* f = fopen("/dev/urandom", "rb");
  if (f && fread(&seed, 1, sizeof(seed), f) == sizeof(seed)) {
    fclose(f);
    return seed;
  }

  LOG(INFO) << "System entropy source not available, "
               "using fallback algorithm to generate seed instead.";
  if (f)
    fclose(f);

  // Fall back to the traditional time/pid-based seed.
  pid = getpid();
  s = time(NULL);
  seed = std::abs(((s * 181) * ((pid - 83) * 359)) % 104729);
  return seed;
}
// Initialize gflags and glog
void GlobalInit(int* pargc, char*** pargv) {
// Google flags.
::gflags::ParseCommandLineFlags(pargc, pargv, true);
// Google logging.
::google::InitGoogleLogging(*(pargv)[0]);
// Provide a backtrace on segfault.
::google::InstallFailureSignalHandler();
}
#ifdef CPU_ONLY // CPU-only Caffe.
Caffe::Caffe()
    : random_generator_(), mode_(Caffe::CPU),   // shared_ptr<RNG> random_generator_; Brew mode_;
      solver_count_(1), root_solver_(true) { }  // int solver_count_; bool root_solver_;

Caffe::~Caffe() { }

// Manually set the seed of the random number generator
void Caffe::set_random_seed(const unsigned int seed) {
  // RNG seed
  Get().random_generator_.reset(new RNG(seed));
}
void Caffe::SetDevice(const int device_id) {
NO_GPU;
}
void Caffe::DeviceQuery() {
NO_GPU;
}
// Define the Generator class nested inside RNG
class Caffe::RNG::Generator {
 public:
  // Seed from the Linux entropy pool; note that caffe::rng_t is a typedef for
  // boost::mt19937 in caffe/util/rng.hpp.
  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}
  // Seed with a user-supplied value.
  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}
  caffe::rng_t* rng() { return rng_.get(); }  // accessor
 private:
  shared_ptr<caffe::rng_t> rng_;  // the wrapped generator
};
// RNG constructors
Caffe::RNG::RNG() : generator_(new Generator()) { }
Caffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }
// RNG copy-assignment operator
Caffe::RNG& Caffe::RNG::operator=(const RNG& other) {
generator_ = other.generator_;
return *this;
}
void* Caffe::RNG::generator() {
return static_cast<void*>(generator_->rng());
}
#else // Normal GPU + CPU Caffe.
// Constructor: create the cublas and curand handles
Caffe::Caffe()
: cublas_handle_(NULL), curand_generator_(NULL), random_generator_(),
mode_(Caffe::CPU), solver_count_(1), root_solver_(true) {
// Try to create a cublas handler, and report an error if failed (but we will
// keep the program running as one might just want to run CPU code).
  // Initialize cublas and obtain a handle
if (cublasCreate(&cublas_handle_) != CUBLAS_STATUS_SUCCESS) {
LOG(ERROR) << "Cannot create Cublas handle. Cublas won't be available.";
}
// Try to create a curand handler.
if (curandCreateGenerator(&curand_generator_, CURAND_RNG_PSEUDO_DEFAULT)
!= CURAND_STATUS_SUCCESS ||
curandSetPseudoRandomGeneratorSeed(curand_generator_, cluster_seedgen())
!= CURAND_STATUS_SUCCESS) {
LOG(ERROR) << "Cannot create Curand generator. Curand won't be available.";
}
}

Caffe::~Caffe() {
  // Destroy the handles
  if (cublas_handle_) CUBLAS_CHECK(cublasDestroy(cublas_handle_));
  if (curand_generator_) {
    CURAND_CHECK(curandDestroyGenerator(curand_generator_));
  }
}
// Seed both the CUDA (curand) generator and the CPU (boost) generator
void Caffe::set_random_seed(const unsigned int seed) {
  // Curand seed
  // A static flag ensures that curand's unavailability is logged only once.
  static bool g_curand_availability_logged = false;
  if (Get().curand_generator_) {
    // CURAND_CHECK is defined in util/device_alternate.hpp
    CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(curand_generator(),
        seed));
    CURAND_CHECK(curandSetGeneratorOffset(curand_generator(), 0));
  } else {
    if (!g_curand_availability_logged) {
      LOG(ERROR) <<
          "Curand not available. Skipping setting the curand seed.";
      g_curand_availability_logged = true;
    }
  }
  // RNG seed
  // CPU code
  Get().random_generator_.reset(new RNG(seed));
}

// Select the GPU device, then recreate the handles and reseed
void Caffe::SetDevice(const int device_id) {
  int current_device;
  CUDA_CHECK(cudaGetDevice(&current_device));  // query the current device id
  if (current_device == device_id) {
    return;
  }
  // The call to cudaSetDevice must come before any calls to Get, which
  // may perform initialization using the GPU.
  CUDA_CHECK(cudaSetDevice(device_id));
  // Destroy the old handles
  if (Get().cublas_handle_) CUBLAS_CHECK(cublasDestroy(Get().cublas_handle_));
  if (Get().curand_generator_) {
    CURAND_CHECK(curandDestroyGenerator(Get().curand_generator_));
  }
  // Create new handles
  CUBLAS_CHECK(cublasCreate(&Get().cublas_handle_));
  CURAND_CHECK(curandCreateGenerator(&Get().curand_generator_,
      CURAND_RNG_PSEUDO_DEFAULT));
  // Reseed the curand generator
  CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(Get().curand_generator_,
      cluster_seedgen()));
}

// Print information about the current device
void Caffe::DeviceQuery() {
cudaDeviceProp prop;
int device;
if (cudaSuccess != cudaGetDevice(&device)) {
printf("No cuda device present.\n");
return;
}
  // For reference, CUDA_CHECK is defined in util/device_alternate.hpp as:
  //   #define CUDA_CHECK(condition) \
  //     /* Code block avoids redefinition of cudaError_t error */ \
  //     do { \
  //       cudaError_t error = condition; \
  //       CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \
  //     } while (0)
CUDA_CHECK(cudaGetDeviceProperties(&prop, device));
LOG(INFO) << "Device id: " << device;
LOG(INFO) << "Major revision number: " << prop.major;
LOG(INFO) << "Minor revision number: " << prop.minor;
LOG(INFO) << "Name: " << prop.name;
LOG(INFO) << "Total global memory: " << prop.totalGlobalMem;
LOG(INFO) << "Total shared memory per block: " << prop.sharedMemPerBlock;
LOG(INFO) << "Total registers per block: " << prop.regsPerBlock;
LOG(INFO) << "Warp size: " << prop.warpSize;
LOG(INFO) << "Maximum memory pitch: " << prop.memPitch;
LOG(INFO) << "Maximum threads per block: " << prop.maxThreadsPerBlock;
LOG(INFO) << "Maximum dimension of block: "
<< prop.maxThreadsDim[0] << ", " << prop.maxThreadsDim[1] << ", "
<< prop.maxThreadsDim[2];
LOG(INFO) << "Maximum dimension of grid: "
<< prop.maxGridSize[0] << ", " << prop.maxGridSize[1] << ", "
<< prop.maxGridSize[2];
LOG(INFO) << "Clock rate: " << prop.clockRate;
LOG(INFO) << "Total constant memory: " << prop.totalConstMem;
LOG(INFO) << "Texture alignment: " << prop.textureAlignment;
LOG(INFO) << "Concurrent copy and execution: "
<< (prop.deviceOverlap ? "Yes" : "No");
LOG(INFO) << "Number of multiprocessors: " << prop.multiProcessorCount;
LOG(INFO) << "Kernel execution timeout: "
<< (prop.kernelExecTimeoutEnabled ? "Yes" : "No");
return;
}

class Caffe::RNG::Generator {
 public:
  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}
  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}
  caffe::rng_t* rng() { return rng_.get(); }
 private:
  shared_ptr<caffe::rng_t> rng_;
};

Caffe::RNG::RNG() : generator_(new Generator()) { }

Caffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }

Caffe::RNG& Caffe::RNG::operator=(const RNG& other) {
  generator_.reset(other.generator_.get());
  return *this;
}

void* Caffe::RNG::generator() {
  return static_cast<void*>(generator_->rng());
}
// Map a cublas status code to a readable string
const char* cublasGetErrorString(cublasStatus_t error) {
switch (error) {
case CUBLAS_STATUS_SUCCESS:
return "CUBLAS_STATUS_SUCCESS";
case CUBLAS_STATUS_NOT_INITIALIZED:
return "CUBLAS_STATUS_NOT_INITIALIZED";
case CUBLAS_STATUS_ALLOC_FAILED:
return "CUBLAS_STATUS_ALLOC_FAILED";
case CUBLAS_STATUS_INVALID_VALUE:
return "CUBLAS_STATUS_INVALID_VALUE";
case CUBLAS_STATUS_ARCH_MISMATCH:
return "CUBLAS_STATUS_ARCH_MISMATCH";
case CUBLAS_STATUS_MAPPING_ERROR:
return "CUBLAS_STATUS_MAPPING_ERROR";
case CUBLAS_STATUS_EXECUTION_FAILED:
return "CUBLAS_STATUS_EXECUTION_FAILED";
case CUBLAS_STATUS_INTERNAL_ERROR:
return "CUBLAS_STATUS_INTERNAL_ERROR";
#if CUDA_VERSION >= 6000
case CUBLAS_STATUS_NOT_SUPPORTED:
return "CUBLAS_STATUS_NOT_SUPPORTED";
#endif
#if CUDA_VERSION >= 6050
case CUBLAS_STATUS_LICENSE_ERROR:
return "CUBLAS_STATUS_LICENSE_ERROR";
#endif
}
return "Unknown cublas status";
}
// Map a curand status code to a readable string
const char* curandGetErrorString(curandStatus_t error) {
switch (error) {
case CURAND_STATUS_SUCCESS:
return "CURAND_STATUS_SUCCESS";
case CURAND_STATUS_VERSION_MISMATCH:
return "CURAND_STATUS_VERSION_MISMATCH";
case CURAND_STATUS_NOT_INITIALIZED:
return "CURAND_STATUS_NOT_INITIALIZED";
case CURAND_STATUS_ALLOCATION_FAILED:
return "CURAND_STATUS_ALLOCATION_FAILED";
case CURAND_STATUS_TYPE_ERROR:
return "CURAND_STATUS_TYPE_ERROR";
case CURAND_STATUS_OUT_OF_RANGE:
return "CURAND_STATUS_OUT_OF_RANGE";
case CURAND_STATUS_LENGTH_NOT_MULTIPLE:
return "CURAND_STATUS_LENGTH_NOT_MULTIPLE";
case CURAND_STATUS_DOUBLE_PRECISION_REQUIRED:
return "CURAND_STATUS_DOUBLE_PRECISION_REQUIRED";
case CURAND_STATUS_LAUNCH_FAILURE:
return "CURAND_STATUS_LAUNCH_FAILURE";
case CURAND_STATUS_PREEXISTING_FAILURE:
return "CURAND_STATUS_PREEXISTING_FAILURE";
case CURAND_STATUS_INITIALIZATION_FAILED:
return "CURAND_STATUS_INITIALIZATION_FAILED";
case CURAND_STATUS_ARCH_MISMATCH:
return "CURAND_STATUS_ARCH_MISMATCH";
case CURAND_STATUS_INTERNAL_ERROR:
return "CURAND_STATUS_INTERNAL_ERROR";
}
return "Unknown curand status";
}
#endif // CPU_ONLY
} // namespace caffe
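Putting the pieces together, a typical entry point would wire all of this up at startup roughly as follows. This is a sketch of my own (device 0 and the overall structure are assumptions, not code copied from the Caffe tools), but every call in it is declared in the header above.

#include "caffe/common.hpp"

int main(int argc, char** argv) {
  // Parse gflags, set up glog and the failure signal handler.
  caffe::GlobalInit(&argc, &argv);

#ifndef CPU_ONLY
  caffe::Caffe::SetDevice(0);                 // GPU 0 chosen arbitrarily; recreates cublas/curand handles
  caffe::Caffe::set_mode(caffe::Caffe::GPU);
  caffe::Caffe::DeviceQuery();                // log the device properties shown above
#else
  caffe::Caffe::set_mode(caffe::Caffe::CPU);
#endif

  LOG(INFO) << "Running in "
            << (caffe::Caffe::mode() == caffe::Caffe::GPU ? "GPU" : "CPU")
            << " mode";
  return 0;
}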