While debugging the JNI build, every supporting piece loaded fine, but the one call that really matters, cv::solvePnP, simply could not be made to run inside JNI. After repeated attempts it still failed, so I reluctantly dropped it and wrote a function with equivalent behaviour to stand in for it.

The original function looked like this:

void rec3D::reconstruct3D(const vector<Point2f>& image_points, Mat& rvecs, Mat& tvecs)
{
    print_Matf( cam.intrinsic_Mat );
    print_Matf( cam.disCoeff );
    print_VecPointf( image_points );
    print_VecPointf( cam.object_points );

    cv::solvePnP(cam.object_points, image_points, cam.intrinsic_Mat, cam.disCoeff,
                 rvecs, tvecs, false, CV_P3P);

    print_Matf( cam.intrinsic_Mat );
    print_Matf( cam.disCoeff );
    print_VecPointf( image_points );
    print_VecPointf( cam.object_points );
    print_Matf( rvecs );
    print_Matf( tvecs );
}

Of course, the debug prints after the solvePnP call never get a chance to run, while the prints before it show values that are perfectly correct...

1. The multi-point perspective function cv::solvePnP

Reference link: http://blog.csdn.net/abc20002929/article/details/8520063

Wiki link: the "CV照相机标定和三维重建" page (Camera Calibration and 3D Reconstruction — detailed reference)

There are two ways to compute the multi-point perspective (PnP) problem, a discrete iterative method and an equation-solving method, implemented as cvPOSIT and solvePnP respectively.

Use case: given a set of 3D points on the object and the corresponding 2D image points, compute the pose, i.e. the rotation and the translation.

Inputs and outputs:

1. Both take a 3D point set and the corresponding 2D point set; for cvPOSIT the 3D points are wrapped inside a posit_object structure.

2. Both output a rotation and a translation vector.

Different form:

solvePnP additionally takes the camera intrinsics.

Different semantics:

Apart from a pile of type checks and conversions, solvePnP ultimately calls cvFindExtrinsicCameraParams2, which solves for the unknown extrinsics from the known intrinsics — an exact solution. cvPOSIT, by contrast, approximates the perspective projection model with an affine projection model and iterates toward an estimate; when the object's depth variation is large relative to its distance from the camera, the algorithm may fail to converge.
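To make the comparison concrete, here is a minimal sketch of a typical solvePnP call in the OpenCV 2.4 C++ API. The marker geometry, pixel coordinates and intrinsics below are made-up placeholders, not values from this project:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Minimal sketch: 4 known 3D points on the object, their detected 2D projections,
// and the camera intrinsics are enough to recover the pose.
void poseSketch()
{
    std::vector<cv::Point3f> objectPoints;            // model coordinates (e.g. a 10 cm square marker)
    objectPoints.push_back(cv::Point3f(0.0f, 0.0f, 0.0f));
    objectPoints.push_back(cv::Point3f(0.1f, 0.0f, 0.0f));
    objectPoints.push_back(cv::Point3f(0.1f, 0.1f, 0.0f));
    objectPoints.push_back(cv::Point3f(0.0f, 0.1f, 0.0f));

    std::vector<cv::Point2f> imagePoints;             // the matching pixel positions (placeholders)
    imagePoints.push_back(cv::Point2f(320.0f, 240.0f));
    imagePoints.push_back(cv::Point2f(400.0f, 238.0f));
    imagePoints.push_back(cv::Point2f(402.0f, 320.0f));
    imagePoints.push_back(cv::Point2f(318.0f, 322.0f));

    cv::Mat K = (cv::Mat_<double>(3, 3) << 600, 0, 320,  0, 600, 240,  0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;                               // outputs: Rodrigues rotation and translation
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec, false, CV_P3P);
}

With flags set to CV_P3P the call expects exactly four correspondences — the same constraint that shows up as CV_Assert( npoints == 4 ) in the source quoted below.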

2. Function prototypes

Finds an object pose from 3D-2D point correspondences.

solvePnP (http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp)

bool solvePnP( InputArray objectPoints, InputArray imagePoints,
               InputArray cameraMatrix, InputArray distCoeffs,
               OutputArray rvec, OutputArray tvec,
               bool useExtrinsicGuess=false, int flags=ITERATIVE )

cvPOSIT(http://www.opencv.org.cn/index.php/Cv%E7%85%A7%E7%9B%B8%E6%9C%BA%E5%AE%9A%E6%A0%87%E5%92%8C%E4%B8%89%E7%BB%B4%E9%87%8D%E5%BB%BA#POSIT)

Runs the POSIT algorithm.

void cvPOSIT( CvPOSITObject* posit_object, CvPoint2D32f* image_points,
              double focal_length, CvTermCriteria criteria,
              CvMatr32f rotation_matrix, CvVect32f translation_vector );
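For comparison, a rough sketch of how the POSIT path is driven through the legacy C API. The termination criteria, focal length and point buffers are placeholders, and I am assuming the declarations still sit with the calib3d C API in 2.4.x, so treat this as an outline rather than a drop-in snippet:

#include <opencv2/calib3d/calib3d.hpp>   // cvPOSIT lives in the legacy C API
#include <vector>

void positSketch(const std::vector<CvPoint3D32f>& modelPoints,
                 std::vector<CvPoint2D32f>& imagePoints,
                 double focalLength)                       // focal length in pixels (placeholder)
{
    // Build the internal model representation from the 3D points.
    CvPOSITObject* positObject =
        cvCreatePOSITObject(const_cast<CvPoint3D32f*>(&modelPoints[0]), (int)modelPoints.size());

    float rotation[9];        // 3x3 rotation matrix, row-major
    float translation[3];     // translation vector

    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1e-5);
    cvPOSIT(positObject, &imagePoints[0], focalLength, criteria, rotation, translation);

    cvReleasePOSITObject(&positObject);
}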

3. The OpenCV 2.4.10 source code

OpenCV reference manual, p. 392:

bool cv::solvePnP( InputArray _opoints, InputArray _ipoints,
                   InputArray _cameraMatrix, InputArray _distCoeffs,
                   OutputArray _rvec, OutputArray _tvec, bool useExtrinsicGuess, int flags )
{
    Mat opoints = _opoints.getMat(), ipoints = _ipoints.getMat();
    int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
    CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );

    _rvec.create(3, 1, CV_64F);
    _tvec.create(3, 1, CV_64F);
    Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();

    if (flags == CV_EPNP)
    {
        cv::Mat undistortedPoints;
        cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
        epnp PnP(cameraMatrix, opoints, undistortedPoints);

        cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
        PnP.compute_pose(R, tvec);
        cv::Rodrigues(R, rvec);
        return true;
    }
    else if (flags == CV_P3P)
    {
        CV_Assert( npoints == 4 );
        cv::Mat undistortedPoints;
        cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
        p3p P3Psolver(cameraMatrix);

        cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
        bool result = P3Psolver.solve(R, tvec, opoints, undistortedPoints);
        if (result)
            cv::Rodrigues(R, rvec);
        return result;
    }
    else if (flags == CV_ITERATIVE)
    {
        CvMat c_objectPoints = opoints, c_imagePoints = ipoints;
        CvMat c_cameraMatrix = cameraMatrix, c_distCoeffs = distCoeffs;
        CvMat c_rvec = _rvec.getMat(), c_tvec = _tvec.getMat();
        cvFindExtrinsicCameraParams2(&c_objectPoints, &c_imagePoints, &c_cameraMatrix,
                                     c_distCoeffs.rows*c_distCoeffs.cols ? &c_distCoeffs : 0,
                                     &c_rvec, &c_tvec, useExtrinsicGuess );
        return true;
    }
    else
        CV_Error(CV_StsBadArg, "The flags argument must be one of CV_ITERATIVE or CV_EPNP");
    return false;
}



The branch that actually matters here (CV_P3P):

else if (flags == CV_P3P)
{
    CV_Assert( npoints == 4 );
    cv::Mat undistortedPoints;
    cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
    p3p P3Psolver(cameraMatrix);

    cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
    bool result = P3Psolver.solve(R, tvec, opoints, undistortedPoints);
    if (result)
        cv::Rodrigues(R, rvec);
    return result;
}
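Once a pose comes back from this branch, a simple way to judge it is to reproject the model points with cv::projectPoints and compare against the measured pixels. The helper below is my own addition, not part of the OpenCV source; the reading of the return value (mean pixel error) is just the usual rule of thumb:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <cmath>
#include <vector>

// Reproject the 3D model points with the recovered pose and return the mean pixel error.
double reprojectionError(const std::vector<cv::Point3f> &objectPoints,
                         const std::vector<cv::Point2f> &imagePoints,
                         const cv::Mat &K, const cv::Mat &distCoeffs,
                         const cv::Mat &rvec, const cv::Mat &tvec)
{
    std::vector<cv::Point2f> projected;
    cv::projectPoints(objectPoints, rvec, tvec, K, distCoeffs, projected);

    double err = 0.0;
    for (size_t i = 0; i < projected.size(); ++i)
    {
        double dx = imagePoints[i].x - projected[i].x;
        double dy = imagePoints[i].y - projected[i].y;
        err += std::sqrt(dx*dx + dy*dy);
    }
    return err / projected.size();   // a mean error well under a pixel usually means the pose is sound
}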

4. The function mySolvePnP()

int rec3D::mySolvePnP(
    //vector<cv::Point3f> &_opoints, vector<cv::Point2f> &_ipoints,
    InputArray &_opoints, InputArray &_ipoints,
    //cv::Mat &intrinsic_Mat, cv::Mat &disCoeff,
    InputArray &_cameraMatrix, InputArray &_distCoeffs,
    //cv::Mat &rvecs, cv::Mat &tvecs,
    OutputArray &_rvec, OutputArray &_tvec,
    bool useExtrinsicGuess, int flags)
{
    cv::Mat opoints = _opoints.getMat(), ipoints = _ipoints.getMat();
    int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
    CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );

    _rvec.create(3, 1, CV_64F);
    _tvec.create(3, 1, CV_64F);
    cv::Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();

    {
        CV_Assert( npoints == 4 );
        cv::Mat undistortedPoints;
        cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
        p3p P3Psolver(cameraMatrix);

        cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
        bool result = P3Psolver.solve(R, tvec, opoints, undistortedPoints);
        if (result)
            cv::Rodrigues(R, rvec);
        return result;
    }
    return true;
}
This version is not finished yet and still needs work...
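Before the working version, one detail worth pinning down is the vector-to-Mat conversion it relies on: constructing a cv::Mat from a vector of points with copyData=true yields an N x 1 multi-channel matrix, and checkVector() is how solvePnP counts the points (the line flagged as "the error source" below). A small sketch of the expected behaviour:

#include <opencv2/core/core.hpp>
#include <vector>

void checkVectorSketch()
{
    std::vector<cv::Point3f> obj(4);           // four model points (values omitted)
    std::vector<cv::Point2f> img(4);           // four detected image points

    cv::Mat opoints(obj, true);                // deep copy -> 4 x 1, type CV_32FC3
    cv::Mat ipoints(img, true);                // deep copy -> 4 x 1, type CV_32FC2

    int n3 = opoints.checkVector(3, CV_32F);   // 4 when layout and depth match, -1 otherwise
    int n2 = ipoints.checkVector(2, CV_32F);   // 4 as well; solvePnP asserts the two counts agree
    (void)n3; (void)n2;
}

If either count came back as -1 inside the JNI build, the CV_Assert right after it would be the first thing to fire.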

The JNI function that actually runs:

int rec3D::mySolvePnP(
    vector<cv::Point3f> &_opoints, const vector<cv::Point2f> &_ipoints,
    //InputArray &_opoints, InputArray &_ipoints,
    cv::Mat &intrinsic_Mat, cv::Mat &disCoeff,
    //InputArray &_cameraMatrix, InputArray &_distCoeffs,
    cv::Mat &_rvecs, cv::Mat &_tvecs,
    //OutputArray &_rvec, OutputArray &_tvec,
    bool useExtrinsicGuess, int flags)
{
    cv::Mat opoints = Mat(_opoints, true);   // was: _opoints.getMat();
    cv::Mat ipoints = Mat(_ipoints, true);   // was: _ipoints.getMat();

    int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));  // The error source! wishchin!
    CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );

    //_rvecs = new Mat(3, 1, CV_64F);   // _rvec.create(3, 1, CV_64F);
    //_tvecs = new Mat(3, 1, CV_64F);   // _tvec.create(3, 1, CV_64F);
    //cv::Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
    cv::Mat cameraMatrix = intrinsic_Mat.clone();
    cv::Mat distCoeffs = disCoeff.clone();

    //{
    CV_Assert( npoints == 4 );
    cv::Mat undistortedPoints(ipoints.rows, ipoints.cols, ipoints.type());  // undistortedPoints is the output parameter!

    // 1. modified function
    undistortPointsMy(ipoints, undistortedPoints, cameraMatrix, distCoeffs);  // was: undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);

    //p3pMy P3Psolver = new p3pMy(cameraMatrix);
    p3pMy P3Psolver(cameraMatrix);
    //cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
    //cv::Mat R = _rvecs.clone();
    //cv::Mat rvec = _rvecs.clone();
    //cv::Mat tvec = _tvecs.clone();
    cv::Mat R;

    // 2. modified function
    //bool result = P3Psolver.solve(R, tvec, opoints, undistortedPoints);
    bool result = P3Psolver.solve(_rvecs, _tvecs, opoints, undistortedPoints);
    if (result)
        cv::Rodrigues(_rvecs, R);   // was: cv::Rodrigues(R, rvec);
    //return result;
    //}
    return true;
}

//void rec3D::undistortPointsMy( InputArray _src, OutputArray _dst,
//                               InputArray _cameraMatrix,
//                               InputArray _distCoeffs,
//                               InputArray _Rmat,
//                               InputArray _Pmat )
void rec3D::undistortPointsMy(
    cv::Mat &_src, cv::Mat &_dst,
    cv::Mat &_cameraMatrix,
    cv::Mat &_distCoeffs)
{
    cv::Mat src = _src;                     // was: _src.getMat();
    cv::Mat cameraMatrix = _cameraMatrix;   // was: _cameraMatrix.getMat();
    cv::Mat distCoeffs = _distCoeffs;       // was: _distCoeffs.getMat();  // , R = _Rmat.getMat(), P = _Pmat.getMat();
    cv::Mat R;
    cv::Mat P;

    CV_Assert( src.isContinuous() && (src.depth() == CV_32F || src.depth() == CV_64F) &&
               ((src.rows == 1 && src.channels() == 2) || src.cols*src.channels() == 2) );

    //_dst.create(src.size(), src.type(), -1, true);
    Mat dst = _dst;  // was: _dst.getMat();  // watch out: the dst matrix is passed around by reference the whole way!

    CvMat _csrc = src, _cdst = dst, _ccameraMatrix = cameraMatrix;
    CvMat matR, matP, _cdistCoeffs, *pR = 0, *pP = 0, *pD = 0;
    if( R.data )
        pR = &(matR = R);
    if( P.data )
        pP = &(matP = P);
    if( distCoeffs.data )
        pD = &(_cdistCoeffs = distCoeffs);

    // again: the dst matrix travels by reference
    //cvUndistortPoints(&_csrc, &_cdst, &_ccameraMatrix, pD, pR, pP);
    cvUndistortPointsMyRf(&_csrc, &_cdst, &_ccameraMatrix, pD, pR, pP);
}  // the logic here got messy!!!
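What the per-point loop in the next function (and in the copy of it further down) does is invert the standard radial/tangential distortion model by fixed-point iteration. As a reading aid, here is a sketch of the forward model it inverts; the k[] layout (k[0]=k1, k[1]=k2, k[2]=p1, k[3]=p2, k[4]=k3, k[5..7]=k4..k6) follows the usual OpenCV convention, and icdist inside the loop is simply the reciprocal of the radial factor computed here:

// Forward distortion model (normalized coordinates in, distorted normalized coordinates out).
// The iterative loop in cvUndistortPointsMy runs this relation backwards a few times,
// using icdist = 1/radial and subtracting the tangential terms.
void distortNormalized(double x, double y, const double k[8], double &xd, double &yd)
{
    double r2 = x*x + y*y;
    double radial = (1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2)    // 1 + k1*r^2 + k2*r^4 + k3*r^6
                  / (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2);   // 1 + k4*r^2 + k5*r^4 + k6*r^6
    double deltaX = 2*k[2]*x*y + k[3]*(r2 + 2*x*x);          // tangential distortion (p1, p2)
    double deltaY = k[2]*(r2 + 2*y*y) + 2*k[3]*x*y;
    xd = x*radial + deltaX;
    yd = y*radial + deltaY;
}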
void rec3D::cvUndistortPointsMy(
    const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatrix, const CvMat* _distCoeffs,
    const CvMat* matR, const CvMat* matP )
{
    double A[3][3], RR[3][3], k[8] = {0,0,0,0,0,0,0,0}, fx, fy, ifx, ify, cx, cy;
    CvMat matA = cvMat(3, 3, CV_64F, A), _Dk;
    CvMat _RR = cvMat(3, 3, CV_64F, RR);
    const CvPoint2D32f* srcf;
    const CvPoint2D64f* srcd;
    CvPoint2D32f* dstf;
    CvPoint2D64f* dstd;
    int stype, dtype;
    int sstep, dstep;
    int i, j, n, iters = 1;

    CV_Assert( CV_IS_MAT(_src) && CV_IS_MAT(_dst) &&
               (_src->rows == 1 || _src->cols == 1) &&
               (_dst->rows == 1 || _dst->cols == 1) &&
               _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 &&
               (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) &&
               (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2) );

    CV_Assert( CV_IS_MAT(_cameraMatrix) &&
               _cameraMatrix->rows == 3 && _cameraMatrix->cols == 3 );

    cvConvert( _cameraMatrix, &matA );  // this line matters for drawing the 3D axes, even though it looks useless!

    if( _distCoeffs )
    {
        CV_Assert( CV_IS_MAT(_distCoeffs) &&
                   (_distCoeffs->rows == 1 || _distCoeffs->cols == 1) &&
                   (_distCoeffs->rows*_distCoeffs->cols == 4 ||
                    _distCoeffs->rows*_distCoeffs->cols == 5 ||
                    _distCoeffs->rows*_distCoeffs->cols == 8) );

        _Dk = cvMat( _distCoeffs->rows, _distCoeffs->cols,
                     CV_MAKETYPE(CV_64F, CV_MAT_CN(_distCoeffs->type)), k);
        cvConvert( _distCoeffs, &_Dk );
        iters = 5;
    }

    if( matR )
    {
        CV_Assert( CV_IS_MAT(matR) && matR->rows == 3 && matR->cols == 3 );
        cvConvert( matR, &_RR );
    }
    else
        cvSetIdentity(&_RR);

    if( matP )
    {
        double PP[3][3];
        CvMat _P3x3, _PP = cvMat(3, 3, CV_64F, PP);
        CV_Assert( CV_IS_MAT(matP) && matP->rows == 3 && (matP->cols == 3 || matP->cols == 4) );
        cvConvert( cvGetCols(matP, &_P3x3, 0, 3), &_PP );
        cvMatMul( &_PP, &_RR, &_RR );
    }

    srcf = (const CvPoint2D32f*)_src->data.ptr;
    srcd = (const CvPoint2D64f*)_src->data.ptr;
    dstf = (CvPoint2D32f*)_dst->data.ptr;
    dstd = (CvPoint2D64f*)_dst->data.ptr;
    stype = CV_MAT_TYPE(_src->type);
    dtype = CV_MAT_TYPE(_dst->type);
    sstep = _src->rows == 1 ? 1 : _src->step/CV_ELEM_SIZE(stype);
    dstep = _dst->rows == 1 ? 1 : _dst->step/CV_ELEM_SIZE(dtype);
    n = _src->rows + _src->cols - 1;

    fx = A[0][0];
    fy = A[1][1];
    ifx = 1./fx;
    ify = 1./fy;
    cx = A[0][2];
    cy = A[1][2];

    for( i = 0; i < n; i++ )
    {
        double x, y, x0, y0;
        if( stype == CV_32FC2 )
        {
            x = srcf[i*sstep].x;
            y = srcf[i*sstep].y;
        }
        else
        {
            x = srcd[i*sstep].x;
            y = srcd[i*sstep].y;
        }

        x0 = x = (x - cx)*ifx;
        y0 = y = (y - cy)*ify;

        // compensate distortion iteratively
        for( j = 0; j < iters; j++ )
        {
            double r2 = x*x + y*y;
            double icdist = (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2)/(1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2);
            double deltaX = 2*k[2]*x*y + k[3]*(r2 + 2*x*x);
            double deltaY = k[2]*(r2 + 2*y*y) + 2*k[3]*x*y;
            x = (x0 - deltaX)*icdist;
            y = (y0 - deltaY)*icdist;
        }

        double xx = RR[0][0]*x + RR[0][1]*y + RR[0][2];
        double yy = RR[1][0]*x + RR[1][1]*y + RR[1][2];
        double ww = 1./(RR[2][0]*x + RR[2][1]*y + RR[2][2]);
        x = xx*ww;
        y = yy*ww;

        if( dtype == CV_32FC2 )
        {
            dstf[i*dstep].x = (float)x;
            dstf[i*dstep].y = (float)y;
        }
        else
        {
            dstd[i*dstep].x = x;
            dstd[i*dstep].y = y;
        }
    }
}

void rec3D::cvUndistortPointsMyRf(
    const CvMat* _src, CvMat* _dst,
    const CvMat* _cameraMatrix,
    const CvMat* _distCoeffs,
    const CvMat* matR,
    const CvMat* matP )
{
    double A[3][3], RR[3][3];
    double k[8] = {0,0,0,0,0,0,0,0};
    double fx, fy, ifx, ify, cx, cy;

    CvMat matA = cvMat(3, 3, CV_64F, A);
    CvMat _Dk;
    CvMat _RR = cvMat(3, 3, CV_64F, RR);

    const CvPoint2D32f* srcf;
    const CvPoint2D64f* srcd;
    CvPoint2D32f* dstf;
    CvPoint2D64f* dstd;

    int stype, dtype;
    int sstep, dstep;
    int i, j, n, iters = 1;

    CV_Assert(
        CV_IS_MAT(_src) && CV_IS_MAT(_dst) &&
        (_src->rows == 1 || _src->cols == 1) &&
        (_dst->rows == 1 || _dst->cols == 1) &&
        _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 &&
        (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) &&
        (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2) );

    CV_Assert( CV_IS_MAT(_cameraMatrix) &&
               _cameraMatrix->rows == 3 && _cameraMatrix->cols == 3 );

    // matrix-to-matrix conversion: cvConvert( src, dst );
    //cvConvert( _cameraMatrix, &matA );  // this line matters for drawing the 3D axes, even though it looks useless!
    cvConvertMy( _cameraMatrix, &matA );

    if( _distCoeffs )
    {
        CV_Assert( CV_IS_MAT(_distCoeffs) &&
                   (_distCoeffs->rows == 1 || _distCoeffs->cols == 1) &&
                   (_distCoeffs->rows*_distCoeffs->cols == 4 ||
                    _distCoeffs->rows*_distCoeffs->cols == 5 ||
                    _distCoeffs->rows*_distCoeffs->cols == 8) );

        _Dk = cvMat( _distCoeffs->rows, _distCoeffs->cols,
                     CV_MAKETYPE(CV_64F, CV_MAT_CN(_distCoeffs->type)), k);
        //cvConvert( _distCoeffs, &_Dk );
        cvConvertMy( _distCoeffs, &_Dk );
        iters = 5;
    }

    if( matR )
    {
        CV_Assert( CV_IS_MAT(matR) && matR->rows == 3 && matR->cols == 3 );
        //cvConvert( matR, &_RR );
        cvConvertMy( matR, &_RR );
    }
    else
        cvSetIdentity(&_RR);

    if( matP )
    {
        double PP[3][3];
        CvMat _P3x3, _PP = cvMat(3, 3, CV_64F, PP);
        CV_Assert( CV_IS_MAT(matP) && matP->rows == 3 && (matP->cols == 3 || matP->cols == 4) );
        //cvConvert( cvGetCols(matP, &_P3x3, 0, 3), &_PP );
        cvConvertMy( cvGetCols(matP, &_P3x3, 0, 3), &_PP );
        cvMatMul( &_PP, &_RR, &_RR );
    }

    srcf = (const CvPoint2D32f*)_src->data.ptr;
    srcd = (const CvPoint2D64f*)_src->data.ptr;
    // dst is written through these pointers
    dstf = (CvPoint2D32f*)_dst->data.ptr;
    dstd = (CvPoint2D64f*)_dst->data.ptr;
    stype = CV_MAT_TYPE(_src->type);
    dtype = CV_MAT_TYPE(_dst->type);
    sstep = _src->rows == 1 ? 1 : _src->step/CV_ELEM_SIZE(stype);
    dstep = _dst->rows == 1 ? 1 : _dst->step/CV_ELEM_SIZE(dtype);
    n = _src->rows + _src->cols - 1;

    fx = A[0][0];
    fy = A[1][1];
    ifx = 1./fx;
    ify = 1./fy;
    cx = A[0][2];
    cy = A[1][2];

    for( i = 0; i < n; i++ )
    {
        double x, y, x0, y0;
        if( stype == CV_32FC2 )
        {
            x = srcf[i*sstep].x;
            y = srcf[i*sstep].y;
        }
        else
        {
            x = srcd[i*sstep].x;
            y = srcd[i*sstep].y;
        }

        x0 = x = (x - cx)*ifx;
        y0 = y = (y - cy)*ify;

        // compensate distortion iteratively
        for( j = 0; j < iters; j++ )
        {
            double r2 = x*x + y*y;
            double icdist = (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2)/(1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2);
            double deltaX = 2*k[2]*x*y + k[3]*(r2 + 2*x*x);
            double deltaY = k[2]*(r2 + 2*y*y) + 2*k[3]*x*y;
            x = (x0 - deltaX)*icdist;
            y = (y0 - deltaY)*icdist;
        }

        double xx = RR[0][0]*x + RR[0][1]*y + RR[0][2];
        double yy = RR[1][0]*x + RR[1][1]*y + RR[1][2];
        double ww = 1./(RR[2][0]*x + RR[2][1]*y + RR[2][2]);
        x = xx*ww;
        y = yy*ww;

        if( dtype == CV_32FC2 )
        {
            dstf[i*dstep].x = (float)x;
            dstf[i*dstep].y = (float)y;
        }
        else
        {
            dstd[i*dstep].x = x;
            dstd[i*dstep].y = y;
        }
    }
}

//bool rec3D::cvConvertMy(const CvMat* _cameraMatrix, double matA[][3] )
//{
//    return true;
//}

bool rec3D::cvConvertMy(const CvMat* Mat, CvMat* MatA )
{
    int C = Mat->step / Mat->cols / sizeof(Mat->type);
    for (int row = 0; row < Mat->rows && row < MatA->rows; row++)
    {
        float* ptr  = (float*)(Mat->data.ptr  + row * Mat->step);    // start of row `row` in the source
        float* ptrD = (float*)(MatA->data.ptr + row * MatA->step);   // start of row `row` in the destination

        for (int col = 0; col < Mat->cols && col < MatA->cols; col++)
        {
            for (int i = 0; i < C; ++i)
            {
                *(ptrD + C*col + i) = *(ptr + C*col + i);
            }
            //*(ptrD+3*col)   = *(ptr+3*col);    //cout<<*(ptr+3*col)<<endl;
            //*(ptrD+3*col+1) = *(ptr+3*col+1);  //cout<<*(ptr+3*col+1)<<endl;
            //*(ptrD+3*col+2) = *(ptr+3*col+2);  //cout<<*(ptr+3*col+2)<<endl;
        }
    }
    return true;
}
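With all the pieces in place, the original reconstruct3D() can be rewired to call the replacement instead of cv::solvePnP. A sketch of how that would look, assuming the same cam members and print helpers as the function at the top of this post:

void rec3D::reconstruct3D(const vector<Point2f>& image_points, Mat& rvecs, Mat& tvecs)
{
    // Same inputs as before; the pose now comes from the hand-rolled P3P path instead of cv::solvePnP.
    mySolvePnP(cam.object_points, image_points,
               cam.intrinsic_Mat, cam.disCoeff,
               rvecs, tvecs,
               false, CV_P3P);

    print_Matf( rvecs );
    print_Matf( tvecs );
}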

Done!
