Preface

In 3D games the camera comes up constantly, no matter which engine you use. It also touches on quite a few underlying concepts, so a solid grasp of the basics helps a lot.

How to use it

I used the camera once on a project a long time ago and have mostly forgotten the details, so I dug up the sample code in cpp-tests, picked a relatively simple example, and pasted the key parts below.

auto layer3D = Layer::create();
addChild(layer3D, 0);
_layer3D = layer3D;

_shader = GLProgram::createWithFilenames("Sprite3DTest/fog.vert", "Sprite3DTest/fog.frag");
_state = GLProgramState::create(_shader);

_sprite3D1 = Sprite3D::create("Sprite3DTest/teapot.c3b");
_sprite3D2 = Sprite3D::create("Sprite3DTest/teapot.c3b");
_sprite3D1->setGLProgramState(_state);
_sprite3D2->setGLProgramState(_state);

// pass the mesh's attributes to the shader
long offset = 0;
auto attributeCount = _sprite3D1->getMesh()->getMeshVertexAttribCount();
for (auto i = 0; i < attributeCount; i++) {
    auto meshattribute = _sprite3D1->getMesh()->getMeshVertexAttribute(i);
    _state->setVertexAttribPointer(s_attributeNames[meshattribute.vertexAttrib],
                                   meshattribute.size,
                                   meshattribute.type,
                                   GL_FALSE,
                                   _sprite3D1->getMesh()->getVertexSizeInBytes(),
                                   (GLvoid*)offset);
    offset += meshattribute.attribSizeBytes;
}

long offset1 = 0;
auto attributeCount1 = _sprite3D2->getMesh()->getMeshVertexAttribCount();
for (auto i = 0; i < attributeCount1; i++) {
    auto meshattribute = _sprite3D2->getMesh()->getMeshVertexAttribute(i);
    _state->setVertexAttribPointer(s_attributeNames[meshattribute.vertexAttrib],
                                   meshattribute.size,
                                   meshattribute.type,
                                   GL_FALSE,
                                   _sprite3D2->getMesh()->getVertexSizeInBytes(),
                                   (GLvoid*)offset1);
    offset1 += meshattribute.attribSizeBytes;
}

_state->setUniformVec4("u_fogColor", Vec4(0.5, 0.5, 0.5, 1.0));
_state->setUniformFloat("u_fogStart", 10);
_state->setUniformFloat("u_fogEnd", 60);
_state->setUniformInt("u_fogEquation", 0);

_layer3D->addChild(_sprite3D1);
_sprite3D1->setPosition3D(Vec3(0, 0, 0));
_sprite3D1->setScale(2.0f);
_sprite3D1->setRotation3D(Vec3(-90, 180, 0));

_layer3D->addChild(_sprite3D2);
_sprite3D2->setPosition3D(Vec3(0, 0, -20));
_sprite3D2->setScale(2.0f);
_sprite3D2->setRotation3D(Vec3(-90, 180, 0));

// create the camera only after all of the child nodes have been created
// (s below is the window size, e.g. auto s = Director::getInstance()->getWinSize();)
if (_camera == nullptr)
{
    // create a perspective camera
    _camera = Camera::createPerspective(60, (GLfloat)s.width / s.height, 1, 1000);
    // set the camera flag
    _camera->setCameraFlag(CameraFlag::USER1);
    _camera->setPosition3D(Vec3(0, 30, 40));
    // make the camera look at the target
    _camera->lookAt(Vec3(0, 0, 0), Vec3(0, 1, 0));
    // remember to add the camera to the scene (here, to the 3D layer)
    _layer3D->addChild(_camera);
}
_layer3D->setCameraMask(2);

This code simply creates a couple of 3D models and points a camera at them; the camera-creation block at the end is the part worth focusing on.
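One easy-to-miss detail is why setCameraMask(2) pairs with CameraFlag::USER1: a node is visible to a camera when the camera's flag ANDed with the node's camera mask is non-zero, and USER1 is 1 << 1 == 2. Below is a minimal sketch of that rule; the helper isSeenBy is hypothetical and not part of the engine.

#include "cocos2d.h"
USING_NS_CC;

// Sketch of the visibility rule, not engine code: a node is seen by a camera
// when (camera flag & node camera mask) != 0.
static bool isSeenBy(CameraFlag cameraFlag, unsigned short nodeCameraMask)
{
    return ((unsigned short)cameraFlag & nodeCameraMask) != 0;
}

// isSeenBy(CameraFlag::USER1,   2) -> true   (2 & 2)
// isSeenBy(CameraFlag::DEFAULT, 2) -> false  (1 & 2)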

Source code analysis

The English comments in the source already explain most of it, but I will still add some annotations of my own below.

The .h file:

#ifndef _CCCAMERA_H__
#define _CCCAMERA_H__

#include "2d/CCNode.h"
#include "3d/CCFrustum.h"
#include "renderer/CCQuadCommand.h"
#include "renderer/CCCustomCommand.h"
#include "renderer/CCFrameBuffer.h"

NS_CC_BEGIN

class Scene;
class CameraBackgroundBrush;

/**
* Note:
* Scene creates a default camera. And the default camera mask of Node is 1, therefore it can be seen by the default camera.
* During rendering the scene, it draws the objects seen by each camera in the added order except default camera. The default camera is the last one being drawn with.
* It's usually a good idea to render 3D objects in a separate camera.
* And set the 3d camera flag to CameraFlag::USER1 or anything else except DEFAULT. Dedicate The DEFAULT camera for UI, because it is rendered at last.
* You can change the camera order to get different result when depth test is not enabled.
* For each camera, transparent 3d sprite is rendered after opaque 3d sprite and other 2d objects.
*/
// The English note above already explains this well.
// Camera flag: every Node has a _cameraMask property; a Node can be seen by a camera
// when (camera _cameraFlag & node _cameraMask) is non-zero.
enum class CameraFlag
{
DEFAULT = 1,
USER1 = 1 << 1,
USER2 = 1 << 2,
USER3 = 1 << 3,
USER4 = 1 << 4,
USER5 = 1 << 5,
USER6 = 1 << 6,
USER7 = 1 << 7,
USER8 = 1 << 8,
};
/**
* Defines a camera .
*/
class CC_DLL Camera :public Node
{
friend class Scene;
friend class Director;
friend class EventDispatcher;
public:
/**
* The type of camera.
*/
// Camera type: perspective or orthographic
enum class Type
{
PERSPECTIVE = 1,
ORTHOGRAPHIC = 2
};
public:
/**
* Creates a perspective camera.
*
* @param fieldOfView The field of view for the perspective camera (normally in the range of 40-60 degrees).
* @param aspectRatio The aspect ratio of the camera (normally the width of the viewport divided by the height of the viewport).
* @param nearPlane The near plane distance.
* @param farPlane The far plane distance.
*/
static Camera* createPerspective(float fieldOfView, float aspectRatio, float nearPlane, float farPlane);
/**
* Creates an orthographic camera.
*
* @param zoomX The zoom factor along the X-axis of the orthographic projection (the width of the ortho projection).
* @param zoomY The zoom factor along the Y-axis of the orthographic projection (the height of the ortho projection).
* @param nearPlane The near plane distance.
* @param farPlane The far plane distance.
*/
static Camera* createOrthographic(float zoomX, float zoomY, float nearPlane, float farPlane);

/** create default camera, the camera type depends on Director::getProjection, the depth of the default camera is 0 */
static Camera* create();

/**
* Gets the type of camera.
*
* @return The camera type.
*/
Camera::Type getType() const { return _type; }

/** get & set Camera flag */
CameraFlag getCameraFlag() const { return (CameraFlag)_cameraFlag; }
void setCameraFlag(CameraFlag flag) { _cameraFlag = (unsigned short)flag; }

/**
* Make Camera looks at target
*
* @param target The target camera is point at
* @param up The up vector, usually it's Y axis
*/
virtual void lookAt(const Vec3& target, const Vec3& up = Vec3::UNIT_Y);

/**
* Gets the camera's projection matrix.
*
* @return The camera projection matrix.
*/
const Mat4& getProjectionMatrix() const;
/**
* Gets the camera's view matrix.
*
* @return The camera view matrix.
*/
const Mat4& getViewMatrix() const;

/** get view projection matrix */
const Mat4& getViewProjectionMatrix() const;

/* convert the specified point in 3D world-space coordinates into the screen-space coordinates.
*
* Origin point at left top corner in screen-space.
* @param src The world-space position.
* @return The screen-space position.
*/
Vec2 project(const Vec3& src) const;

/* convert the specified point in 3D world-space coordinates into the GL-screen-space coordinates.
*
* Origin point at left bottom corner in GL-screen-space.
* @param src The 3D world-space position.
* @return The GL-screen-space position.
*/
Vec2 projectGL(const Vec3& src) const;

/**
* Convert the specified point of screen-space coordinate into the 3D world-space coordinate.
*
* Origin point at left top corner in screen-space.
* @param src The screen-space position.
* @return The 3D world-space position.
*/
// The inverse of the conversion above.
Vec3 unproject(const Vec3& src) const;

/**
* Convert the specified point of GL-screen-space coordinate into the 3D world-space coordinate.
*
* Origin point at left bottom corner in GL-screen-space.
* @param src The GL-screen-space position.
* @return The 3D world-space position.
*/
// The inverse of the conversion above.
Vec3 unprojectGL(const Vec3& src) const;

/**
* Convert the specified point of screen-space coordinate into the 3D world-space coordinate.
*
* Origin point at left top corner in screen-space.
* @param size The window size to use.
* @param src The screen-space position.
* @param dst The 3D world-space position.
*/
void unproject(const Size& size, const Vec3* src, Vec3* dst) const;

/**
* Convert the specified point of GL-screen-space coordinate into the 3D world-space coordinate.
*
* Origin point at left bottom corner in GL-screen-space.
* @param size The window size to use.
* @param src The GL-screen-space position.
* @param dst The 3D world-space position.
*/
void unprojectGL(const Size& size, const Vec3* src, Vec3* dst) const;

/**
* Is this aabb visible in frustum
*/
bool isVisibleInFrustum(const AABB* aabb) const;

/**
* Get object depth towards camera
*/
float getDepthInView(const Mat4& transform) const;

/**
* set depth, camera with larger depth is drawn on top of camera with smaller depth, the depth of camera with CameraFlag::DEFAULT is 0, user defined camera is -1 by default
*/
void setDepth(int8_t depth);

/**
* get depth, camera with larger depth is drawn on top of camera with smaller depth, the depth of camera with CameraFlag::DEFAULT is 0, user defined camera is -1 by default
*/
int8_t getDepth() const { return _depth; }

/**
get rendered order
*/
int getRenderOrder() const;

/**
* Get the frustum's far plane.
*/
float getFarPlane() const { return _farPlane; }

/**
* Get the frustum's near plane.
*/
float getNearPlane() const { return _nearPlane; }

//override
virtual void onEnter() override;
virtual void onExit() override;

/**
* Get the visiting camera , the visiting camera shall be set on Scene::render
*/
static const Camera* getVisitingCamera() { return _visitingCamera; }

/**
* Get the default camera of the current running scene.
*/
static Camera* getDefaultCamera();
/**
Before rendering scene with this camera, the background need to be cleared. It clears the depth buffer with max depth by default. Use setBackgroundBrush to modify the default behavior
*/
void clearBackground();
/**
Apply the FBO, RenderTargets and viewport.
*/
void apply();
/**
Set FBO, which will attach several render target for the rendered result.
*/
void setFrameBufferObject(experimental::FrameBuffer* fbo);
/**
Set Viewport for camera.
*/
void setViewport(const experimental::Viewport& vp) { _viewport = vp; }

/**
* Whether or not the viewprojection matrix was updated since the last frame.
* @return True if the viewprojection matrix was updated since the last frame.
*/
bool isViewProjectionUpdated() const {return _viewProjectionUpdated;}

/**
* set the background brush. See CameraBackgroundBrush for more information.
* @param clearBrush Brush used to clear the background
*/
void setBackgroundBrush(CameraBackgroundBrush* clearBrush);

/**
* Get clear brush
*/
CameraBackgroundBrush* getBackgroundBrush() const { return _clearBrush; }

/**
Visits this node and its children, recursively sending their render commands.
renderer: the renderer to use
parentTransform: the parent node's affine transform matrix
parentFlags: the render flags
Reimplemented from Node.
*/
virtual void visit(Renderer* renderer, const Mat4 &parentTransform, uint32_t parentFlags) override;

bool isBrushValid();

CC_CONSTRUCTOR_ACCESS:
Camera();
~Camera();

/**
* Set the scene,this method shall not be invoke manually
*/
void setScene(Scene* scene);

/** set additional matrix for the projection matrix, it multiplies mat to projection matrix when called, used by WP8 */
void setAdditionalProjection(const Mat4& mat);

/** init camera (default, perspective, or orthographic) */
bool initDefault();
bool initPerspective(float fieldOfView, float aspectRatio, float nearPlane, float farPlane);
bool initOrthographic(float zoomX, float zoomY, float nearPlane, float farPlane);
void applyFrameBufferObject();
void applyViewport();
protected:
Scene* _scene; //Scene camera belongs to
Mat4 _projection;
mutable Mat4 _view;
mutable Mat4 _viewInv;
mutable Mat4 _viewProjection;
Vec3 _up;
Camera::Type _type;
float _fieldOfView;
float _zoom[2];
float _aspectRatio;
float _nearPlane;
float _farPlane;
mutable bool _viewProjectionDirty;
bool _viewProjectionUpdated; //Whether or not the viewprojection matrix was updated since the last frame.
unsigned short _cameraFlag; // camera flag
mutable Frustum _frustum; // camera frustum
mutable bool _frustumDirty;
int8_t _depth; //camera depth, the depth of camera with CameraFlag::DEFAULT flag is 0 by default, a camera with larger depth is drawn on top of camera with smaller depth
static Camera* _visitingCamera;

CameraBackgroundBrush* _clearBrush; //brush used to clear the background

experimental::Viewport _viewport;
experimental::FrameBuffer* _fbo;
protected:
static experimental::Viewport _defaultViewport;
public:
static const experimental::Viewport& getDefaultViewport() { return _defaultViewport; }
static void setDefaultViewport(const experimental::Viewport& vp) { _defaultViewport = vp; }
};

NS_CC_END

#endif // _CCCAMERA_H__
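Before looking at the implementation, here is a small usage sketch of the project/unproject pair documented above. It assumes the _camera created in the earlier example; the point values are illustrative.

// World space -> screen space (origin at the top-left corner of the screen).
Vec2 screenPos = _camera->project(Vec3(0.0f, 0.0f, 0.0f));

// Screen space -> world space. The z component selects the depth:
// 0.0f maps to the near plane, 1.0f to the far plane.
Vec3 nearPoint = _camera->unproject(Vec3(screenPos.x, screenPos.y, 0.0f));
Vec3 farPoint  = _camera->unproject(Vec3(screenPos.x, screenPos.y, 1.0f));

// A near/far pair like this is the usual starting point for building a touch-picking ray.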

The .cpp file:

#include "2d/CCCamera.h"
#include "2d/CCCameraBackgroundBrush.h"
#include "base/CCDirector.h"
#include "platform/CCGLView.h"
#include "2d/CCScene.h"
#include "renderer/CCRenderer.h"
#include "renderer/CCQuadCommand.h"
#include "renderer/CCGLProgramCache.h"
#include "renderer/ccGLStateCache.h"
#include "renderer/CCFrameBuffer.h"
#include "renderer/CCRenderState.h" NS_CC_BEGIN Camera* Camera::_visitingCamera = nullptr;
experimental::Viewport Camera::_defaultViewport; Camera* Camera::getDefaultCamera()
{
auto scene = Director::getInstance()->getRunningScene();
if(scene)
{
return scene->getDefaultCamera();
}

return nullptr;
}

Camera* Camera::create()
{
Camera* camera = new (std::nothrow) Camera();
camera->initDefault();
camera->autorelease();
camera->setDepth(0.f);

return camera;
}

Camera* Camera::createPerspective(float fieldOfView, float aspectRatio, float nearPlane, float farPlane)
{
auto ret = new (std::nothrow) Camera();
if (ret)
{
ret->initPerspective(fieldOfView, aspectRatio, nearPlane, farPlane);
ret->autorelease();
return ret;
}
CC_SAFE_DELETE(ret);
return nullptr;
}

Camera* Camera::createOrthographic(float zoomX, float zoomY, float nearPlane, float farPlane)
{
auto ret = new (std::nothrow) Camera();
if (ret)
{
ret->initOrthographic(zoomX, zoomY, nearPlane, farPlane);
ret->autorelease();
return ret;
}
CC_SAFE_DELETE(ret);
return nullptr;
}

Camera::Camera()
: _scene(nullptr)
, _viewProjectionDirty(true)
, _cameraFlag(1)
, _frustumDirty(true)
, _depth(-1)
, _fbo(nullptr)
{
_frustum.setClipZ(true);
_clearBrush = CameraBackgroundBrush::createDepthBrush(1.f);
_clearBrush->retain();
}

Camera::~Camera()
{
CC_SAFE_RELEASE_NULL(_fbo);
CC_SAFE_RELEASE(_clearBrush);
}

const Mat4& Camera::getProjectionMatrix() const
{
return _projection;
}
const Mat4& Camera::getViewMatrix() const
{
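// Rebuild the cached view matrix only when the camera's world transform has changed,
// detected by comparing the node-to-world matrix against the cached inverse view matrix.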
Mat4 viewInv(getNodeToWorldTransform());
static int count = sizeof(float) * 16;
if (memcmp(viewInv.m, _viewInv.m, count) != 0)
{
_viewProjectionDirty = true;
_frustumDirty = true;
_viewInv = viewInv;
_view = viewInv.getInversed();
}
return _view;
}
void Camera::lookAt(const Vec3& lookAtPos, const Vec3& up)
{
// camera->lookAt must be called after camera->setPosition3D, because it calls getPosition3D() internally.
// Normalize the up vector (the y direction).
Vec3 upv = up;
upv.normalize();

// Compute the basis vectors along the x, y and z directions.
Vec3 zaxis;
Vec3::subtract(this->getPosition3D(), lookAtPos, &zaxis);
zaxis.normalize();

Vec3 xaxis;
Vec3::cross(upv, zaxis, &xaxis);
xaxis.normalize();

Vec3 yaxis;
Vec3::cross(zaxis, xaxis, &yaxis);
yaxis.normalize();

// Build a rotation matrix from the basis vectors computed above.
Mat4 rotation;
rotation.m[0] = xaxis.x;
rotation.m[1] = xaxis.y;
rotation.m[2] = xaxis.z;
rotation.m[3] = 0;
rotation.m[4] = yaxis.x;
rotation.m[5] = yaxis.y;
rotation.m[6] = yaxis.z;
rotation.m[7] = 0;
rotation.m[8] = zaxis.x;
rotation.m[9] = zaxis.y;
rotation.m[10] = zaxis.z;
rotation.m[11] = 0;

/*
Convert the rotation matrix into a quaternion and use it to set the rotation in 3D space.
Make sure the quaternion is normalized.
*/
Quaternion quaternion;
Quaternion::createFromRotationMatrix(rotation,&quaternion);
quaternion.normalize();
setRotationQuat(quaternion);
}

const Mat4& Camera::getViewProjectionMatrix() const
{
getViewMatrix();
if (_viewProjectionDirty)
{
_viewProjectionDirty = false;
Mat4::multiply(_projection, _view, &_viewProjection);
}

return _viewProjection;
}

void Camera::setAdditionalProjection(const Mat4& mat)
{
_projection = mat * _projection;
getViewProjectionMatrix();
}

bool Camera::initDefault()
{
auto size = Director::getInstance()->getWinSize();
//create default camera
auto projection = Director::getInstance()->getProjection();
switch (projection)
{
case Director::Projection::_2D:
{
initOrthographic(size.width, size.height, -1024, 1024);
setPosition3D(Vec3(0.0f, 0.0f, 0.0f));
setRotation3D(Vec3(0.f, 0.f, 0.f));
break;
}
case Director::Projection::_3D:
{
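// The eye starts at the screen center, zeye units along +z, looking back at the screen-center point with +y up.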
float zeye = Director::getInstance()->getZEye();
initPerspective(60, (GLfloat)size.width / size.height, 10, zeye + size.height / 2.0f);
Vec3 eye(size.width/2, size.height/2.0f, zeye), center(size.width/2, size.height/2, 0.0f), up(0.0f, 1.0f, 0.0f);
setPosition3D(eye);
lookAt(center, up);
break;
}
default:
CCLOG("unrecognized projection");
break;
}
return true;
}

bool Camera::initPerspective(float fieldOfView, float aspectRatio, float nearPlane, float farPlane)
{
_fieldOfView = fieldOfView;
_aspectRatio = aspectRatio;
_nearPlane = nearPlane;
_farPlane = farPlane;
Mat4::createPerspective(_fieldOfView, _aspectRatio, _nearPlane, _farPlane, &_projection);
_viewProjectionDirty = true;
_frustumDirty = true;

return true;
}

bool Camera::initOrthographic(float zoomX, float zoomY, float nearPlane, float farPlane)
{
_zoom[0] = zoomX;
_zoom[1] = zoomY;
_nearPlane = nearPlane;
_farPlane = farPlane;
Mat4::createOrthographicOffCenter(0, _zoom[0], 0, _zoom[1], _nearPlane, _farPlane, &_projection);
_viewProjectionDirty = true;
_frustumDirty = true;

return true;
}

Vec2 Camera::project(const Vec3& src) const
{
Vec2 screenPos;

auto viewport = Director::getInstance()->getWinSize();

Vec4 clipPos;
getViewProjectionMatrix().transformVector(Vec4(src.x, src.y, src.z, 1.0f), &clipPos);

CCASSERT(clipPos.w != 0.0f, "clipPos.w can't be 0.0f!");
float ndcX = clipPos.x / clipPos.w;
float ndcY = clipPos.y / clipPos.w;

screenPos.x = (ndcX + 1.0f) * 0.5f * viewport.width;
screenPos.y = (1.0f - (ndcY + 1.0f) * 0.5f) * viewport.height;
return screenPos;
}

Vec2 Camera::projectGL(const Vec3& src) const
{
Vec2 screenPos;

auto viewport = Director::getInstance()->getWinSize();

Vec4 clipPos;
getViewProjectionMatrix().transformVector(Vec4(src.x, src.y, src.z, 1.0f), &clipPos);

CCASSERT(clipPos.w != 0.0f, "clipPos.w can't be 0.0f!");
float ndcX = clipPos.x / clipPos.w;
float ndcY = clipPos.y / clipPos.w;

screenPos.x = (ndcX + 1.0f) * 0.5f * viewport.width;
screenPos.y = (ndcY + 1.0f) * 0.5f * viewport.height;
return screenPos;
}

Vec3 Camera::unproject(const Vec3& src) const
{
Vec3 dst;
unproject(Director::getInstance()->getWinSize(), &src, &dst);
return dst;
}

Vec3 Camera::unprojectGL(const Vec3& src) const
{
Vec3 dst;
unprojectGL(Director::getInstance()->getWinSize(), &src, &dst);
return dst;
}

void Camera::unproject(const Size& viewport, const Vec3* src, Vec3* dst) const
{
CCASSERT(src && dst, "vec3 can not be null");

Vec4 screen(src->x / viewport.width, ((viewport.height - src->y)) / viewport.height, src->z, 1.0f);
screen.x = screen.x * 2.0f - 1.0f;
screen.y = screen.y * 2.0f - 1.0f;
screen.z = screen.z * 2.0f - 1.0f;

getViewProjectionMatrix().getInversed().transformVector(screen, &screen);
if (screen.w != 0.0f)
{
screen.x /= screen.w;
screen.y /= screen.w;
screen.z /= screen.w;
}

dst->set(screen.x, screen.y, screen.z);
}

void Camera::unprojectGL(const Size& viewport, const Vec3* src, Vec3* dst) const
{
CCASSERT(src && dst, "vec3 can not be null");

Vec4 screen(src->x / viewport.width, src->y / viewport.height, src->z, 1.0f);
screen.x = screen.x * 2.0f - 1.0f;
screen.y = screen.y * 2.0f - 1.0f;
screen.z = screen.z * 2.0f - 1.0f;

getViewProjectionMatrix().getInversed().transformVector(screen, &screen);
if (screen.w != 0.0f)
{
screen.x /= screen.w;
screen.y /= screen.w;
screen.z /= screen.w;
}

dst->set(screen.x, screen.y, screen.z);
}

bool Camera::isVisibleInFrustum(const AABB* aabb) const
{
if (_frustumDirty)
{
_frustum.initFrustum(this);
_frustumDirty = false;
}
return !_frustum.isOutOfFrustum(*aabb);
}

float Camera::getDepthInView(const Mat4& transform) const
{
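// Transform the object's world translation into the camera's view space and return its z component,
// negated because the camera looks down the -z axis; only the z row of the view matrix is needed.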
Mat4 camWorldMat = getNodeToWorldTransform();
const Mat4 &viewMat = camWorldMat.getInversed();
float depth = -(viewMat.m[2] * transform.m[12] + viewMat.m[6] * transform.m[13] + viewMat.m[10] * transform.m[14] + viewMat.m[14]);
return depth;
}

void Camera::setDepth(int8_t depth)
{
if (_depth != depth)
{
_depth = depth;
if (_scene)
{
//notify scene that the camera order is dirty
_scene->setCameraOrderDirty();
}
}
}

void Camera::onEnter()
{
if (_scene == nullptr)
{
auto scene = getScene();
if (scene)
{
setScene(scene);
}
}
Node::onEnter();
}

void Camera::onExit()
{
// remove this camera from scene
setScene(nullptr);
Node::onExit();
}

// Switch to the given scene: detach from the previous scene, then add this camera to the new scene's camera list.
void Camera::setScene(Scene* scene)
{
if (_scene != scene)
{
//remove old scene
if (_scene)
{
auto& cameras = _scene->_cameras;
auto it = std::find(cameras.begin(), cameras.end(), this);
if (it != cameras.end())
cameras.erase(it);
_scene = nullptr;
}
//set new scene
if (scene)
{
_scene = scene;
auto& cameras = _scene->_cameras;
auto it = std::find(cameras.begin(), cameras.end(), this);
if (it == cameras.end())
{
_scene->_cameras.push_back(this);
//notify scene that the camera order is dirty
_scene->setCameraOrderDirty();
}
}
}
}

void Camera::clearBackground()
{
if (_clearBrush)
{
_clearBrush->drawBackground(this);
}
}

void Camera::setFrameBufferObject(experimental::FrameBuffer *fbo)
{
CC_SAFE_RETAIN(fbo);
CC_SAFE_RELEASE_NULL(_fbo);
_fbo = fbo;
if(_scene)
{
_scene->setCameraOrderDirty();
}
}

void Camera::applyFrameBufferObject()
{
if(nullptr == _fbo)
{
experimental::FrameBuffer::applyDefaultFBO();
}
else
{
_fbo->applyFBO();
}
}

void Camera::apply()
{
applyFrameBufferObject();
applyViewport();
}

void Camera::applyViewport()
{
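// With no FBO the default pixel-based viewport is used; with an FBO, _viewport holds
// normalized [0,1] fractions that are scaled by the FBO's width and height below.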
if(nullptr == _fbo)
{
glViewport(getDefaultViewport()._left, getDefaultViewport()._bottom, getDefaultViewport()._width, getDefaultViewport()._height);
}
else
{
glViewport(_viewport._left * _fbo->getWidth(), _viewport._bottom * _fbo->getHeight(),
_viewport._width * _fbo->getWidth(), _viewport._height * _fbo->getHeight());
}
}

int Camera::getRenderOrder() const
{
int result(0);
if(_fbo)
{
result = _fbo->getFID()<<8;
}
else
{
result = 127 <<8;
}
result += _depth;
return result;
}

void Camera::visit(Renderer* renderer, const Mat4 &parentTransform, uint32_t parentFlags)
{
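// Record whether the camera's transform changed this frame; callers can query it via isViewProjectionUpdated().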
_viewProjectionUpdated = _transformUpdated;
return Node::visit(renderer, parentTransform, parentFlags);
}

void Camera::setBackgroundBrush(CameraBackgroundBrush* clearBrush)
{
CC_SAFE_RETAIN(clearBrush);
CC_SAFE_RELEASE(_clearBrush);
_clearBrush = clearBrush;
}

bool Camera::isBrushValid()
{
return _clearBrush != nullptr && _clearBrush->isValid();
}

NS_CC_END
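As a quick sanity check on getRenderOrder() and setDepth() above, here is the arithmetic for two cameras that render directly to the screen (no FBO); the numbers are illustrative, not engine output.

int base = 127 << 8;             // 32512, used when the camera has no FBO
int defaultOrder = base + 0;     // the CameraFlag::DEFAULT camera has depth 0  -> 32512
int userOrder    = base + (-1);  // a user-defined camera defaults to depth -1  -> 32511
// userOrder < defaultOrder, so the user camera is drawn first and the DEFAULT camera last,
// which matches the header note that the DEFAULT (UI) camera is rendered on top.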

That's all for now.
