OpenGL ES 2 Scaling and Translation
OpenGL ES Transformations with Gestures
Gestures: Intuitive, sophisticated and easy to implement!
In this tutorial, you’ll learn how to use gestures to control OpenGL ES transformations by building a sophisticated model viewer app for 3D objects.
For this app, you’ll take maximum advantage of the iPhone’s touchscreen to implement an incredibly intuitive interface. You’ll also learn a bit of 3D math and use this knowledge to master basic model manipulation.
The hard work has already been done for you, specifically in our tutorial series How To Export Blender Models to OpenGL ES, allowing you to concentrate on nothing but transformations and gestures. The aforementioned series is an excellent build-up to this tutorial, since you’ll be using virtually the same code and resources. If you missed it, you’ll also be fine if you’ve read our OpenGL ES 2.0 for iPhone or Beginning OpenGL ES 2.0 with GLKit tutorials.
Note: Since this is literally a “hands-on” tutorial that depends on gestures, you’ll definitely need an iOS device to fully appreciate the implementation. The iPhone/iPad Simulator can’t simulate all the gestures covered here.
Getting Started
First, download the starter pack for this tutorial.
As mentioned before, this is essentially the same project featured in our Blender to OpenGL ES tutorial series. However, the project has been refactored to present a neat and tidy GLKit View Controller class—MainViewController—that hides most of the OpenGL ES shader implementation and 3D model rendering.
Have a look at MainViewController.m to see how everything works, and then build and run. You should see the screen below:
The current model viewer is very simple, allowing you to view two different models in a fixed position. So far it’s not terribly interesting, which is why you’ll be adding the wow factor by implementing gesture recognizers!
Gesture Recognizers
Any new iPhone/iPad user will have marveled at the smooth gestures that allow you to navigate the OS and its apps, such as pinching to zoom or swiping to scroll. The 3D graphics world is definitely taking notice, since a lot of high-end software, including games, requires a three-button mouse or double thumbsticks to navigate their worlds. Touchscreen devices have changed all this and allow for new forms of input and expression. If you’re really forward-thinking, you may have already implemented gestures in your apps.
An Overview
Although we’re sure you’re familiar with them, here’s a quick overview of the four gesture recognizers you’ll implement in this tutorial:
Pan (One Finger)
Pan (Two Fingers)
Pinch
Rotation
The first thing you need to do is add them to your interface.
Adding Gesture Recognizers
Open MainStoryboard.storyboard and drag a Pan Gesture Recognizer from your Object library and drop it onto your GLKit View, as shown below:
Next, show the Assistant editor in Xcode with MainStoryboard.storyboard in the left window and MainViewController.m in the right window. Click on your Pan Gesture Recognizer and control+drag a connection from it to MainViewController.m to create an Action in the file. Enter pan for the Name of your new action and UIPanGestureRecognizer for the Type. Use the image below as a guide:
Repeat the process above for a Pinch Gesture Recognizer and a Rotation Gesture Recognizer. The Action for the former should have the Name pinch with Type UIPinchGestureRecognizer, while the latter should have the Name rotation with Type UIRotationGestureRecognizer. If you need help, use the image below:
Solution Inside: Adding Pinch and Rotation Gesture Recognizers
Revert Xcode back to your Standard editor view and open MainStoryboard.storyboard. Select your Pan Gesture Recognizer and turn your attention to the right sidebar. Click on the Attributes inspector tab and set the Maximum number of Touches to 2, since you’ll only be handling one-finger and two-finger pans.
Next, open MainViewController.m and add the following lines to the pan: action:
// Pan (1 Finger)
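As a rough sketch, a test implementation that distinguishes one- and two-finger pans with numberOfTouches (an assumption based on the two pan variants listed earlier) might look like this:

```objc
- (IBAction)pan:(UIPanGestureRecognizer *)sender
{
    // Pan (1 Finger)
    if(sender.numberOfTouches == 1)
    {
        NSLog(@"Pan (1 Finger)");
    }
    // Pan (2 Fingers)
    else if(sender.numberOfTouches == 2)
    {
        NSLog(@"Pan (2 Fingers)");
    }
}
```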
Similarly, add the following line to the pinch: action:
NSLog(@"Pinch");
And add the following to the rotation: action:
NSLog(@"Rotation");
As you might have guessed, these are simple console output statements to test your four new gestures, so let’s do just that: build and run! Perform all four gestures on your device and check the console to verify your actions.
Gesture Recognizer Data
Now let’s see some actual gesture data. Replace both NSLog() statements in pan: with:
CGPoint translation = [sender translationInView:sender.view];
At the beginning of every new pan, you set the touch point of the gesture (translation) as the origin (0.0, 0.0) for the event. While the event is active, you divide its reported coordinates by its total view size (width for x, height for y) to get a total range of 1.0 in each direction. For example, if the gesture event begins in the middle of the view, then its range will be: -0.5 ≤ x ≤ +0.5 from left to right and -0.5 ≤ y ≤ +0.5 from top to bottom.
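A sketch of that normalization inside pan:, assuming you divide the raw translation by the gesture view’s frame size (the log format is just an example):

```objc
CGPoint translation = [sender translationInView:sender.view];

// Normalize to the view size: a full-width drag maps to a range of 1.0
float x = translation.x/sender.view.frame.size.width;
float y = translation.y/sender.view.frame.size.height;
NSLog(@"Translation: %.2f, %.2f", x, y);
```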
Pop quiz! If the gesture event begins in the top-left corner of the view, what is its range?
Solution Inside: Pan Gesture Range
The pinch and rotation gestures are much easier to handle. Replace the NSLog() statement in pinch: with this:
float scale = [sender scale];
And replace the NSLog() statement in rotation: with the following:
float rotation = GLKMathRadiansToDegrees([sender rotation]);
At the beginning of every new pinch, the distance between your two fingers has a scale of 1.0. If you bring your fingers together, the scale of the gesture decreases for a zoom-out effect. If you move your fingers apart, the scale of the gesture increases for a zoom-in effect.
A new rotation gesture always begins at 0.0 radians, which you conveniently convert to degrees for this exercise with the function GLKMathRadiansToDegrees(). A clockwise rotation increases the reported angle, while a counterclockwise rotation decreases the reported angle.
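For reference, the two test handlers might look like the sketch below (the log format strings are assumptions):

```objc
- (IBAction)pinch:(UIPinchGestureRecognizer *)sender
{
    // 1.0 at the start of the gesture; < 1.0 zooms out, > 1.0 zooms in
    float scale = [sender scale];
    NSLog(@"Scale: %.2f", scale);
}

- (IBAction)rotation:(UIRotationGestureRecognizer *)sender
{
    // Reported in radians; positive is clockwise
    float rotation = GLKMathRadiansToDegrees([sender rotation]);
    NSLog(@"Rotation: %.2f degrees", rotation);
}
```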
Build and run! Once again, perform all four gestures on your device and check the console to verify your actions. You should see that pinching inward logs a decrease in the scale, rotating clockwise logs a positive angle and panning to the bottom-right logs a positive displacement.
Handling Your Transformations
With your gesture recognizers all set, you’ll now create a new class to handle your transformations. Click File\New\File… and choose the iOS\Cocoa Touch\Objective-C class template. Enter Transformations for the class and NSObject for the subclass. Make sure both checkboxes are unchecked, click Next and then click Create.
Open Transformations.h and replace the existing file contents with the following:
#import <GLKit/GLKit.h>
These are the main methods you’ll implement to control your model’s transformations. You’ll examine each in detail within their own sections of the tutorial, but for now they will mostly remain dummy implementations.
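Based on the methods used throughout the rest of this tutorial, a sketch of what Transformations.h declares might look like this (the exact parameter names are assumptions):

```objc
#import <GLKit/GLKit.h>

@interface Transformations : NSObject

// Designated initializer: depth into the scene, uniform scale,
// (x,y) translation and (x,y,z) rotation in degrees
- (id)initWithDepth:(float)z Scale:(float)s Translation:(GLKVector2)t Rotation:(GLKVector3)r;

// Called at the start of every new touch to conserve the previous transformations
- (void)start;

// Gesture handlers
- (void)scale:(float)s;
- (void)translate:(GLKVector2)t withMultiplier:(float)m;
- (void)rotate:(GLKVector3)r withMultiplier:(float)m;

// The resulting model-view matrix
- (GLKMatrix4)getModelViewMatrix;

@end
```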
Open Transformations.m and replace the existing file contents with the following:
#import "Transformations.h"
There are a few interesting things happening with _depth, so let’s take a closer look:
- _depth is a variable specific to Transformations which will determine the depth of your object in the scene.
- You assign the variable z to _depth in your initializer, and nowhere else.
- You position your model-view matrix at the (x,y) center of your view with the values (0.0, 0.0) and with a z-value of -_depth. You do this because, in OpenGL ES, the negative z-axis runs into the screen.
That’s all you need to render your model with an appropriate model-view matrix. :]
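A minimal sketch of the starting implementation, consistent with the points above (every method other than the initializer and getModelViewMatrix remains a dummy for now):

```objc
#import "Transformations.h"

@interface Transformations ()
{
    // Depth
    float _depth;
}
@end

@implementation Transformations

- (id)initWithDepth:(float)z Scale:(float)s Translation:(GLKVector2)t Rotation:(GLKVector3)r
{
    if(self = [super init])
    {
        // Depth: assigned here and nowhere else
        _depth = z;
    }
    return self;
}

- (void)start
{
}

- (void)scale:(float)s
{
}

- (void)translate:(GLKVector2)t withMultiplier:(float)m
{
}

- (void)rotate:(GLKVector3)r withMultiplier:(float)m
{
}

- (GLKMatrix4)getModelViewMatrix
{
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;

    // Center the model in (x,y) and push it -_depth units along z,
    // since the negative z-axis runs into the screen
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, 0.0f, 0.0f, -_depth);

    return modelViewMatrix;
}

@end
```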
Open MainViewController.m and import your new class by adding the following statement to the top of your file:
#import "Transformations.h"
Now add a property to access your new class, right below the @interface line:
@property (strong, nonatomic) Transformations* transformations;
Next, initialize transformations by adding the following lines to viewDidLoad:
// Initialize transformations
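A sketch of the full initialization—starting with a scale of 1.0f, no translation and no rotation, which matches the values you’ll tweak shortly:

```objc
// Initialize transformations
self.transformations = [[Transformations alloc] initWithDepth:5.0f
                                                         Scale:1.0f
                                                   Translation:GLKVector2Make(0.0f, 0.0f)
                                                      Rotation:GLKVector3Make(0.0f, 0.0f, 0.0f)];
```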
The only value doing anything here is the depth of 5.0f. You’re using this value because the projection matrix of your scene has near and far clipping planes of 0.1f and 10.0f, respectively (see the function calculateMatrices), thus placing your model right in the middle of the scene.
Locate the function calculateMatrices and replace the following lines:
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
With these:
GLKMatrix4 modelViewMatrix = [self.transformations getModelViewMatrix];
Build and run! Your starship is still there, but it appears to have shrunk!
Your new model-view matrix is now handled by transformations, which sets a depth of 5.0 units. Your previous model-view matrix had a depth of 2.5 units, meaning that your starship is now twice as far away. You could easily revert the depth, or you could play around with your starship’s scale…
The Scale Transformation
The first transformation you’ll implement is also the easiest: scale. Open Transformations.m and add the following variables inside the @interface extension at the top of your file:
// Scale
All of your transformations will have start and end values. The end value will be the one actually transforming your model-view matrix, while the start value will track the gesture’s event data.
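A sketch of those two variables, assuming plain float instance variables inside the extension:

```objc
// Scale
float _scaleStart;
float _scaleEnd;
```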
Next, add the following line to initWithDepth:Scale:Translation:Rotation:, inside the if statement:
// Scale
And add the following line to getModelViewMatrix, after you translate the model-view matrix—transformation order does matter, as you’ll learn later on:
modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, _scaleEnd, _scaleEnd, _scaleEnd);
With that line, you scale your model-view matrix uniformly in (x,y,z) space.
To test your new code, open MainViewController.m and locate the function viewDidLoad. Change the Scale: initialization of self.transformations from 1.0f to 2.0f, like so:
self.transformations = [[Transformations alloc] initWithDepth:5.0f Scale:2.0f Translation:GLKVector2Make(0.0f, 0.0f) Rotation:GLKVector3Make(0.0f, 0.0f, 0.0f)];
Build and run! Your starship will be twice as big as your last run and look a lot more proportional to the size of your scene.
Back in Transformations.m, add the following line to the scale: method:
_scaleEnd = s * _scaleStart;
As mentioned before, the starting scale value of a pinch gesture is 1.0, increasing with a zoom-in event and decreasing with a zoom-out event. You haven’t assigned a value to _scaleStart yet, so here’s a quick question: should it be 1.0? Or maybe s?
The answer is neither. If you assign either of those values to _scaleStart, then every time the user performs a new scale gesture, the model-view matrix will scale back to either 1.0 or s before scaling up or down. This will cause the model to suddenly contract or expand, creating a jittery experience. You want your model to conserve its latest scale so that the transformation is continuously smooth.
To make it so, add the following line to start:
_scaleStart = _scaleEnd;
You haven’t called start from anywhere yet, so let’s see where it belongs. Open MainViewController.m and add the following function at the bottom of your file, before the @end statement:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
touchesBegan:withEvent: is the first method to respond whenever your iOS device detects a touch on the screen, before the gesture recognizers kick in. Therefore, it’s the perfect place to call start and conserve your scale values.
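A minimal sketch of that override:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Conserve the current transformation values before a new gesture begins
    [self.transformations start];
}
```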
Next, locate the function pinch: and replace the NSLog() statement with:
[self.transformations scale:scale];
Build and run! Pinch the touchscreen to scale your model up and down. :D
That’s pretty exciting!
The Translation Transformation
Just like a scale transformation, a translation needs two variables to track start and end values. Open Transformations.m and add the following variables inside your @interface extension:
// Translation
Similarly, you only need to initialize _translationEnd in initWithDepth:Scale:Translation:Rotation:. Do that now:
// Translation
Scroll down to the function getModelViewMatrix and change the following line:
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, 0.0f, 0.0f, -_depth);
To this:
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, _translationEnd.x, _translationEnd.y, -_depth);
Next, add the following lines to the translate:withMultiplier: method:
// 1
Let’s see what’s happening here (a sketch of the full method follows the list):
- m is a multiplier that helps convert screen coordinates into OpenGL ES coordinates. It is defined when you call the function from MainViewController.m.
- dx and dy represent the rate of change of the current translation in x and y, relative to the latest position of _translationEnd. In screen coordinates, the y-axis is positive in the downwards direction and negative in the upwards direction. In OpenGL ES, the opposite is true. Therefore, you subtract the rate of change in y from _translationEnd.y.
- Finally, you update _translationEnd and _translationStart to reflect the new end and start positions, respectively.
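One plausible implementation consistent with the three points above (the use of GLKVector2MultiplyScalar to apply m, and the exact arithmetic, are assumptions):

```objc
- (void)translate:(GLKVector2)t withMultiplier:(float)m
{
    // 1: Convert the normalized screen values into OpenGL ES units
    t = GLKVector2MultiplyScalar(t, m);

    // 2: Rate of change relative to the latest gesture position;
    //    dy is subtracted because the screen's y-axis points down,
    //    while the OpenGL ES y-axis points up
    float dx = _translationEnd.x + (t.x - _translationStart.x);
    float dy = _translationEnd.y - (t.y - _translationStart.y);

    // 3: Update the end and start positions
    _translationEnd = GLKVector2Make(dx, dy);
    _translationStart = GLKVector2Make(t.x, t.y);
}
```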
As mentioned before, the starting translation value of a new pan gesture is (0.0, 0.0). That means all new translations will be relative to this origin point, regardless of where the model actually is in the scene. It also means the value assigned to _translationStart for every new pan gesture will always be the origin.
Add the following line to start:
_translationStart = GLKVector2Make(0.0f, 0.0f);
Everything is in place, so open MainViewController.m and locate your pan: function. Replace the NSLog() statement inside your first if conditional for a single touch with the following:
[self.transformations translate:GLKVector2Make(x, y) withMultiplier:5.0f];
Build and run! Good job—you can now move your starship around with the touch of a finger! (But not two.)
A Quick Math Lesson: Quaternions
Before you move onto the last transformation—rotation—you need to know a bit about quaternions. This lesson will thankfully be pretty quick, though, since GLKit provides an excellent math library to deal with quaternions.
Quaternions are a complex mathematical system with many applications, but for this tutorial you’ll only be concerned with their spatial rotation properties. The main advantage of quaternions in this respect is that they don’t suffer from gimbal lock, unlike Euler angles.
Euler angles are a common representation for rotations, usually in (x,y,z) form. When rotating an object in this space, there are many opportunities for two axes to align with each other. In these cases, one degree of freedom is lost since any change to either of the aligned axes applies the same rotation to the object being transformed—that is, the two axes become one. That is a gimbal lock, and it will cause unexpected results and jittery animations.
A gimbal lock, from Wikipedia.
One reason to prefer Euler angles to quaternions is that they are intrinsically easier to represent and to read. However, GLKQuaternion simplifies the complexity of quaternions and reduces a rotation to four simple steps:
- Create a quaternion that represents a rotation around an axis.
- For each (x,y,z) axis, multiply the resulting quaternion against a master quaternion.
- Derive the 4×4 matrix that performs an (x,y,z) rotation based on a quaternion.
- Calculate the product of the resulting matrix with the main model-view matrix.
You’ll be implementing these four simple steps shortly. :]
Quaternions and Euler angles are very deep subjects, so check out these summaries from CH Robotics if you wish to learn more: Understanding Euler Angles and Understanding Quaternions.
The Rotation Transformation: Overview
In this tutorial, you’ll use two different types of gesture recognizers to control your rotations: two-finger pan and rotation. The reason for this is that your iOS device doesn’t have a single gesture recognizer that reports three different types of values, one for each (x,y,z) axis. Think about the ones you’ve covered so far:
- Pinch produces a single float, perfect for a uniform scale across all three (x,y,z) axes.
- One-finger pan produces two values corresponding to movement along the x-axis and the y-axis, just like your translation implementation.
No gesture can accurately represent rotation in 3D space. Therefore, you must define your own rule for this transformation.
Rotation about the z-axis is very straightforward and intuitive with the rotation gesture, but rotation about the x-axis and/or y-axis is slightly more complicated. Thankfully, the two-finger pan gesture reports movement along both of these axes. With a little more effort, you can use it to represent a rotation.
Let’s start with the easier one first. :]
Z-Axis Rotation With the Rotation Gesture
Open Transformations.m and add the following variables inside your @interface extension:
// Rotation
This is slightly different than your previous implementations for scale and translation, but it makes sense given your new knowledge of quaternions. Before moving on, add the following variable just below:
// Vectors
As mentioned before, your quaternions will represent a rotation around an axis. This axis is actually a vector, since it specifies a direction—it’s not along z, it’s either front-facing or back-facing.
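A sketch of these variables, assuming a GLKVector3 for the start angles, a GLKQuaternion for the accumulated end rotation, and a GLKVector3 for the front-facing axis:

```objc
// Rotation
GLKVector3 _rotationStart;
GLKQuaternion _rotationEnd;

// Vectors
GLKVector3 _front;
```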
Complete the vector’s implementation by initializing it inside initWithDepth:Scale:Translation:Rotation: with the following line:
// Vectors
As you can see, the vector is front-facing because its direction is towards the screen.
Note: Previously, I mentioned that in OpenGL ES, negative z-values go into the screen. This is because OpenGL ES uses a right-handed coordinate system. GLKit, on the other hand (pun intended), uses the more conventional left-handed coordinate system.
Left-handed and right-handed coordinate systems, from Learn OpenGL ES
Next, add the following lines to initWithDepth:Scale:Translation:Rotation:, right after the code you just added above:
r.z = GLKMathDegreesToRadians(r.z);
These lines perform the first two steps of the quaternion rotation described earlier:
- You create a quaternion that represents a rotation around an axis by using GLKQuaternionMakeWithAngleAndVector3Axis().
- You multiply the resulting quaternion against a master quaternion using GLKQuaternionMultiply().
All calculations are performed with radians, hence the call to GLKMathDegreesToRadians(). With quaternions, a positive angle performs a counterclockwise rotation, so you send in the negative value of your angle: -r.z.
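A sketch of those initialization lines—starting the master quaternion from the identity quaternion is an assumption:

```objc
// Rotation
r.z = GLKMathDegreesToRadians(r.z);

// Start from the identity quaternion, then fold in a rotation of -r.z about the front vector
_rotationEnd = GLKQuaternionIdentity;
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.z, _front), _rotationEnd);
```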
To complete the initial setup, add the following line to getModelViewMatrix, right after you create modelViewMatrix:
GLKMatrix4 quaternionMatrix = GLKMatrix4MakeWithQuaternion(_rotationEnd);
Then, add the following line to your matrix calculations, after the translation and before the scale:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, quaternionMatrix);
These two lines perform the last two steps of the quaternion rotation described earlier:
- You derive the 4×4 matrix that performs an (x,y,z) rotation based on a quaternion, using GLKMatrix4MakeWithQuaternion().
- You calculate the product of the resulting matrix with the main model-view matrix using GLKMatrix4Multiply().
Note: The order of your transformations is not arbitrary. Imagine the following instructions given to two different people:
- Starting from point P: take n steps forward; turn to your left; then pretend to be a giant twice your size.
- Starting from point P: pretend to be a giant twice your size; turn to your left; then take n steps forward.
See the difference below:
Even though the instructions have the same steps, the two people end up at different points, P’1 and P’2. This is because Person 1 first walks (translation), then turns (rotation), then grows (scale), thus ending n paces in front of point P. With the other order, Person 2 first grows, then turns, then walks, thus taking giant-sized steps towards the left and ending 2n paces to the left of point P.
Open MainViewController.m and test your new code by changing the z-axis initialization angle of self.transformations to 180.0 inside viewDidLoad:
self.transformations = [[Transformations alloc] initWithDepth:5.0f Scale:2.0f Translation:GLKVector2Make(0.0f, 0.0f) Rotation:GLKVector3Make(0.0f, 0.0f, 180.0f)];
Build and run! You’ve caught your starship in the middle of a barrel roll.
After you’ve verified that this worked, revert the change, since you would rather have your app launch with the starship properly oriented.
The next step is to implement the rotation with your rotation gesture. Open Transformations.m and add the following lines to the rotate:withMultiplier: method:
float dz = r.z - _rotationStart.z;
This is a combination of your initialization code and your translation implementation. dz represents the rate of change of the current rotation about the z-axis. Then you simply update _rotationStart and _rotationEnd to reflect the new start and end positions, respectively.
There is no need to convert r.z to radians this time, since the rotation gesture’s values are already in radians. r.x and r.y will be passed along as 0.0, so you don’t need to worry about them too much—for now.
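Putting that together, a sketch of the z-axis portion of rotate:withMultiplier: might look like this (the negated angle mirrors the initializer above):

```objc
- (void)rotate:(GLKVector3)r withMultiplier:(float)m
{
    // Rate of change about the z-axis since the last reported angle
    float dz = r.z - _rotationStart.z;

    // Update the start angles and fold the new rotation into the master quaternion
    _rotationStart = GLKVector3Make(r.x, r.y, r.z);
    _rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-dz, _front), _rotationEnd);
}
```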
As you know, a new rotation gesture always begins with a starting value of 0.0. Therefore, all new rotations will be relative to this zero angle, regardless of your model’s actual orientation. Consequently, the value assigned to _rotationStart for every new rotation gesture will always be an angle of zero for each axis.
Add the following line to start:
_rotationStart = GLKVector3Make(0.0f, 0.0f, 0.0f);
To finalize this transformation implementation, open MainViewController.m and locate your rotation: function. Replace the NSLog() statement with the following:
[self.transformations rotate:GLKVector3Make(0.0f, 0.0f, rotation) withMultiplier:1.0f];
Since a full rotation gesture perfectly spans 360 degrees, there is no need to implement a multiplier here, but you’ll find it very useful in the next section.
Lastly, since your calculations are expecting radians, change the preceding line:
float rotation = GLKMathRadiansToDegrees([sender rotation]);
To this:
float rotation = [sender rotation];
Build and run! You can now do a full barrel roll. :D
X- and Y-Axis Rotation With the Two-Finger Pan Gesture
This implementation for rotation about the x-axis and/or y-axis is very similar to the one you just coded for rotation about the z-axis, so let’s start with a little challenge!
Add two new variables to Transformations.m, _right and _up, and initialize them inside your class initializer. These variables represent two 3D vectors, one pointing right and the other pointing up. Take a peek at the instructions below if you’re not sure how to implement them or if you want to verify your solution:
Solution Inside: Right and Up Vectors
For an added challenge, see if you can initialize your (x,y) rotation properly, just as you did for your z-axis rotation with the angle r.z and the vector _front. The correct code is available below if you need some help:
Solution Inside: Rotation Initialization
Good job! There’s not a whole lot of new code here, so let’s keep going. Still in Transformations.m, add the following lines to rotate:withMultiplier:, just above dz:
float dx = r.x - _rotationStart.x;
Once again, this should be familiar—you’re just repeating your z-axis logic for the x-axis and the y-axis. The next part is a little trickier, though…
Add the following lines to rotate:withMultiplier:, just after _rotationStart:
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx*m, _up), _rotationEnd);
For the z-axis rotation, your implementation rotated the ship about the z-axis and all was well, because that was the natural orientation of the gesture. Here, you face a different situation. If you look closely at the code above, you’ll notice that dx rotates about the _up vector (y-axis) and dy rotates about the _right vector (x-axis). The diagram below should help make this clear:
And you finally get to use m! A pan gesture doesn’t report its values in radians or even degrees, but rather as 2D points, so m serves as a converter from points to radians.
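Putting the two axes together, the pair of quaternion updates might look like this (the dy line is an assumption that mirrors the dx line shown above):

```objc
// dx (horizontal pan) spins the model about the up vector (y-axis);
// dy (vertical pan) spins it about the right vector (x-axis);
// m converts the pan's point deltas into radians
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx*m, _up), _rotationEnd);
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy*m, _right), _rotationEnd);
```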
Finish the implementation by opening MainViewController.m and replacing the contents of your current two-touch else if conditional inside pan: with the following:
const float m = GLKMathDegreesToRadians(0.5f);
The value of m dictates that for every touch-point moved in the x- and/or y-direction, your model rotates 0.5 degrees.
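A sketch of what that two-touch branch might contain—passing the raw pan values straight through, with the mapping of x to the up vector and y to the right vector handled inside Transformations (the exact call is an assumption):

```objc
// Pan (2 Fingers): rotate about the x- and y-axes
else if(sender.numberOfTouches == 2)
{
    // 0.5 degrees of rotation per screen point moved
    const float m = GLKMathDegreesToRadians(0.5f);

    // Raw pan values, in points
    CGPoint translation = [sender translationInView:sender.view];

    [self.transformations rotate:GLKVector3Make(translation.x, translation.y, 0.0f) withMultiplier:m];
}
```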
Build and run! Your model is fully rotational. Woo-hoo!
Nice one—that’s a pretty fancy model viewer you’ve built!
Locking Your Gestures/Transformations
You’ve fully implemented your transformations, but you may have noticed that sometimes the interface accidentally alternates between two transformations—for example, if you remove a finger too soon or perform an unclear gesture. To keep this from happening, you’ll now write some code to make sure your model viewer only performs one transformation for every continuous touch.
Open Transformations.h and add the following enumerator and property to your file, just below your @interface statement:
typedef enum TransformationState
state defines the current transformation state of your model viewer app, whether it be a scale (S_SCALE), translation (S_TRANSLATION) or rotation (S_ROTATION). S_NEW is a value that will be active whenever the user performs a new gesture.
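A sketch of that enumerator and property (the property attributes are an assumption):

```objc
typedef enum TransformationState
{
    S_NEW,
    S_SCALE,
    S_TRANSLATION,
    S_ROTATION
}
TransformationState;

@property (readwrite, nonatomic) TransformationState state;
```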
Open Transformations.m and add the following line to start:
self.state = S_NEW;
See if you can implement the rest of the transformation states in their corresponding methods.
Solution Inside: Transformation States
Piece of cake! Now open MainViewController.m and add a state conditional to each gesture. I’ll give you the pan: implementation for free and leave the other two as a challenge. :]
Modify pan: to look like this:
- (IBAction)pan:(UIPanGestureRecognizer *)sender
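A sketch of the complete state-aware pan:, combining the one-finger translation, the two-finger rotation and the new state checks:

```objc
- (IBAction)pan:(UIPanGestureRecognizer *)sender
{
    // Pan (1 Finger): translate, but only for a new or ongoing translation
    if((sender.numberOfTouches == 1) &&
       ((self.transformations.state == S_NEW) || (self.transformations.state == S_TRANSLATION)))
    {
        CGPoint translation = [sender translationInView:sender.view];
        float x = translation.x/sender.view.frame.size.width;
        float y = translation.y/sender.view.frame.size.height;
        [self.transformations translate:GLKVector2Make(x, y) withMultiplier:5.0f];
    }
    // Pan (2 Fingers): rotate, but only for a new or ongoing rotation
    else if((sender.numberOfTouches == 2) &&
            ((self.transformations.state == S_NEW) || (self.transformations.state == S_ROTATION)))
    {
        const float m = GLKMathDegreesToRadians(0.5f);
        CGPoint translation = [sender translationInView:sender.view];
        [self.transformations rotate:GLKVector3Make(translation.x, translation.y, 0.0f) withMultiplier:m];
    }
}
```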
Click below to see the solution for the other two—but give it your best shot first!
Solution Inside: Pinch and Rotation States
Build and run! See what cool poses you can set for your model and have fun playing with your new app.
Congratulations on completing this OpenGL ES Transformations With Gestures tutorial!
Where to Go From Here?
Here is the completed project with all of the code and resources from this tutorial. You can also find its repository on GitHub.
If you completed this tutorial, you’ve developed a sophisticated model viewer using the latest technologies from Apple for 3D graphics (GLKit and OpenGL ES) and touch-based user interaction (gesture recognizers). Most of these technologies are unique to mobile devices, so you’ve definitely learned enough to boost your mobile development credentials!
You should now understand a bit more about basic transformations—scale, translation and rotation—and how you can easily implement them with GLKit. You’ve learned how to add gesture recognizers to a View Controller and read their main event data. Furthermore, you’ve created a very slick app that you can expand into a useful portfolio tool for 3D artists. Challenge accepted? ;]
If you have any questions, comments or suggestions, feel free to join the discussion below!