The GPUImageFramebuffer class manages framebuffer objects (FBOs): it handles their creation and destruction, and reads back framebuffer contents.

  Properties

  @property(readonly) CGSize size

  Description: read-only property; the implementation sets it to the size of the buffer.

 

  @property(readonly) GPUTextureOptions textureOptions

  Description: the texture options (filtering, wrapping, and format settings).

 

  @property(readonly) GLuint texture

  Description: the OpenGL texture managed by this framebuffer.

 

  @property(readonly) BOOL missingFramebuffer

  Description: indicates whether this object has no backing framebuffer (i.e., it wraps only a texture).

  

  Methods

   - (id)initWithSize:(CGSize)framebufferSize

    Description: creates a framebuffer object of size framebufferSize using default texture options.

    Parameter: framebufferSize, the size of the framebuffer.

    Returns: the newly created framebuffer object.

    Implementation

- (id)initWithSize:(CGSize)framebufferSize;
{
    GPUTextureOptions defaultTextureOptions;
    defaultTextureOptions.minFilter = GL_LINEAR;
    defaultTextureOptions.magFilter = GL_LINEAR;
    defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.internalFormat = GL_RGBA;
    defaultTextureOptions.format = GL_BGRA;
    defaultTextureOptions.type = GL_UNSIGNED_BYTE;

    if (!(self = [self initWithSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:NO]))
    {
        return nil;
    }

    return self;
}

 

     - (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture

    Description: creates a framebuffer object of size framebufferSize with the given texture options.

    Parameters: framebufferSize is the size of the framebuffer; fboTextureOptions is the detailed texture configuration; onlyGenerateTexture specifies whether to create only a texture, without a framebuffer object.

    Returns: the newly created framebuffer object.

    Implementation

- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    _textureOptions = fboTextureOptions;
    _size = framebufferSize;
    framebufferReferenceCount = 0;
    referenceCountingDisabled = NO;
    _missingFramebuffer = onlyGenerateTexture;

    if (_missingFramebuffer)
    {
        runSynchronouslyOnVideoProcessingQueue(^{
            [GPUImageContext useImageProcessingContext];
            [self generateTexture];
            framebuffer = 0;
        });
    }
    else
    {
        [self generateFramebuffer];
    }
    return self;
}

   - (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture

    Description: creates a framebuffer object of size framebufferSize that wraps an existing texture instead of generating its own.

    Parameters: framebufferSize is the size of the framebuffer; inputTexture is the externally supplied texture used for rendering.

    Returns: the newly created framebuffer object.

    Implementation

- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    GPUTextureOptions defaultTextureOptions;
    defaultTextureOptions.minFilter = GL_LINEAR;
    defaultTextureOptions.magFilter = GL_LINEAR;
    defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.internalFormat = GL_RGBA;
    defaultTextureOptions.format = GL_BGRA;
    defaultTextureOptions.type = GL_UNSIGNED_BYTE;

    _textureOptions = defaultTextureOptions;
    _size = framebufferSize;
    framebufferReferenceCount = 0;
    referenceCountingDisabled = YES;

    _texture = inputTexture;

    return self;
}

   - (void)activateFramebuffer

    Description: activates this framebuffer: binds it and sets the viewport. Rendering only targets the framebuffer after this has been called.

    Implementation

- (void)activateFramebuffer;
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glViewport(0, 0, (int)_size.width, (int)_size.height);
}

   - (void)lock

    Description: reference counting; increments the count by 1.

    Implementation

- (void)lock;
{
    if (referenceCountingDisabled)
    {
        return;
    }

    framebufferReferenceCount++;
}

   - (void)unlock

    Description: reference counting; decrements the count by 1. When the count drops below 1, the framebuffer is returned to the cache.

    Implementation

- (void)unlock;
{
    if (referenceCountingDisabled)
    {
        return;
    }

    NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");
    framebufferReferenceCount--;
    if (framebufferReferenceCount < 1)
    {
        [[GPUImageContext sharedFramebufferCache] returnFramebufferToCache:self];
    }
}

   - (void)clearAllLocks

    Description: reference counting; resets the count to 0.

    Implementation

- (void)clearAllLocks;
{
    framebufferReferenceCount = 0;
}

   - (void)disableReferenceCounting

    Description: disables reference counting.

    Implementation

- (void)disableReferenceCounting;
{
    referenceCountingDisabled = YES;
}

   - (void)enableReferenceCounting

    Description: enables reference counting.

    Implementation

- (void)enableReferenceCounting;
{
    referenceCountingDisabled = NO;
}

   - (CGImageRef)newCGImageFromFramebufferContents

    Description: reads the framebuffer contents back into a newly created CGImage (caller owns the returned image).

    Implementation

- (CGImageRef)newCGImageFromFramebufferContents;
{
    // a CGImage can only be created from a 'normal' color texture
    NSAssert(self.textureOptions.internalFormat == GL_RGBA, @"For conversion to a CGImage the output texture format for this filter must be GL_RGBA.");
    NSAssert(self.textureOptions.type == GL_UNSIGNED_BYTE, @"For conversion to a CGImage the type of the output texture of this filter must be GL_UNSIGNED_BYTE.");

    __block CGImageRef cgImageFromBytes;

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        NSUInteger totalBytesForImage = (int)_size.width * (int)_size.height * 4;
        // It appears that the width of a texture must be padded out to be a multiple of 8 (32 bytes) if reading from it using a texture cache

        GLubyte *rawImagePixels;

        CGDataProviderRef dataProvider = NULL;
        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0;
            NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;

            glFinish();
            CFRetain(renderTarget); // I need to retain the pixel buffer here and release in the data source callback to prevent its bytes from being prematurely deallocated during a photo write operation
            [self lockForReading];
            rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
            dataProvider = CGDataProviderCreateWithData((__bridge_retained void*)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);
            [[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self]; // In case the framebuffer is swapped out on the filter, need to have a strong reference to it somewhere for it to hang on while the image is in existence
#else
#endif
        }
        else
        {
            [self activateFramebuffer];
            rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
            glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
            dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
            [self unlock]; // Don't need to keep this around anymore
        }

        CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();

        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, CVPixelBufferGetBytesPerRow(renderTarget), defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);
#else
#endif
        }
        else
        {
            cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, 4 * (int)_size.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        }

        // Capture image with current device orientation
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(defaultRGBColorSpace);
    });

    return cgImageFromBytes;
}

   - (void)restoreRenderTarget

    Description: restores the render target: unlocks the pixel buffer and releases the retain taken during image capture.

    Implementation

- (void)restoreRenderTarget;
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    [self unlockAfterReading];
    CFRelease(renderTarget);
#else
#endif
}

   - (void)lockForReading

    Description: locks the pixel buffer's base address for CPU reading.

    Implementation

- (void)lockForReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    if ([GPUImageContext supportsFastTextureUpload])
    {
        if (readLockCount == 0)
        {
            CVPixelBufferLockBaseAddress(renderTarget, 0);
        }
        readLockCount++;
    }
#endif
}

   - (void)unlockAfterReading

    Description: unlocks the pixel buffer's base address after reading.

    Implementation

- (void)unlockAfterReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    if ([GPUImageContext supportsFastTextureUpload])
    {
        NSAssert(readLockCount > 0, @"Unbalanced call to -[GPUImageFramebuffer unlockAfterReading]");
        readLockCount--;
        if (readLockCount == 0)
        {
            CVPixelBufferUnlockBaseAddress(renderTarget, 0);
        }
    }
#endif
}

   - (NSUInteger)bytesPerRow

    Description: returns the number of bytes per row of the pixel buffer.

    Implementation

- (NSUInteger)bytesPerRow;
{
    if ([GPUImageContext supportsFastTextureUpload])
    {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
        return CVPixelBufferGetBytesPerRow(renderTarget);
#else
        return _size.width * 4; // TODO: do more with this on the non-texture-cache side
#endif
    }
    else
    {
        return _size.width * 4;
    }
}

   - (GLubyte *)byteBuffer

    Description: returns the base address of the pixel buffer.

    Implementation

- (GLubyte *)byteBuffer;
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    [self lockForReading];
    GLubyte * bufferBytes = CVPixelBufferGetBaseAddress(renderTarget);
    [self unlockAfterReading];
    return bufferBytes;
#else
    return NULL; // TODO: do more with this on the non-texture-cache side
#endif
}

Complete code

#import <Foundation/Foundation.h>

#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#else
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#endif

#import <QuartzCore/QuartzCore.h>
#import <CoreMedia/CoreMedia.h>

typedef struct GPUTextureOptions {
    GLenum minFilter;
    GLenum magFilter;
    GLenum wrapS;
    GLenum wrapT;
    GLenum internalFormat;
    GLenum format;
    GLenum type;
} GPUTextureOptions;

@interface GPUImageFramebuffer : NSObject

@property(readonly) CGSize size;
@property(readonly) GPUTextureOptions textureOptions;
@property(readonly) GLuint texture;
@property(readonly) BOOL missingFramebuffer;

// Initialization and teardown
- (id)initWithSize:(CGSize)framebufferSize;
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;

// Usage
- (void)activateFramebuffer;

// Reference counting
- (void)lock;
- (void)unlock;
- (void)clearAllLocks;
- (void)disableReferenceCounting;
- (void)enableReferenceCounting;

// Image capture
- (CGImageRef)newCGImageFromFramebufferContents;
- (void)restoreRenderTarget;

// Raw data bytes
- (void)lockForReading;
- (void)unlockAfterReading;
- (NSUInteger)bytesPerRow;
- (GLubyte *)byteBuffer;

@end

 

#import "GPUImageFramebuffer.h"
#import "GPUImageOutput.h"

@interface GPUImageFramebuffer()
{
    GLuint framebuffer;
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    CVPixelBufferRef renderTarget;
    CVOpenGLESTextureRef renderTexture;
    NSUInteger readLockCount;
#else
#endif
    NSUInteger framebufferReferenceCount;
    BOOL referenceCountingDisabled;
}

- (void)generateFramebuffer;
- (void)generateTexture;
- (void)destroyFramebuffer;

@end

void dataProviderReleaseCallback (void *info, const void *data, size_t size);
void dataProviderUnlockCallback (void *info, const void *data, size_t size);

@implementation GPUImageFramebuffer

@synthesize size = _size;
@synthesize textureOptions = _textureOptions;
@synthesize texture = _texture;
@synthesize missingFramebuffer = _missingFramebuffer;

#pragma mark -
#pragma mark Initialization and teardown

- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    _textureOptions = fboTextureOptions;
    _size = framebufferSize;
    framebufferReferenceCount = 0;
    referenceCountingDisabled = NO;
    _missingFramebuffer = onlyGenerateTexture;

    if (_missingFramebuffer)
    {
        runSynchronouslyOnVideoProcessingQueue(^{
            [GPUImageContext useImageProcessingContext];
            [self generateTexture];
            framebuffer = 0;
        });
    }
    else
    {
        [self generateFramebuffer];
    }
    return self;
}

- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    GPUTextureOptions defaultTextureOptions;
    defaultTextureOptions.minFilter = GL_LINEAR;
    defaultTextureOptions.magFilter = GL_LINEAR;
    defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.internalFormat = GL_RGBA;
    defaultTextureOptions.format = GL_BGRA;
    defaultTextureOptions.type = GL_UNSIGNED_BYTE;

    _textureOptions = defaultTextureOptions;
    _size = framebufferSize;
    framebufferReferenceCount = 0;
    referenceCountingDisabled = YES;

    _texture = inputTexture;

    return self;
}

- (id)initWithSize:(CGSize)framebufferSize;
{
    GPUTextureOptions defaultTextureOptions;
    defaultTextureOptions.minFilter = GL_LINEAR;
    defaultTextureOptions.magFilter = GL_LINEAR;
    defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
    defaultTextureOptions.internalFormat = GL_RGBA;
    defaultTextureOptions.format = GL_BGRA;
    defaultTextureOptions.type = GL_UNSIGNED_BYTE;

    if (!(self = [self initWithSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:NO]))
    {
        return nil;
    }

    return self;
}

- (void)dealloc
{
    [self destroyFramebuffer];
}

#pragma mark -
#pragma mark Internal

- (void)generateTexture;
{
    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, _textureOptions.minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, _textureOptions.magFilter);
    // This is necessary for non-power-of-two textures
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, _textureOptions.wrapS);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, _textureOptions.wrapT);

    // TODO: Handle mipmaps
}

- (void)generateFramebuffer;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        // By default, all framebuffers on iOS 5.0+ devices are backed by texture caches, using one shared cache
        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            CVOpenGLESTextureCacheRef coreVideoTextureCache = [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache];
            // Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/

            CFDictionaryRef empty; // empty value for attr value.
            CFMutableDictionaryRef attrs;
            empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); // our empty IOSurface properties dictionary
            attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

            CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, (int)_size.width, (int)_size.height, kCVPixelFormatType_32BGRA, attrs, &renderTarget);
            if (err)
            {
                NSLog(@"FBO size: %f, %f", _size.width, _size.height);
                NSAssert(NO, @"Error at CVPixelBufferCreate %d", err);
            }

            err = CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCache, renderTarget,
                                                                NULL, // texture attributes
                                                                GL_TEXTURE_2D,
                                                                _textureOptions.internalFormat, // opengl format
                                                                (int)_size.width,
                                                                (int)_size.height,
                                                                _textureOptions.format, // native iOS format
                                                                _textureOptions.type,
                                                                0,
                                                                &renderTexture);
            if (err)
            {
                NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }

            CFRelease(attrs);
            CFRelease(empty);

            glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
            _texture = CVOpenGLESTextureGetName(renderTexture);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, _textureOptions.wrapS);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, _textureOptions.wrapT);

            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
#endif
        }
        else
        {
            [self generateTexture];

            glBindTexture(GL_TEXTURE_2D, _texture);

            glTexImage2D(GL_TEXTURE_2D, 0, _textureOptions.internalFormat, (int)_size.width, (int)_size.height, 0, _textureOptions.format, _textureOptions.type, 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
        }

#ifndef NS_BLOCK_ASSERTIONS
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);
#endif

        glBindTexture(GL_TEXTURE_2D, 0);
    });
}

- (void)destroyFramebuffer;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        if (framebuffer)
        {
            glDeleteFramebuffers(1, &framebuffer);
            framebuffer = 0;
        }

        if ([GPUImageContext supportsFastTextureUpload] && (!_missingFramebuffer))
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            if (renderTarget)
            {
                CFRelease(renderTarget);
                renderTarget = NULL;
            }

            if (renderTexture)
            {
                CFRelease(renderTexture);
                renderTexture = NULL;
            }
#endif
        }
        else
        {
            glDeleteTextures(1, &_texture);
        }
    });
}

#pragma mark -
#pragma mark Usage

- (void)activateFramebuffer;
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glViewport(0, 0, (int)_size.width, (int)_size.height);
}

#pragma mark -
#pragma mark Reference counting

- (void)lock;
{
    if (referenceCountingDisabled)
    {
        return;
    }

    framebufferReferenceCount++;
}

- (void)unlock;
{
    if (referenceCountingDisabled)
    {
        return;
    }

    NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");
    framebufferReferenceCount--;
    if (framebufferReferenceCount < 1)
    {
        [[GPUImageContext sharedFramebufferCache] returnFramebufferToCache:self];
    }
}

- (void)clearAllLocks;
{
    framebufferReferenceCount = 0;
}

- (void)disableReferenceCounting;
{
    referenceCountingDisabled = YES;
}

- (void)enableReferenceCounting;
{
    referenceCountingDisabled = NO;
}

#pragma mark -
#pragma mark Image capture

void dataProviderReleaseCallback (void *info, const void *data, size_t size)
{
    free((void *)data);
}

void dataProviderUnlockCallback (void *info, const void *data, size_t size)
{
    GPUImageFramebuffer *framebuffer = (__bridge_transfer GPUImageFramebuffer*)info;

    [framebuffer restoreRenderTarget];
    [framebuffer unlock];
    [[GPUImageContext sharedFramebufferCache] removeFramebufferFromActiveImageCaptureList:framebuffer];
}

- (CGImageRef)newCGImageFromFramebufferContents;
{
    // a CGImage can only be created from a 'normal' color texture
    NSAssert(self.textureOptions.internalFormat == GL_RGBA, @"For conversion to a CGImage the output texture format for this filter must be GL_RGBA.");
    NSAssert(self.textureOptions.type == GL_UNSIGNED_BYTE, @"For conversion to a CGImage the type of the output texture of this filter must be GL_UNSIGNED_BYTE.");

    __block CGImageRef cgImageFromBytes;

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        NSUInteger totalBytesForImage = (int)_size.width * (int)_size.height * 4;
        // It appears that the width of a texture must be padded out to be a multiple of 8 (32 bytes) if reading from it using a texture cache

        GLubyte *rawImagePixels;

        CGDataProviderRef dataProvider = NULL;
        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0;
            NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;

            glFinish();
            CFRetain(renderTarget); // I need to retain the pixel buffer here and release in the data source callback to prevent its bytes from being prematurely deallocated during a photo write operation
            [self lockForReading];
            rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
            dataProvider = CGDataProviderCreateWithData((__bridge_retained void*)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);
            [[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self]; // In case the framebuffer is swapped out on the filter, need to have a strong reference to it somewhere for it to hang on while the image is in existence
#else
#endif
        }
        else
        {
            [self activateFramebuffer];
            rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
            glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
            dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
            [self unlock]; // Don't need to keep this around anymore
        }

        CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();

        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, CVPixelBufferGetBytesPerRow(renderTarget), defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);
#else
#endif
        }
        else
        {
            cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, 4 * (int)_size.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        }

        // Capture image with current device orientation
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(defaultRGBColorSpace);
    });

    return cgImageFromBytes;
}

- (void)restoreRenderTarget;
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    [self unlockAfterReading];
    CFRelease(renderTarget);
#else
#endif
}

#pragma mark -
#pragma mark Raw data bytes

- (void)lockForReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    if ([GPUImageContext supportsFastTextureUpload])
    {
        if (readLockCount == 0)
        {
            CVPixelBufferLockBaseAddress(renderTarget, 0);
        }
        readLockCount++;
    }
#endif
}

- (void)unlockAfterReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    if ([GPUImageContext supportsFastTextureUpload])
    {
        NSAssert(readLockCount > 0, @"Unbalanced call to -[GPUImageFramebuffer unlockAfterReading]");
        readLockCount--;
        if (readLockCount == 0)
        {
            CVPixelBufferUnlockBaseAddress(renderTarget, 0);
        }
    }
#endif
}

- (NSUInteger)bytesPerRow;
{
    if ([GPUImageContext supportsFastTextureUpload])
    {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
        return CVPixelBufferGetBytesPerRow(renderTarget);
#else
        return _size.width * 4; // TODO: do more with this on the non-texture-cache side
#endif
    }
    else
    {
        return _size.width * 4;
    }
}

- (GLubyte *)byteBuffer;
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    [self lockForReading];
    GLubyte * bufferBytes = CVPixelBufferGetBaseAddress(renderTarget);
    [self unlockAfterReading];
    return bufferBytes;
#else
    return NULL; // TODO: do more with this on the non-texture-cache side
#endif
}

- (GLuint)texture;
{
//    NSLog(@"Accessing texture: %d from FB: %@", _texture, self);
    return _texture;
}

@end

  
