[iOS Development] AVCaptureSession: Taking Photos, Recording Video, and Grabbing Frames (a Summary)
Contents: 1 Setting up the session; 2 Adding input; 3 Adding output; 4 Starting capture; 5 Showing the user the current recording status; 6 Capture; 7 Stopping capture; 8 References.

1 Setting up the session

1.1 Declare the session
1.2 Set the capture quality
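A minimal sketch of 1.1 and 1.2 together (the preset is up to you; AVCaptureSessionPresetMedium here is just one reasonable choice):

AVCaptureSession *session = [[AVCaptureSession alloc] init];

// 1.2: pick a capture quality; presets range from AVCaptureSessionPresetLow
// up to AVCaptureSessionPresetPhoto.
if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}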
1.3 Reconfiguring the session
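For 1.3, changes to an already-running session are normally wrapped in a begin/commit pair so they take effect atomically; a sketch:

[session beginConfiguration];
// Remove or add inputs/outputs, change the preset, and so on.
session.sessionPreset = AVCaptureSessionPresetHigh;
[session commitConfiguration];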
2 Adding input

2.1 Configure a device (finding the front or back camera)
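A sketch of the camera lookup (devicesWithMediaType: is the era-appropriate API; the helper name cameraWithPosition: is my own):

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        // position is AVCaptureDevicePositionFront or AVCaptureDevicePositionBack
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}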
2.2 Switching Between Devices
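Switching cameras is the same begin/commit pattern as 1.3: remove the old device input and add the new one. A sketch, assuming both device inputs were created as in 2.1/2.3:

[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];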
2.3 Add the input device to the current session
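A sketch, where device is the camera found in 2.1; checking canAddInput: first is cheap insurance:

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];
} else {
    // Inspect error and handle the failure.
}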
3 Adding output to the session

Use:
- AVCaptureMovieFileOutput to output to a movie file
- AVCaptureVideoDataOutput if you want to process frames from the video being captured
- AVCaptureAudioDataOutput if you want to process the audio data being captured
- AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

3.1 Add an output to the session
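Adding an output mirrors adding an input; a sketch, using the movie file output that the next section builds on:

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:aMovieFileOutput]) {
    [session addOutput:aMovieFileOutput];
}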
3.2 Saving to a Movie File

3.2.1 Declare an output
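Beyond the bare declaration sketched in 3.1, the file output can optionally cap the recording length and require a minimum of free disk space; the particular values below are made up for illustration:

aMovieFileOutput.maxRecordedDuration = CMTimeMakeWithSeconds(60.0, 600); // stop after 60 s
aMovieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024; // require ~1 MB free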
3.2.2 Configure writing to the specified file
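Recording starts when you hand the output a destination URL and a delegate (a sketch; the URL must point to a writable location and no file may already exist there):

NSString *moviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"];
NSURL *fileURL = [NSURL fileURLWithPath:moviePath];
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:self];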
3.2.3 Determine whether the file was written successfully by implementing captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:
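A sketch of that delegate method. Note that error can be non-nil even when the recording succeeded (for example, when the duration limit was hit), so the success flag in the error's userInfo is what to check:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // The recording may still have finished successfully; check the flag.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    if (recordedSuccessfully) {
        // The movie at outputFileURL is ready to use.
    }
}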
3.3 Grabbing frames from the capture

3.3.1 Set the pixel format for the captured frames

Honestly, I only half understand the pixel-format material below; my impression is that the choice of pixel format has some effect on image quality. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:

// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[session addOutput:output];

// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

// Specify the pixel format
output.videoSettings = [NSDictionary dictionaryWithObject:
                           [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];

3.3.2 Capturing still images

AVCaptureStillImageOutput is the class used to capture still images.
Pixel and Encoding Formats

Different devices support different image formats. You can specify the format you want to capture yourself; the sketch below, for example, requests JPEG stills.
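A sketch of requesting JPEG through the output's outputSettings dictionary:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObject:AVVideoCodecJPEG
                                                            forKey:AVVideoCodecKey];
[stillImageOutput setOutputSettings:outputSettings];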
If you use the JPEG image format, you should not specify any additional compression: the output compresses automatically, and that compression is hardware-accelerated. When you need the image data, use jpegStillImageNSDataRepresentation: to obtain the corresponding NSData object; this method does not re-compress the data.

jpegStillImageNSDataRepresentation:
Returns an NSData representation of the still image data and metadata attachments in a JPEG sample buffer.

+ (NSData *)jpegStillImageNSDataRepresentation:(CMSampleBufferRef)jpegSampleBuffer

Parameters: jpegSampleBuffer is the sample buffer carrying JPEG image data, optionally with Exif metadata sample buffer attachments. This method throws an NSInvalidArgumentException if jpegSampleBuffer is NULL or not in the JPEG format.

Return Value: an NSData representation of jpegSampleBuffer.

Discussion: this method merges the image data and Exif metadata sample buffer attachments without re-compressing the image. The returned NSData object is suitable for writing to disk.

Capturing an Image

When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:
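The connection search, as a sketch (stillImageOutput is the JPEG-configured output from above):

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) {
        break;
    }
}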
The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments should you want, but note the optimization for JPEG images discussed in “Pixel and Encoding Formats.”
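A sketch of the full capture call, reusing videoConnection from above and the no-recompression NSData conversion discussed earlier:

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer != NULL) {
            // Exif attachments could be inspected or edited on the buffer here.
            NSData *jpegData = [AVCaptureStillImageOutput
                jpegStillImageNSDataRepresentation:imageSampleBuffer];
            // Write jpegData to disk, build a UIImage from it, etc.
        }
    }];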
5 Showing the user the current recording status

5.1 Recording preview
Video Gravity Modes

The preview layer supports three gravity modes that you set using videoGravity: AVLayerVideoGravityResizeAspect (fit while preserving aspect ratio), AVLayerVideoGravityResizeAspectFill (fill while preserving aspect ratio, cropping as needed), and AVLayerVideoGravityResize (stretch to fill).
6 Capture

Below is a complete walkthrough.

Putting it all Together: Capturing Video Frames as UIImage Objects

This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:

- create and configure a capture session;
- create and configure the device and device input;
- create and configure the video data output;
- implement the sample buffer delegate method;
- start and stop recording.
Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces. Create and Configure a Capture Session You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium resolution video frames.
Create and Configure the Device and Device Input Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration. Find a video capture device, then create a device input with the device and add it to the session.
Create and Configure the Data Output You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property. Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:
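A sketch of that configuration; minFrameDuration is the property this generation of the SDK exposes on AVCaptureVideoDataOutput (later releases move the cap onto AVCaptureConnection):

AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
output.videoSettings = [NSDictionary dictionaryWithObject:
                           [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];
output.minFrameDuration = CMTimeMake(1, 15); // deliver at most 15 frames per second
[session addOutput:output];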
The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.
You use the queue to modify the priority given to delivering and processing the video frames. Implement the Sample Buffer Delegate Method In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation is shown in “Converting a CMSampleBuffer to a UIImage.”
Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread. Starting and Stopping Recording After configuring the capture session, you send it a startRunning message to start the recording.
To stop recording, you send the session a stopRunning message.

DEMO Code

One problem came up when running this demo: in captureOutput:didOutputSampleBuffer:fromConnection:, after grabbing a frame and converting it to a UIImage, the image refused to display no matter what when the UIImage was passed out of the callback. After some digging, converting it to NSData first and passing that out worked fine, so this is worth calling out.

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:
                               [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Add an on-screen preview
    AVCaptureVideoPreviewLayer *previewLayer =
        [[[AVCaptureVideoPreviewLayer alloc] initWithSession:session] autorelease];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CGRect layerRect = [[[self view] layer] bounds];
    [previewLayer setBounds:layerRect];
    [previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:previewLayer];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    sessionGlobal = session; // Assign session to an ivar.
    // [self setSession:session];

    isCapture = FALSE;
    UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
    v.backgroundColor = [UIColor blueColor];
    v.layer.masksToBounds = YES;
    v1 = [v retain];
    [self.view addSubview:v];
    // [v release];

    start = [[NSDate date] timeIntervalSince1970];
    before = start;
    num = 0;
}

- (NSTimeInterval)getTimeFromStart
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];
    return now - start;
}

- (void)showImage:(NSData *)topImageData
{
    if (num > 5) {
        [sessionGlobal stopRunning];
        return;
    }
    num++;

    // Save the JPEG data to disk for inspection.
    NSString *numStr = [NSString stringWithFormat:@"%d.jpg", num];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:numStr];
    NSLog(@"PATH : %@", path);
    [topImageData writeToFile:path atomically:YES];

    // Show the frame in a small image view.
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    imageView.layer.masksToBounds = YES;
    imageView.backgroundColor = [UIColor redColor];
    UIImage *img = [[UIImage alloc] initWithData:topImageData];
    imageView.image = img;
    [img release];
    [self.view addSubview:imageView];
    [imageView release];
    [self.view setNeedsDisplay];
    // [v1 setNeedsDisplay];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];
    NSLog(@" before: %f num: %f", before, now - before);
    if ((now - before) > 5) {
        before = [[NSDate date] timeIntervalSince1970];

        // Create a UIImage from the sample buffer data
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        if (image != nil) {
            // Convert to NSData before handing off to the main thread; passing
            // the UIImage itself never displayed (see the note above).
            NSData *topImageData = UIImageJPEGRepresentation(image, 1.0);
            [self performSelectorOnMainThread:@selector(showImage:)
                                   withObject:topImageData
                                waitUntilDone:NO];
        }
    }
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0); // don't leave the buffer locked
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress,
                                                              bufferSize, NULL);

    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                       kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

7 Stopping capture

- (void)stopVideoCapture:(id)arg
{
    // Stop the camera capture
    if (self->avCaptureSession) {
        [self->avCaptureSession stopRunning];
        self->avCaptureSession = nil;
        [labelState setText:@"Video capture stopped"];
    }
}

8 References

The Media Capture chapter in Xcode's built-in documentation.
Fairly complete capture code: http://chenweihuacwh.iteye.com/blog/734229
Two more found at random: http://blog.csdn.net/guo_hongjun1611/article/details/7992294