Swift: reading each frame from the camera and running face detection on it
I recently helped someone with a project whose main task was face detection through the camera.
GitHub repo: https://github.com/qugang/AVCaptureVideoTemplate
Using the iOS camera requires the AVFoundation framework; I won't go over everything it contains here.
Starting the camera is done with the AVCaptureSession class.
To receive each frame the camera delivers, adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
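Concretely, the view controller owns the session and adopts the delegate protocol in its declaration. This skeleton is lifted from the full listing at the end of the post:

import UIKit
import AVFoundation

// The controller owns the capture session and receives each frame
// through the sample-buffer delegate protocol.
class AVCaptireVideoPicController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let captureSession = AVCaptureSession()
    var captureDevice: AVCaptureDevice?
    var previewLayer: AVCaptureVideoPreviewLayer?
}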
First, in viewDidLoad, add the code that looks for the camera device; once the front camera is found, start the session:
captureSession.sessionPreset = AVCaptureSessionPresetLow
let devices = AVCaptureDevice.devices()
for device in devices {
    if (device.hasMediaType(AVMediaTypeVideo)) {
        if (device.position == AVCaptureDevicePosition.Front) {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                println("Capture Device found")
                beginSession()
            }
        }
    }
}
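One assumption in the snippet above is that the app already has camera access. A hedged sketch of how you might gate beginSession() on the user's permission; AVCaptureDevice.requestAccessForMediaType is the real iOS 7+ API, but wiring it in here is my addition, not something the repo does:

AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo) { granted in
    // The completion handler may arrive on a background queue,
    // so hop back to the main queue before touching the UI/session.
    dispatch_async(dispatch_get_main_queue()) {
        if granted {
            self.beginSession()
        } else {
            println("Camera access denied by the user")
        }
    }
}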
beginSession starts the camera:
func beginSession() {
    var err: NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    let output = AVCaptureVideoDataOutput()
    let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
    output.setSampleBufferDelegate(self, queue: cameraQueue)
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    captureSession.addOutput(output)
    if err != nil {
        println("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
    previewLayer?.frame = self.view.bounds
    self.view.layer.addSublayer(previewLayer)
    captureSession.startRunning()
}
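The session keeps the camera running until it is stopped; the full listing below leans on didReceiveMemoryWarning for that, which is unusual. A minimal teardown sketch, assuming the captureSession and previewLayer properties shown in this class (endSession is a name I made up, it is not in the repo):

// Stop the camera and tear down the preview when the controller goes away.
func endSession() {
    captureSession.stopRunning()
    previewLayer?.removeFromSuperlayer()
    previewLayer = nil
}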
Once the session is running, implement the captureOutput delegate method:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if (self.isStart) {
        let resultImage = sampleBufferToImage(sampleBuffer)
        let context = CIContext(options: [kCIContextUseSoftwareRenderer: true])
        let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let ciImage = CIImage(image: resultImage)
        // Orientation 6 (right, top) matches the portrait front camera.
        let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
        for r in results {
            let face = r as! CIFaceFeature
            let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
            NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
            dispatch_async(dispatch_get_main_queue()) {
                if (self.isStart) {
                    self.dismissViewControllerAnimated(true, completion: nil)
                    self.didReceiveMemoryWarning()
                    self.callBack!(face: faceImage!)
                }
                self.isStart = false
            }
        }
    }
}
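One thing to watch: face.bounds is in Core Image coordinates, which put the origin at the bottom-left of the image, while UIKit puts it at the top-left. Cropping with createCGImage works as-is, but if you want to draw a box over the preview the rect has to be flipped first. A minimal sketch (this helper is my illustration, not part of the repo; imageHeight would come from the CIImage extent):

// Flip a CIDetector rect (bottom-left origin) into UIKit coordinates
// (top-left origin) by mirroring the y axis across the image height.
func flipRectForUIKit(rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var transform = CGAffineTransformMakeScale(1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -imageHeight)
    return CGRectApplyAffineTransform(rect, transform)
}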
CIDetector is run on each frame to find faces. It can also report eye blinks and smiles; see Apple's official API documentation for the details, or the sketch below.
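Blink and smile detection are opt-in flags passed to featuresInImage. The option keys and the CIFaceFeature properties below are the real Core Image API; the surrounding code is a sketch, assuming the same context and ciImage as above:

let detector = CIDetector(ofType: CIDetectorTypeFace, context: context,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
// CIDetectorSmile and CIDetectorEyeBlink enable the extra processing.
let results = detector.featuresInImage(ciImage,
    options: [CIDetectorImageOrientation: 6, CIDetectorSmile: true, CIDetectorEyeBlink: true])
for r in results {
    let face = r as! CIFaceFeature
    println("smiling: \(face.hasSmile), eyes closed: \(face.leftEyeClosed && face.rightEyeClosed)")
}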
The captureOutput method is the heart of the whole thing. A 2-second delay is set up, and face detection only begins once those 2 seconds have passed.
The full source:
//
//  ViewController.swift
//  AVSessionTest
//
//  Created by qugang on 15/7/8.
//  Copyright (c) 2015年 qugang. All rights reserved.
//

import UIKit
import AVFoundation

class AVCaptireVideoPicController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    var callBack: ((face: UIImage) -> ())?
    let captureSession = AVCaptureSession()
    var captureDevice: AVCaptureDevice?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var pickUIImager: UIImageView = UIImageView(image: UIImage(named: "pick_bg"))
    var line: UIImageView = UIImageView(image: UIImage(named: "line"))
    var timer: NSTimer!
    var upOrdown = true
    var isStart = false

    override func viewDidLoad() {
        super.viewDidLoad()
        captureSession.sessionPreset = AVCaptureSessionPresetLow
        let devices = AVCaptureDevice.devices()
        for device in devices {
            if (device.hasMediaType(AVMediaTypeVideo)) {
                if (device.position == AVCaptureDevicePosition.Front) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        println("Capture Device found")
                        beginSession()
                    }
                }
            }
        }
        // Scan-frame overlay and the moving line inside it.
        pickUIImager.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100, width: 200, height: 200)
        line.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100, width: 200, height: 2)
        self.view.addSubview(pickUIImager)
        self.view.addSubview(line)
        timer = NSTimer.scheduledTimerWithTimeInterval(0.01, target: self, selector: "animationSate", userInfo: nil, repeats: true)
        // Wait 2 seconds before detection starts.
        NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "isStartTrue", userInfo: nil, repeats: false)
    }

    func isStartTrue() {
        self.isStart = true
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        captureSession.stopRunning()
    }

    // Bounces the scan line up and down inside the overlay frame.
    func animationSate() {
        if upOrdown {
            if (line.frame.origin.y >= pickUIImager.frame.origin.y + 200) {
                upOrdown = false
            } else {
                line.frame.origin.y += 2
            }
        } else {
            if (line.frame.origin.y <= pickUIImager.frame.origin.y) {
                upOrdown = true
            } else {
                line.frame.origin.y -= 2
            }
        }
    }

    func beginSession() {
        var err: NSError? = nil
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
        let output = AVCaptureVideoDataOutput()
        let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
        output.setSampleBufferDelegate(self, queue: cameraQueue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
        captureSession.addOutput(output)
        if err != nil {
            println("error: \(err?.localizedDescription)")
        }
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
        previewLayer?.frame = self.view.bounds
        self.view.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if (self.isStart) {
            let resultImage = sampleBufferToImage(sampleBuffer)
            let context = CIContext(options: [kCIContextUseSoftwareRenderer: true])
            let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
            let ciImage = CIImage(image: resultImage)
            let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
            for r in results {
                let face = r as! CIFaceFeature
                let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
                NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
                dispatch_async(dispatch_get_main_queue()) {
                    if (self.isStart) {
                        self.dismissViewControllerAnimated(true, completion: nil)
                        self.didReceiveMemoryWarning()
                        self.callBack!(face: faceImage!)
                    }
                    self.isStart = false
                }
            }
        }
    }

    // Converts a BGRA sample buffer into a UIImage.
    private func sampleBufferToImage(sampleBuffer: CMSampleBuffer!) -> UIImage {
        let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
        let bitsPerComponent = 8
        let bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
        let newContext = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
        let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)
        // Balance the lock taken above before handing the image back.
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
        let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
        return resultImage
    }

    func imageResize(imageObj: UIImage, sizeChange: CGSize) -> UIImage {
        let hasAlpha = false
        let scale: CGFloat = 0.0 // 0 means "use the device's screen scale"
        UIGraphicsBeginImageContextWithOptions(sizeChange, !hasAlpha, scale)
        imageObj.drawInRect(CGRect(origin: CGPointZero, size: sizeChange))
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext() // close the context opened above
        return scaledImage
    }
}
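To use the controller, assign the callBack closure before presenting it; the controller dismisses itself once a face has been captured. A minimal sketch from some presenting view controller (the presentation style is my choice, the repo does not prescribe one):

let picker = AVCaptireVideoPicController()
picker.callBack = { (face: UIImage) in
    // Delivered on the main queue after the controller dismisses itself.
    println("got a face image of size \(face.size)")
}
self.presentViewController(picker, animated: true, completion: nil)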