Custom photo capture, video recording, and frame grabbing with AVCaptureSession
Reposted from http://blog.csdn.net/andy_jiangbin/article/details/19823333
A summary of photo capture, video recording, and frame grabbing
1 Create a session
2 Add inputs
3 Add outputs
4 Start capturing
5 Show the user the current recording status
6 Capture
7 Stop capturing
8 References
1 Create a session
1.1 Declare the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];
1.2 Set the capture quality
| Symbol | Comments |
|---|---|
| High | Highest recording quality. This varies per device. |
| Medium | Suitable for WiFi sharing. The actual values may change. |
| Low | Suitable for 3G sharing. The actual values may change. |
| 640x480 | VGA. |
| 1280x720 | 720p HD. |
| Photo | Full photo resolution. This is not supported for video output. |
if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}
1.3 Reconfigure the session
[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];
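A filled-in version of that skeleton (my sketch; `oldInput`, `newInput`, and the preset are assumptions, not from the original post):
[session beginConfiguration];
// Remove an existing capture device input (assumed to be held in `oldInput`).
[session removeInput:oldInput];
// Add a new capture device input (assumed to be created beforehand as `newInput`).
if ([session canAddInput:newInput]) {
    [session addInput:newInput];
}
// Reset the preset.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
}
[session commitConfiguration];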
2 Add inputs
2.1 Configure a device (finding the front and back cameras)
NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Device name: %@", [device localizedName]);
    if ([device hasMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}
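The loop above can be folded into a small helper that returns the camera at a given position (my sketch, not part of the original post; the switching code in 2.2 can be built on inputs created from devices found this way):
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    // Walk the video-capable devices and return the first one at the requested position.
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}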
2.2 Switching between devices (front and back)
AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
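Filled in with the cameraWithPosition: helper from 2.1, the whole switch might look like this (a sketch; error handling is omitted and `frontFacingCameraDeviceInput` is assumed to be the input currently attached to the session):
AVCaptureDevice *backCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
NSError *error = nil;
AVCaptureDeviceInput *backFacingCameraDeviceInput =
    [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];

[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
if ([session canAddInput:backFacingCameraDeviceInput]) {
    [session addInput:backFacingCameraDeviceInput];
}
[session commitConfiguration];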
2.3 Add the input device to the current session
NSError *error;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}
3 Add outputs to the session
- AVCaptureMovieFileOutput to output to a movie file
- AVCaptureVideoDataOutput if you want to process frames from the video being captured
- AVCaptureAudioDataOutput if you want to process the audio data being captured
- AVCaptureStillImageOutput if you want to capture still images with accompanying metadata
3.1 Add an output to the session
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
3.2 Saving to a movie file
3.2.1 Declare an output
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
You can also set the minFreeDiskSpaceLimit property to stop recording when free disk space runs low.
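A filled-in version (a minimal sketch; the 60-second cap and 50 MB floor are illustrative values, not from the original post):
CMTime maxDuration = CMTimeMakeWithSeconds(60.0, 600); // cap recordings at 60 seconds
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = 50 * 1024 * 1024; // 50 MB, in bytes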
3.2.2 Configure writing to a specific file
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
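A filled-in version (a sketch; the temporary-directory path and file name are illustrative, and `self` is assumed to adopt AVCaptureFileOutputRecordingDelegate as in 3.2.3):
// Record into the app's temporary directory.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"];
NSURL *fileURL = [NSURL fileURLWithPath:path];
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:self];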
3.2.3 Check whether the file was written successfully
To confirm whether the file was written successfully, implement captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}
3.3 Grabbing frames from the capture
3.3.1 Set the pixel format for captured frames
To be honest, I only half understand the pixel-format material below; my impression is that the choice of pixel format has some effect on image quality.
You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
3.3.2 Capturing still images
The AVCaptureStillImageOutput class is used to capture still images. The resolution of the captured image depends on the session preset and the device:
| Preset | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front) |
|---|---|---|---|---|
| High | 400x304 | 640x480 | 1280x720 | 640x480 |
| Medium | 400x304 | 480x360 | 480x360 | 480x360 |
| Low | 400x304 | 192x144 | 192x144 | 192x144 |
| 640x480 | N/A | 640x480 | 640x480 | 640x480 |
| 1280x720 | N/A | N/A | 1280x720 | N/A |
| Photo | 1600x1200 | 2048x1536 | 2592x1936 | 640x480 |
Pixel and Encoding Formats
Different devices support different image formats:
| iPhone 3G | iPhone 3GS | iPhone 4 |
|---|---|---|
| yuvs, 2vuy, BGRA, jpeg | 420f, 420v, BGRA, jpeg | 420f, 420v, BGRA, jpeg |
You can specify the capture format you want; the following configures the output to capture JPEG stills:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];
If you use the JPEG image format, you should not specify any additional compression; the output compresses the image automatically, and that compression is hardware-accelerated. When you need the image data, use jpegStillImageNSDataRepresentation: to get the corresponding NSData; that method does not re-compress the data.
jpegStillImageNSDataRepresentation:
Returns an NSData representation of the still image data and metadata attachments in a JPEG sample buffer.
+ (NSData *)jpegStillImageNSDataRepresentation:(CMSampleBufferRef)jpegSampleBuffer
Parameters
jpegSampleBuffer
The sample buffer carrying JPEG image data, optionally with Exif metadata sample buffer attachments. This method throws an NSInvalidArgumentException if jpegSampleBuffer is NULL or not in the JPEG format.
Return Value
An NSData representation of jpegSampleBuffer.
Discussion
This method merges the image data and Exif metadata sample buffer attachments without re-compressing the image. The returned NSData object is suitable for writing to disk.
Capturing an Image
When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}
The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments should you want, but note the optimization for JPEG images discussed in "Pixel and Encoding Formats."
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];
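Tying this to the JPEG discussion in 3.3.2: inside the completion handler is also where you would typically turn the buffer into NSData without re-compression (a minimal sketch, assuming the JPEG output settings shown earlier):
// Merge the JPEG data and Exif attachments without re-compressing.
NSData *jpegData =
    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:jpegData];
// Use or display the image, then release it (the sample code here uses MRC).
[image release];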
5 Show the user the current recording status
5.1 Recording preview
AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
Video Gravity Modes
The preview layer supports three gravity modes that you set using videoGravity:
- AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
- AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
- AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.
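For example (a short sketch continuing the 5.1 snippet; `viewLayer` is the layer obtained there):
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = viewLayer.bounds; // size the preview to fill its host layer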
6 Capture
Below is a complete walkthrough.
Putting it all Together: Capturing Video Frames as UIImage Objects
This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:
- Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
- Find the AVCaptureDevice object for the input type you want
- Create an AVCaptureDeviceInput object for the device
- Create an AVCaptureVideoDataOutput object to produce video frames
- Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
- Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object
Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.
Create and Configure a Capture Session
You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
Create and Configure the Device and Device Input
Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.
Find a video capture device, then create a device input with the device and add it to the session.
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];
Create and Configure the Data Output
You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property.
Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);
The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output's delegate, you must also provide a queue on which callbacks should be invoked.
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
You use the queue to modify the priority given to delivering and processing the video frames.
Implement the Sample Buffer Delegate Method
In the delegate class, implement the method captureOutput:didOutputSampleBuffer:fromConnection:, which is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation is shown in "Converting a CMSampleBuffer to a UIImage."
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}
Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.
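For example (a sketch; `imageView` is a hypothetical UIImageView property used only for illustration):
dispatch_async(dispatch_get_main_queue(), ^{
    // Hop to the main thread before touching UIKit.
    self.imageView.image = image; // hypothetical property, not from the original post
});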
Starting and Stopping Recording
After configuring the capture session, you send it a startRunning message to start the recording.
[session startRunning];
To stop recording, you send the session a stopRunning message:
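[session stopRunning];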
DEMO Code
One problem came up when running this demo: in captureOutput:didOutputSampleBuffer:fromConnection:, after grabbing a frame and converting it to a UIImage, the image would not display no matter what once passed out of the delegate. After investigation, converting the frame directly to NSData and passing that out instead worked fine — worth calling out.
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
            forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Add a preview layer so the user can see the capture
    AVCaptureVideoPreviewLayer *previewLayer = nil;
    previewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:session] autorelease];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CGRect layerRect = [[[self view] layer] bounds];
    [previewLayer setBounds:layerRect];
    [previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:previewLayer];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];
    sessionGlobal = session;

    // Assign session to an ivar.
    // [self setSession:session];
    isCapture = FALSE;

    UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
    v.backgroundColor = [UIColor blueColor];
    v.layer.masksToBounds = YES;
    v1 = [v retain];
    [self.view addSubview:v];
    // [v release];

    start = [[NSDate date] timeIntervalSince1970];
    before = start;
    num = 0;
}
- (NSTimeInterval)getTimeFromStart
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];
    NSTimeInterval b = now - start;
    return b;
}
- (void)showImage:(NSData *)topImageData
{
    if (num > 5)
    {
        [sessionGlobal stopRunning];
        return;
    }
    num++;
    NSString *numStr = [NSString stringWithFormat:@"%d.jpg", num];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:numStr];
    NSLog(@"PATH : %@", path);
    [topImageData writeToFile:path atomically:YES];

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    imageView.layer.masksToBounds = YES;
    imageView.backgroundColor = [UIColor redColor];
    UIImage *img = [[UIImage alloc] initWithData:topImageData];
    imageView.image = img;
    [img release];
    [self.view addSubview:imageView];
    [imageView release];
    [self.view setNeedsDisplay];
    // [v1 setNeedsDisplay];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];
    NSLog(@" before: %f num: %f", before, now - before);
    if ((now - before) > 5)
    {
        before = [[NSDate date] timeIntervalSince1970];
        // Create a UIImage from the sample buffer data
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        if (image != nil)
        {
            // NSTimeInterval t = [self getTimeFromStart];
            NSData *topImageData = UIImageJPEGRepresentation(image, 1.0);
            [self performSelectorOnMainThread:@selector(showImage:) withObject:topImageData waitUntilDone:NO];
        }
    }
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace)
    {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        // Unlock before bailing out so the buffer is not left locked.
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
        CGImageCreate(width,
                      height,
                      8,
                      32,
                      bytesPerRow,
                      colorSpace,
                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      provider,
                      NULL,
                      true,
                      kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
7 Stop capturing
- (void)stopVideoCapture:(id)arg
{
    // Stop the camera capture
    if (self->avCaptureSession) {
        [self->avCaptureSession stopRunning];
        self->avCaptureSession = nil;
        [labelState setText:@"Video capture stopped"];
    }
}
8 References
The Media Capture chapter of the documentation bundled with Xcode.
Fairly complete capture code: http://chenweihuacwh.iteye.com/blog/734229
A couple of others found at random:
http://blog.csdn.net/guo_hongjun1611/article/details/7992294