iOS: Capture an image from the front camera

I am making an app and want to capture an image from the front camera without displaying any kind of capture screen. I want to take the picture entirely in code, with no user interaction. How can I do this for the front camera?
Answer
You will probably need to use AVFoundation to capture the video stream/images without displaying them. Unlike UIImagePickerController, it does not work "out of the box". Take a look at Apple's AVCam sample to get started.
How to capture images from the front camera using AVFoundation:
Development notes:
- Check your app and image orientation settings carefully (see the sketch after this list).
- AVFoundation and its related frameworks are huge, unwieldy, and hard to understand/implement. I have kept my code as lean as possible, but for a better explanation, check out this excellent tutorial (the site is no longer up; the link goes through archive.org): http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera
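As an illustration of that first point, here is a minimal Swift sketch (our addition, not part of the original answer; it assumes `output` is an AVCaptureVideoDataOutput already added to a configured session) of fixing orientation and mirroring on the output's connection instead of rotating images after the fact:

// Hedged sketch: configure orientation/mirroring on the connection
// that feeds the video data output.
if let connection = output.connection(with: .video) {
    if connection.isVideoOrientationSupported {
        connection.videoOrientation = .portrait
    }
    connection.automaticallyAdjustsVideoMirroring = false
    if connection.isVideoMirroringSupported {
        connection.isVideoMirrored = true // typical for front-camera output
    }
}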
ViewController.h
// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>
@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
// Camera
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView;
@property (strong, nonatomic) AVCaptureDevice* device;
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;
@end
ViewController.m
#import "CameraViewController.h"
@implementation CameraViewController
- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setupCamera];
    [self setupTimer];
}
- (void)setupCamera
{
    // Find the front-facing camera
    NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice* device in devices)
    {
        if ([device position] == AVCaptureDevicePositionFront)
            self.device = device;
    }

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];

    // Deliver BGRA frames to our delegate on a background queue
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    NSString* key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:input];
    [self.captureSession addOutput:output];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // CHECK FOR YOUR APP
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width);
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight;
    // CHECK FOR YOUR APP

    [self.view.layer insertSublayer:self.previewLayer atIndex:0]; // Comment-out to hide preview layer

    [self.captureSession startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Lock the pixel buffer and read its geometry
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixels in a bitmap context and copy them out as a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // Mirrored orientation compensates for the front camera
    self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored];

    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
- (void)setupTimer
{
    NSTimer* cameraTimer = [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES];
}

- (void)snapshot
{
    NSLog(@"SNAPSHOT");
    self.cameraImageView.image = self.cameraImage; // Comment-out to hide snapshot
}
@end
Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals, without any user input. Comment out the marked lines to remove the preview layer and the snapshot feedback.
Any more questions/comments, let me know!
In the documentation for the UIImagePickerController class there is a method called takePicture. It says:
Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls.
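A minimal Swift sketch of that approach (our addition, not part of the original answer; Swift 3/4-era API, and the helper name presentHiddenPicker is ours). Note that the picker's camera view is still presented; only its default controls are hidden:

import UIKit

class PickerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let picker = UIImagePickerController()

    // Hypothetical helper: present the picker without its default controls,
    // then trigger the capture programmatically.
    func presentHiddenPicker() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.showsCameraControls = false // required before calling takePicture()
        picker.delegate = self
        present(picker, animated: false) {
            self.picker.takePicture()
        }
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String : Any]) {
        let image = info[UIImagePickerControllerOriginalImage] as? UIImage
        // Use `image` here, then dismiss quietly.
        picker.dismiss(animated: false)
    }
}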
I converted the code above from Objective-C to Swift 3, in case anyone is still looking in 2017:
import UIKit
import AVFoundation
class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }
    func setupCamera() {
        // Find the front-facing wide-angle camera
        let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                               mediaType: AVMediaTypeVideo,
                                                               position: .front)
        device = discoverySession?.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device)
        } catch {
            return
        }

        // Deliver BGRA frames to the delegate on a background queue
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSessionPresetPhoto

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        // Lock the pixel buffer and read its geometry
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        // Wrap the BGRA pixels in a bitmap context and copy them out as a CGImage
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace,
                                   bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }
    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}
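One caveat worth noting with the Objective-C and Swift versions above (our observation, not from the original answer): captureOutput runs on the background cameraQueue, while the timer reads cameraImage on the main thread, so the property is written and read from different threads. A minimal sketch of one way to avoid that race, at the end of captureOutput:

// Hedged sketch: hop to the main queue before storing the frame,
// so `cameraImage` is only ever touched on the main thread.
let image = UIImage(cgImage: newImage!)
DispatchQueue.main.async {
    self.cameraImage = image
}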
As an alternative, I found a shorter solution for getting an image from the CMSampleBuffer:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
    let videoImage = UIImage(ciImage: myCIimage)
    cameraImage = videoImage
}
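One thing to be aware of with this shortcut (our note, not from the original answer): a UIImage backed only by a CIImage has no underlying CGImage, which some UIKit paths (saving to PNG/JPEG data, for example) may not handle. Rendering through a CIContext avoids that; a minimal sketch:

import UIKit
import CoreImage

let ciContext = CIContext() // reuse one context; creating one per frame is costly

// Hypothetical helper: render the pixel buffer to a CGImage-backed UIImage.
func uiImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}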
Thanks, this is a very useful starting point. – 2017-11-06 22:31:52
No problem, glad it was useful. Not sure whether it still works with Swift 4 without warnings popping up.. – 2017-11-07 09:57:07
Not just warnings, a few things needed to change, but the fix-its handled most of it. – 2017-11-07 22:17:31
The code above converted to Swift 4:
import UIKit
import AVFoundation
class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }
    func setupCamera() {
        // Find the front-facing wide-angle camera
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                                mediaType: AVMediaType.video,
                                                                position: .front)
        device = discoverySession.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device!)
        } catch {
            return
        }

        // Deliver BGRA frames to the delegate on a background queue
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSession.Preset.photo

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }
    // Note: in Swift 4 this delegate method was renamed to captureOutput(_:didOutput:from:);
    // with the old Swift 3 signature it would silently never be called.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Lock the pixel buffer and read its geometry
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        // Wrap the BGRA pixels in a bitmap context and copy them out as a CGImage
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace,
                                   bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }
    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    @objc func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}
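One more thing neither version shows (our addition, not from the original answers): since iOS 10 the app must declare NSCameraUsageDescription in its Info.plist or it will be terminated on first camera access, and it is good practice to confirm authorization before starting the session. A minimal sketch:

// Hedged sketch: ask for camera permission before calling setupCamera().
AVCaptureDevice.requestAccess(for: .video) { granted in
    DispatchQueue.main.async {
        if granted {
            self.setupCamera()
        }
    }
}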
Do you mean silently capture the image, without the user knowing anything about it? – rid 2012-04-17 20:57:07
Yes, I know it sounds bad, but it's completely harmless. The app will make them pull a funny face, and I want to capture it so they can see how silly they looked. – mtmurdock 2012-04-17 20:58:01
Your implementation of such a feature may be harmless, but I can think of plenty of other uses that are anything but (which is probably why it's not supposed to be possible). – inkedmn 2012-12-28 00:35:39