iOS: Capture image from front camera
16

I am making an app and want to capture an image from the front-facing camera without showing any kind of capture screen. I want to take the picture entirely in code, without any user interaction. How can I do this for the front camera?

+2

You mean capture the image silently, without the user knowing anything? – rid 2012-04-17 20:57:07

+2

Yes, I know it sounds bad, but it's completely harmless. The app will get them to pull a funny face, and I want to capture it so they can see how silly they look. – mtmurdock 2012-04-17 20:58:01

+1

Your implementation of such a feature may be harmless, but I can think of plenty of other cases that are anything but (which is probably why it isn't possible). – inkedmn 2012-12-28 00:35:39

Answers

3

You will probably need to use AVFoundation to capture the video stream/image without displaying it. Unlike UIImagePickerController, it does not work "out of the box". Take a look at Apple's AVCam sample to get you started.
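
If a single still (rather than a continuous stream) is enough, a minimal sketch of that route might look like the following. This is an illustrative addition, not part of the original answer: it assumes pre-iOS 10 APIs, since AVCaptureStillImageOutput is deprecated in iOS 10 in favor of AVCapturePhotoOutput.

import AVFoundation
import UIKit

// Minimal sketch (untested): grab one still from the front camera, no preview layer.
class SilentStillCapture {
    let session = AVCaptureSession()
    let stillOutput = AVCaptureStillImageOutput() // deprecated in iOS 10

    func start() {
        let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] ?? []
        guard let front = devices.first(where: { $0.position == .front }),
              let input = try? AVCaptureDeviceInput(device: front),
              session.canAddInput(input), session.canAddOutput(stillOutput) else { return }
        session.addInput(input)
        session.addOutput(stillOutput)
        session.startRunning() // no preview layer is ever attached
    }

    func capture(_ completion: @escaping (UIImage?) -> Void) {
        guard let connection = stillOutput.connection(withMediaType: AVMediaTypeVideo) else {
            completion(nil)
            return
        }
        stillOutput.captureStillImageAsynchronously(from: connection) { buffer, _ in
            guard let buffer = buffer,
                  let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer) else {
                completion(nil)
                return
            }
            completion(UIImage(data: data))
        }
    }
}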

41

How to capture an image from the front camera using AVFoundation:

Development caveats:

  • Check your app's orientation settings and the image orientation handling carefully
  • AVFoundation and its related frameworks are nasty behemoths and very difficult to understand/implement. I've kept my code as lean as I can, but please check out this excellent tutorial for a better explanation (the site is not available any more; link via archive.org): http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera

ViewController.h

// Frameworks 
#import <CoreVideo/CoreVideo.h> 
#import <CoreMedia/CoreMedia.h> 
#import <AVFoundation/AVFoundation.h> 
#import <UIKit/UIKit.h> 

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate> 

// Camera 
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView; 
@property (strong, nonatomic) AVCaptureDevice* device; 
@property (strong, nonatomic) AVCaptureSession* captureSession; 
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer; 
@property (strong, nonatomic) UIImage* cameraImage; 

@end 

ViewController.m

#import "CameraViewController.h" 

@implementation CameraViewController 

- (void)viewDidLoad 
{ 
    [super viewDidLoad]; 

    [self setupCamera]; 
    [self setupTimer]; 
} 

- (void)setupCamera 
{  
    NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]; 
    for(AVCaptureDevice *device in devices) 
    { 
     if([device position] == AVCaptureDevicePositionFront) 
      self.device = device; 
    } 

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil]; 
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init]; 
    output.alwaysDiscardsLateVideoFrames = YES; 

    dispatch_queue_t queue; 
    queue = dispatch_queue_create("cameraQueue", NULL); 
    [output setSampleBufferDelegate:self queue:queue]; 

    NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [output setVideoSettings:videoSettings]; 

    self.captureSession = [[AVCaptureSession alloc] init]; 
    [self.captureSession addInput:input]; 
    [self.captureSession addOutput:output]; 
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; 

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession]; 
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; 

    // CHECK FOR YOUR APP 
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width); 
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight; 
    // CHECK FOR YOUR APP 

    [self.view.layer insertSublayer:self.previewLayer atIndex:0]; // Comment-out to hide preview layer 

    [self.captureSession startRunning]; 
} 

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{ 
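    // Called on the background "cameraQueue" for every frame; the latest frame
    // is cached in self.cameraImage and read later on the main thread.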
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer,0); 
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 

    CGContextRelease(newContext); 
    CGColorSpaceRelease(colorSpace); 

    self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored]; 

    CGImageRelease(newImage); 

    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
} 

- (void)setupTimer 
{ 
    // Take a snapshot every 2 seconds with no user input; keep the returned 
    // timer reference around if you ever need to invalidate it. 
    [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES]; 
} 

- (void)snapshot 
{ 
    NSLog(@"SNAPSHOT"); 
    self.cameraImageView.image = self.cameraImage; // Comment-out to hide snapshot 
} 

@end 

Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals without any user input. Comment out the marked lines to remove the preview layer and the snapshot feedback.

Any other questions/comments, please let me know!

+1

Very nice! I suggest this answer be accepted over mine (assuming it works). – Tim 2012-12-28 19:17:20

+0

Would this be Apple App Store friendly? – mtmurdock 2012-12-30 21:18:17

+1

I'm not sure; this is the first time I've considered an app like this. I'd guess you need to dig in deeper and make sure users/Apple know it won't be used for any malicious purposes (as discussed in the other comments). Your app sounds fun and harmless, so it will probably be fine! – 2012-12-30 22:16:34

0

The documentation for the UIImagePickerController class describes a method called takePicture. It says:

Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls.
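
So programmatic capture is possible with the picker too, as long as you present it with the default controls hidden. A rough sketch of what that could look like (my addition, Swift 3/4-era API, untested):

import UIKit

// Sketch only: fire takePicture() programmatically. The picker still has to be
// on screen; hiding its controls and supplying your own overlay is what removes
// the default capture UI.
class HiddenControlsCameraViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func startCapture() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera),
              UIImagePickerController.isCameraDeviceAvailable(.front) else { return }

        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.showsCameraControls = false // hide shutter button, flip control, etc.
        picker.cameraOverlayView = UIView() // your custom UI goes here
        picker.delegate = self

        present(picker, animated: false) {
            picker.takePicture() // shutter fires with no user interaction
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String: Any]) {
        if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
            print(image.size) // ... use the captured image ...
        }
        picker.dismiss(animated: false, completion: nil)
    }
}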

2

I converted the code above from Obj-C to Swift 3, in case anyone is still looking for it in 2017:

import UIKit 
import AVFoundation 

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { 

@IBOutlet weak var cameraImageView: UIImageView! 

var device: AVCaptureDevice? 
var captureSession: AVCaptureSession? 
var previewLayer: AVCaptureVideoPreviewLayer? 
var cameraImage: UIImage? 

override func viewDidLoad() { 
    super.viewDidLoad() 

    setupCamera() 
    setupTimer() 
} 

func setupCamera() { 
    let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], 
                  mediaType: AVMediaTypeVideo, 
                  position: .front) 
    device = discoverySession?.devices[0] 

    let input: AVCaptureDeviceInput 
    do { 
     input = try AVCaptureDeviceInput(device: device) 
    } catch { 
     return 
    } 

    let output = AVCaptureVideoDataOutput() 
    output.alwaysDiscardsLateVideoFrames = true 

    let queue = DispatchQueue(label: "cameraQueue") 
    output.setSampleBufferDelegate(self, queue: queue) 
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA] 

    captureSession = AVCaptureSession() 
    captureSession?.addInput(input) 
    captureSession?.addOutput(output) 
    captureSession?.sessionPreset = AVCaptureSessionPresetPhoto 

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) 
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill 

    previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height) 

    view.layer.insertSublayer(previewLayer!, at: 0) 

    captureSession?.startRunning() 
} 

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { 
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
    let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!)) 
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!) 
    let width = CVPixelBufferGetWidth(imageBuffer!) 
    let height = CVPixelBufferGetHeight(imageBuffer!) 

    let colorSpace = CGColorSpaceCreateDeviceRGB() 
    let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: 
     CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) 

    let newImage = newContext!.makeImage() 
    cameraImage = UIImage(cgImage: newImage!) 

    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
} 

func setupTimer() { 
    _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true) 
} 

func snapshot() { 
    print("SNAPSHOT") 
    cameraImageView.image = cameraImage 
} 
} 

Also, I found a shorter solution for getting an image from the CMSampleBuffer:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { 
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!) 
    let videoImage = UIImage(ciImage: myCIimage) 
    cameraImage = videoImage 
} 
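
One caveat worth adding here (my note, not part of the original answer): a UIImage created directly from a CIImage is not backed by a CGImage, so calls like UIImagePNGRepresentation(_:) on it return nil and some drawing paths fail. If you run into that, render through a CIContext first:

import UIKit
import CoreMedia
import CoreImage

// Sketch: render via CIContext so the resulting UIImage is bitmap-backed.
// Creating a CIContext is expensive; reuse a single instance in real code.
let ciContext = CIContext()

func bitmapImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}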
+2

Thanks, this is a very useful starting point. – 2017-11-06 22:31:52

+0

No problem, glad it was useful. Not sure whether it still works with Swift 4 without warnings popping up.. – 2017-11-07 09:57:07

+0

More than just warnings, a few things needed to be changed, but Fix-it mostly takes care of them. – 2017-11-07 22:17:31

0

Converted the code above to Swift 4:

import UIKit 
import AVFoundation 

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { 

@IBOutlet weak var cameraImageView: UIImageView! 

var device: AVCaptureDevice? 
var captureSession: AVCaptureSession? 
var previewLayer: AVCaptureVideoPreviewLayer? 
var cameraImage: UIImage? 

override func viewDidLoad() { 
    super.viewDidLoad() 

    setupCamera() 
    setupTimer() 
} 

func setupCamera() { 
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], 
                  mediaType: AVMediaType.video, 
                  position: .front) 
    device = discoverySession.devices[0] 

    let input: AVCaptureDeviceInput 
    do { 
     input = try AVCaptureDeviceInput(device: device!) 
    } catch { 
     return 
    } 

    let output = AVCaptureVideoDataOutput() 
    output.alwaysDiscardsLateVideoFrames = true 

    let queue = DispatchQueue(label: "cameraQueue") 
    output.setSampleBufferDelegate(self, queue: queue) 
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA] 

    captureSession = AVCaptureSession() 
    captureSession?.addInput(input) 
    captureSession?.addOutput(output) 
    captureSession?.sessionPreset = AVCaptureSession.Preset.photo 

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!) 
    previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill 
    previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height) 

    view.layer.insertSublayer(previewLayer!, at: 0) 

    captureSession?.startRunning() 
} 

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { 
    // Note: Swift 4 / iOS 11 renamed this delegate method. The old 
    // captureOutput(_:didOutputSampleBuffer:from:) compiles but is never called. 
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
    let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!)) 
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!) 
    let width = CVPixelBufferGetWidth(imageBuffer!) 
    let height = CVPixelBufferGetHeight(imageBuffer!) 

    let colorSpace = CGColorSpaceCreateDeviceRGB() 
    let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) 

    let newImage = newContext!.makeImage() 
    cameraImage = UIImage(cgImage: newImage!) 

    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
} 

func setupTimer() { 
    _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true) 
} 

@objc func snapshot() { 
    print("SNAPSHOT") 
    cameraImageView.image = cameraImage 
} 
} 
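
One more note that applies to all the answers above (my addition): since iOS 10 the app must declare an NSCameraUsageDescription string in Info.plist or it is killed on first camera access, and the user has to grant camera permission. A hedged Swift 4 sketch of gating the setup on authorization:

import AVFoundation
import Foundation

// Sketch: check/request camera permission before starting the capture session.
// Also requires an NSCameraUsageDescription entry in Info.plist (iOS 10+).
func requestCameraAccess(then setup: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        setup()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { DispatchQueue.main.async { setup() } }
        }
    default:
        break // .denied or .restricted: capture is not possible
    }
}

// Usage, e.g. in viewDidLoad:
// requestCameraAccess { self.setupCamera() }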