
This is my first time working with the iOS camera. I'm trying to create a simple app that only takes photos (still images). I'm using the AVCam sample code from WWDC, customizing its preview layer:

https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112-Intro-DontLinkElementID_2 

I want to create a custom picture size, exactly like in this image: [image]

But this is the result: [image]

How can I fix it to be the square size?

Thanks!

Edit: I've attached a picture of the result. [image] How can I solve it?

Edit 2:

CMPCameraViewController:

- (void)viewDidLoad 
{ 
[super viewDidLoad]; 

// Disable UI. The UI is enabled if and only if the session starts running. 
self.stillButton.enabled = NO; 

// Create the AVCaptureSession. 
self.session = [[AVCaptureSession alloc] init]; 

// Setup the preview view. 
self.previewView.session = self.session; 

// Communicate with the session and other session objects on this queue. 
self.sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL); 

self.setupResult = AVCamSetupResultSuccess; 


// Setup the capture session. 
// In general it is not safe to mutate an AVCaptureSession or any of its inputs, outputs, or connections from multiple threads at the same time. 
// Why not do all of this on the main queue? 
// Because -[AVCaptureSession startRunning] is a blocking call which can take a long time. We dispatch session setup to the sessionQueue 
// so that the main queue isn't blocked, which keeps the UI responsive. 
dispatch_async(self.sessionQueue, ^{ 
    if (self.setupResult != AVCamSetupResultSuccess) { 
     return; 
    } 

    self.backgroundRecordingID = UIBackgroundTaskInvalid; 
    NSError *error = nil; 

    AVCaptureDevice *videoDevice = [CMPCameraViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack]; 
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error]; 

    if (! videoDeviceInput) { 
     NSLog(@"Could not create video device input: %@", error); 
    } 

    [self.session beginConfiguration]; 

    if ([self.session canAddInput:videoDeviceInput]) { 
     [self.session addInput:videoDeviceInput]; 
     self.videoDeviceInput = videoDeviceInput; 

     dispatch_async(dispatch_get_main_queue(), ^{ 
      // Why are we dispatching this to the main queue? 
      // Because AVCaptureVideoPreviewLayer is the backing layer for AAPLPreviewView and UIView 
      // can only be manipulated on the main thread. 
      // Note: As an exception to the above rule, it is not necessary to serialize video orientation changes 
      // on the AVCaptureVideoPreviewLayer’s connection with other session manipulation. 

      // Use the status bar orientation as the initial video orientation. Subsequent orientation changes are handled by 
      // -[viewWillTransitionToSize:withTransitionCoordinator:]. 
      UIInterfaceOrientation statusBarOrientation = [UIApplication sharedApplication].statusBarOrientation; 
      AVCaptureVideoOrientation initialVideoOrientation = AVCaptureVideoOrientationPortrait; 
      if (statusBarOrientation != UIInterfaceOrientationUnknown) { 
       initialVideoOrientation = (AVCaptureVideoOrientation)statusBarOrientation; 
      } 

      AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.previewView.layer; 
      previewLayer.connection.videoOrientation = initialVideoOrientation; 
      previewLayer.bounds = _previewView.frame; 
      //previewLayer.connection.videoOrientation = UIInterfaceOrientationLandscapeLeft; 
     }); 
    } 
    else { 
     NSLog(@"Could not add video device input to the session"); 
     self.setupResult = AVCamSetupResultSessionConfigurationFailed; 
    } 

    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio]; 
    AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error]; 

    if (! audioDeviceInput) { 
     NSLog(@"Could not create audio device input: %@", error); 
    } 

    if ([self.session canAddInput:audioDeviceInput]) { 
     [self.session addInput:audioDeviceInput]; 
    } 
    else { 
     NSLog(@"Could not add audio device input to the session"); 
    } 

    AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init]; 
    if ([self.session canAddOutput:movieFileOutput]) { 
     [self.session addOutput:movieFileOutput]; 
     AVCaptureConnection *connection = [movieFileOutput connectionWithMediaType:AVMediaTypeVideo]; 
     if (connection.isVideoStabilizationSupported) { 
      connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto; 
     } 
     self.movieFileOutput = movieFileOutput; 
    } 
    else { 
     NSLog(@"Could not add movie file output to the session"); 
     self.setupResult = AVCamSetupResultSessionConfigurationFailed; 
    } 

    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init]; 
    if ([self.session canAddOutput:stillImageOutput]) { 
     stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG}; 
     [self.session addOutput:stillImageOutput]; 
     self.stillImageOutput = stillImageOutput; 
    } 
    else { 
     NSLog(@"Could not add still image output to the session"); 
     self.setupResult = AVCamSetupResultSessionConfigurationFailed; 
    } 

    [self.session commitConfiguration]; 
});  
} 

CMPPreviewView:

+ (Class)layerClass
{
    return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    return previewLayer.session;
}

- (void)setSession:(AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    previewLayer.session = session;
    // The backing layer is an AVCaptureVideoPreviewLayer, not an AVPlayerLayer,
    // so set videoGravity on the preview layer directly.
    previewLayer.videoGravity = AVLayerVideoGravityResize;
}

Answer


The Apple AVCam code is a great starting point for getting into camera development.

What you have to do is modify the size of your video preview layer. This is done by changing the videoGravity setting. Here's an example for an aspect-fill type of view:

[Swift 3] 

previewView.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill 

Now, for your case of filling a rectangle, you'll need to define the layer's bounds and then use AVLayerVideoGravityResize.
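
For instance, here's a minimal sketch of that fill-a-rectangle case, assuming your previewView is the AVCam-style view whose backing layer is an AVCaptureVideoPreviewLayer (the helper name makeSquarePreview is just for illustration):

[Swift 3]

import UIKit
import AVFoundation

// Minimal sketch: pin the preview layer to a square region and let
// AVLayerVideoGravityResize stretch the video to fill those bounds.
// `makeSquarePreview` is an illustrative name, not from the sample.
func makeSquarePreview(in previewView: UIView) {
    guard let previewLayer = previewView.layer as? AVCaptureVideoPreviewLayer else { return }
    let squareSide = previewView.bounds.width
    previewLayer.frame = CGRect(x: 0, y: 0, width: squareSide, height: squareSide)
    previewLayer.videoGravity = AVLayerVideoGravityResize
}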

Please note: this will not affect the size of the captured photo; it only modifies the size of the video preview layer. This is an important distinction. To modify the size of the captured photo, you'll need to perform a crop operation (which can easily be done in various ways), but it doesn't look like that's your intent.

Good luck.

Edit: It now seems you're interested in cropping the captured UIImage.

[Swift 3] 
// I'm going to assume you've done something like this to store the captured data in a UIImage object.
// If not, I would do so.
let myImage = UIImage(data: capturedImageData)!

// Using Core Graphics (the CG in cgImage) you can perform all kinds of image manipulations -- crop, rotation, mirror, etc.
// Here's a crop to a rectangle -- fill in your desired values.
// Note that cropping(to:) returns a CGImage?, so wrap the result back in a UIImage.
let myRect = CGRect(x: ..., y: ..., width: ..., height: ...)
if let croppedCGImage = myImage.cgImage?.cropping(to: myRect) {
    let croppedImage = UIImage(cgImage: croppedCGImage)
}
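
For a centered square crop specifically, here's a rough sketch; note that cgImage works in pixel coordinates and discards the UIImage's scale and orientation, so re-wrap the result. The helper name squareCropped is just for illustration:

[Swift 3]

import UIKit

// Rough sketch of a centered square crop. CGImage uses pixel
// coordinates, so measure against the CGImage's own dimensions
// rather than the UIImage's point size.
func squareCropped(_ image: UIImage) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let side = min(cg.width, cg.height)
    let cropRect = CGRect(x: (cg.width - side) / 2,
                          y: (cg.height - side) / 2,
                          width: side,
                          height: side)
    guard let croppedCG = cg.cropping(to: cropRect) else { return nil }
    // Preserve the original scale and orientation metadata.
    return UIImage(cgImage: croppedCG, scale: image.scale, orientation: image.imageOrientation)
}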

Hope this answers your question.

Thank you, I'll give it a try. I'll also read up on cropping, since I need that too. –

Can you take a look at my edit? Thanks! –

Hey, I think I may not have understood your original question, so let me take a second to clarify. Are you trying to make the video **preview** layer (what you see before taking the picture) fill the size of the square? Or are you looking to crop the **captured** photo to the dimensions of that square? I believe we're talking about the video preview layer, which does indeed require adjusting the video gravity. Please also include the code of your current attempt so I can see what you've tried. –
