2013-02-26

iOS - video frame processing optimization

In my project, I need to draw every frame of a video onto a single result image.

Capturing the video frames is not a big deal. It would go something like this:

// duration is the movie length in seconds. 
// frameDuration is 1/fps (e.g. at 24 fps, frameDuration = 1/24). 
// player is an MPMoviePlayerController. 
int x = 0; 
for (NSTimeInterval i = 0; i < duration; i += frameDuration) { 
    UIImage *image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact]; 

    CGRect destinationRect = [self getDestinationRect:i]; 
    [self drawImage:image inRect:destinationRect fromRect:originRect]; 

    // UI feedback 
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:x/totalFrames] waitUntilDone:NO]; 
    x++; 
} 

The problem appears when I try to implement the drawImage:inRect:fromRect: method.
I tried this code, which:

  1. Creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract a chunk of the image.
  2. Draws the chunk into an image context with CGContextDrawImage.
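The two steps above could be sketched like this (a minimal sketch; `originRect` and the current bitmap context are assumed to come from the surrounding code). Note that CGImageCreateWithImageInRect returns a +1 reference that must be released:

```objc
- (void)drawImage:(UIImage *)image inRect:(CGRect)destinationRect fromRect:(CGRect)originRect
{
    // 1. Extract the chunk from the video frame.
    CGImageRef chunk = CGImageCreateWithImageInRect(image.CGImage, originRect);

    // 2. Draw the chunk into the current image context.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), destinationRect, chunk);

    // Release the +1 reference; otherwise every frame leaks its chunk.
    CGImageRelease(chunk);
}
```

Even when Instruments reports no leak, a missing CGImageRelease, or autoreleased UIImages piling up in a tight loop without an @autoreleasepool, is a common cause of memory warnings in code like this.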

But when the video reaches 12-14 s, my iPhone 4S fires its third memory warning and crashes. I profiled the app with the Leaks instrument and it found no leak at all...

I am not very strong with Quartz. Is there a better-optimized way to achieve this?

Answers


In the end, I kept the Quartz part of my code and changed the way I retrieve the images.

I now use AVFoundation, which is a much faster solution.

// Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties. 
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL 
              options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease]; 
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease]; 
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape) 
AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset]; 

// Retrieving the video properties 
NSTimeInterval duration = CMTimeGetSeconds(asset.duration); 
frameDuration = CMTimeGetSeconds(composition.frameDuration); 
CGSize renderSize = composition.renderSize; 
CGFloat totalFrames = round(duration/frameDuration); 

// Selecting each frame we want to extract : all of them. 
NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)]; 
for (int i=0; i<totalFrames; i++) { 
    NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)]; 
    [times addObject:time]; 
} 

__block int i = 0; 
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){ 
    if (result == AVAssetImageGeneratorSucceeded) { 
        int x = round(CMTimeGetSeconds(requestedTime)/frameDuration); 
        CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height); 
        [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context]; 
    } 
    else 
        NSLog(@"Ouch: %@", error.description); 
    i++; 
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO]; 
    if (i == totalFrames) { 
        [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO]; 
    } 
}; 

// Launching the process... 
generator.requestedTimeToleranceBefore = kCMTimeZero; 
generator.requestedTimeToleranceAfter = kCMTimeZero; 
generator.maximumSize = renderSize; 
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler]; 

Even with very long videos it takes time, but it never crashes!


Hi Martin, the way you extract the images is perfect, but in my app, if the video duration exceeds 30 seconds, the app crashes with a memory warning. Do you have another way, or any change to make? Thanks – iBhavik 2013-05-14 06:34:15


Hi. It should not crash with long videos. Check your code; maybe you have a leak inside the handler block. You cannot keep all the extracted images in memory, because the device does not have enough memory for that. – Martin 2013-05-14 08:06:28
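One common fix along those lines (a sketch; it assumes the per-frame Quartz work runs inside the generator's completion handler, as in the answer above): consume each CGImageRef immediately instead of collecting the images in an array:

```objc
// Inside the completion handler: draw the frame now and let it go. The
// generator owns `im` and releases it after the handler returns, so do NOT
// retain it (e.g. [frames addObject:...]) - accumulating frames is what
// exhausts memory on long videos.
@autoreleasepool {
    [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
}
```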


@iBhavik did you ever find a solution? – Nil 2017-07-06 15:29:22


In addition to Martin's answer, I suggest shrinking the size of the images obtained by this call; that is, setting the property generator.maximumSize = CGSizeMake(width, height); to make the images as small as possible, so they don't take up as much memory.
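For example (a sketch; halving the render size is an arbitrary choice here):

```objc
// Cap decoded frames at half the video's render size. AVAssetImageGenerator
// scales its output down to fit maximumSize while preserving aspect ratio.
generator.maximumSize = CGSizeMake(renderSize.width / 2.0,
                                   renderSize.height / 2.0);
```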