2012-02-11 104 views

I'm trying to process a local video file and simply run some analysis on the pixel data; nothing is written out. My current code iterates through every frame of the video, but I really only want to step ahead ~15 frames at a time to speed things up. Is there a way to skip frames without decoding them?

In ffmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.

Thanks in advance! Here is my current code:

- (void) readMovie:(NSURL *)url 
{ 

    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES]; 

    startTime = [NSDate date]; 

    AVURLAsset * asset = [AVURLAsset URLAssetWithURL:url options:nil]; 

    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: 
    ^{ 
     dispatch_async(dispatch_get_main_queue(), 
         ^{ 



          AVAssetTrack * videoTrack = nil; 
          NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo]; 
          if ([tracks count] == 1) 
          { 
           videoTrack = [tracks objectAtIndex:0]; 

           videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration); 

           NSError * error = nil; 

           // _movieReader is a member variable 
           _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error]; 
           if (error) 
            NSLog(@"%@", error.localizedDescription);  

           NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
           NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_420YpCbCr8Planar]; 

           NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 

           AVAssetReaderTrackOutput* output = [AVAssetReaderTrackOutput 
                 assetReaderTrackOutputWithTrack:videoTrack 
                 outputSettings:videoSettings]; 
           output.alwaysCopiesSampleData = NO; 

           [_movieReader addOutput:output]; 

           if ([_movieReader startReading]) 
           { 
            NSLog(@"reading started"); 

            [self readNextMovieFrame]; 
           } 
           else 
           { 
            NSLog(@"reading can't be started"); 
           } 
          } 
         }); 
    }]; 
} 


- (void) readNextMovieFrame 
{ 
    //NSLog(@"readNextMovieFrame called"); 
    if (_movieReader.status == AVAssetReaderStatusReading) 
    { 
     //NSLog(@"status is reading"); 

     AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0]; 
     CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer]; 
     if (sampleBuffer) 
     { // I'm guessing this is the expensive part that we can skip if we want to skip frames 
      CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

      // Lock the image buffer 
      CVPixelBufferLockBaseAddress(imageBuffer,0); 

      // Get information of the image 
      uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
      size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
      size_t width = CVPixelBufferGetWidth(imageBuffer); 
      size_t height = CVPixelBufferGetHeight(imageBuffer); 

      // do my pixel analysis 

      // Unlock the image buffer 
      CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
      CFRelease(sampleBuffer); 


      [self readNextMovieFrame]; 
     } 
     else 
     { 
      NSLog(@"could not copy next sample buffer. status is %ld", (long)_movieReader.status); 

      NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow]; 

      float scanMultiplier = videoDuration/scanDuration; 

      NSString* info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier]; 

      [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES]; 
     } 


    } 
    else 
    { 
     NSLog(@"status is now %ld", (long)_movieReader.status); 


    } 

} 
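For reference, AVAssetReader has no equivalent of reading packets without decoding them: every copyNextSampleBuffer call hands back a fully decoded frame. What can be skipped is the per-frame locking and analysis work. A minimal sketch of that idea, assuming a hypothetical `_frameIndex` NSUInteger ivar (not in the original code) and the same track output as above:

```objc
// Decoding still happens for every frame (AVAssetReader offers no way
// around that), but the buffer lock and pixel analysis only run for
// every 15th frame. _frameIndex is an assumed ivar initialized to 0.
CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
if (sampleBuffer)
{
    if (_frameIndex % 15 == 0)
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // ... pixel analysis on the base address, as before ...
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
    _frameIndex++;
    CFRelease(sampleBuffer);
    [self readNextMovieFrame];
}
```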


- (void) updateInfo: (id)message 
{ 
    NSString* info = [NSString stringWithFormat:@"%@", message]; 

    [infoTextView setText:info]; 
} 

Did you ever find a solution? I'd like to do the same thing. – GuruMeditation 2012-08-27 12:31:17


Nope, never did :-( – 2012-08-28 18:51:05

Answer


If you can live with less precise frame selection (rather than strict frame-by-frame processing), you should use AVAssetImageGenerator.

This class returns a frame for any specific time you ask for.

Specifically, build an array of times spaced 0.5 seconds apart (iPhone movies are roughly 29.3 fps, so if you want every 15th frame, that is about one frame every 0.5 seconds) and let the image generator return your frames.
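The array of requested times the answer describes could be built like this (a sketch; `videoDuration` is the duration in seconds, as in the question's code):

```objc
// One requested time every 0.5 s across the whole video, wrapped in
// NSValue for AVAssetImageGenerator.
NSMutableArray *times = [NSMutableArray array];
for (Float64 t = 0; t < videoDuration; t += 0.5)
{
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];
}
```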

For each frame you can compare the time you requested with the frame's actual time. By default the generator tolerates roughly 0.5 s around the requested time, but you can change that through the properties:

requestedTimeToleranceBefore and requestedTimeToleranceAfter
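A sketch of how the generator and those tolerance properties might be used, assuming `asset` is the AVURLAsset from the question and `times` is an NSArray of NSValue-wrapped CMTimes spaced as described above:

```objc
AVAssetImageGenerator *generator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];

// Tight tolerances force precise (slower) seeks; wide tolerances let the
// generator snap to nearby sync frames, which is much faster.
generator.requestedTimeToleranceBefore = CMTimeMakeWithSeconds(0.25, 600);
generator.requestedTimeToleranceAfter  = CMTimeMakeWithSeconds(0.25, 600);

[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime,
                                                    CGImageRef image,
                                                    CMTime actualTime,
                                                    AVAssetImageGeneratorResult result,
                                                    NSError *error)
{
    if (result == AVAssetImageGeneratorSucceeded)
    {
        // Compare requestedTime to actualTime and run the pixel
        // analysis on `image` here.
    }
}];
```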

I hope this answers your question. Good luck.


Thanks Or.Ron. I had tried that before, but it still seemed relatively slow. Is it my only option? – 2012-02-13 02:36:50