Replacing part of a pixel buffer with white pixels in iOS

2016-04-25

I'm capturing live video with the iPhone camera and feeding the pixel buffers to a network that performs object recognition. Here is the relevant code (I won't post the AVCaptureSession setup and so on, since that's very standard):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    int doReverseChannels;
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
        doReverseChannels = 1;
    } else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        doReverseChannels = 0;
    } else {
        assert(false); // unsupported pixel format
    }

    const int sourceRowBytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
    const int width = (int)CVPixelBufferGetWidth(pixelBuffer);
    const int fullHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    unsigned char *sourceBaseAddr = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
    int height;
    unsigned char *sourceStartAddr;
    if (fullHeight <= width) {
        height = fullHeight;
        sourceStartAddr = sourceBaseAddr;
    } else {
        // Center-crop a tall buffer to a square by skipping marginY rows.
        height = width;
        const int marginY = ((fullHeight - width) / 2);
        sourceStartAddr = (sourceBaseAddr + (marginY * sourceRowBytes));
    }
}

The network then takes sourceStartAddr, width, height, sourceRowBytes and doReverseChannels as inputs.

My question is the following: what is the simplest and/or most efficient way to replace or remove part of the image data with all-white 'pixels'? Is it possible to overwrite part of the pixel buffer data directly, and if so, how?

I have only a very basic understanding of how these pixel buffers work, so I apologize if I'm missing something fundamental here. The most closely related question I found on Stack Overflow is this one, where an EAGLContext is used to add text to a video frame. While that would actually work for my goal if it only needed to be applied to a single image, I assume this step would hurt performance if applied to every video frame, and I'd like to know if there is another way. Any help here would be appreciated.

Answer

Here is a simple way to manipulate a CVPixelBufferRef without using any other libraries such as Core Graphics or OpenGL:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    const int kBytesPerPixel = 4;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);

    for (int row = 0; row < bufferHeight; row++)
    {
        uint8_t *pixel = baseAddress + row * bytesPerRow;
        for (int column = 0; column < bufferWidth; column++)
        {
            if ((row < 100) && (column < 100)) {
                pixel[0] = 255; // BGRA, Blue value
                pixel[1] = 255; // Green value
                pixel[2] = 255; // Red value
            }
            pixel += kBytesPerPixel;
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Do whatever needs to be done with the pixel buffer
}

This overwrites the 100 × 100 pixel patch in the top-left corner of the image with white pixels.

I found this solution in this Apple Developer Example, named RosyWriter.

Somewhat surprised I didn't get any answers here, considering how easy this turned out to be. Hope this helps someone.