2011-04-08, 116 views
3

How do I play audio sample buffers from AVCaptureAudioDataOutput?

The main goal of the app I'm trying to build is peer-to-peer video streaming (somewhat like FaceTime over Bluetooth/WiFi).

Using AVFoundation, I am able to capture the video/audio sample buffers. I then send the video/audio sample buffer data across. The problem now is processing the sample buffer data on the receiving end.
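(A sketch of my own, not from the original post.) One simple way to send raw sample-buffer bytes over a stream socket is to length-prefix each buffer so the receiver can split the byte stream back into individual buffers. The 4-byte big-endian header and the function names here are assumptions for illustration, not anything AVFoundation defines:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Prepend a 4-byte big-endian length so the receiver can split the
   continuous byte stream back into individual sample-buffer payloads.
   Returns the total number of bytes written to out. */
size_t frame_write(uint8_t *out, const uint8_t *payload, uint32_t len) {
    out[0] = (uint8_t)(len >> 24);
    out[1] = (uint8_t)(len >> 16);
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)(len);
    memcpy(out + 4, payload, len);
    return 4 + (size_t)len;
}

/* Returns the payload length and sets *payload, or -1 if not enough
   bytes have arrived yet (caller keeps buffering). */
int32_t frame_read(const uint8_t *in, size_t avail, const uint8_t **payload) {
    if (avail < 4) return -1;
    uint32_t len = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
                   ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    if (avail < 4 + (size_t)len) return -1;
    *payload = in + 4;
    return (int32_t)len;
}
```

On the receiving side you call `frame_read` repeatedly on the accumulated socket data, consuming `4 + len` bytes each time it succeeds.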

For the video sample buffers I can get a UIImage out of the sample buffer. But for the audio sample buffers, I don't know how to process them so that I can play the audio.

So the question is: how do I process/play the audio sample buffers?

Right now I'm just drawing the waveform, much like Apple's wave sample code:

CMSampleBufferRef sampleBuffer; 

CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer); 
NSUInteger channelIndex = 0; 

CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer); 
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16)); 
size_t lengthAtOffset = 0; 
size_t totalLength = 0; 
SInt16 *samples = NULL; 
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples)); 

int numSamplesToRead = 1; 
for (int i = 0; i < numSamplesToRead; i++) { 

    SInt16 subSet[numSamples / numSamplesToRead]; 
    for (int j = 0; j < numSamples / numSamplesToRead; j++) 
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j]; 

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)]; 
    // Cast before dividing: SInt16/SInt16 is integer division and truncates to 0. 
    double scaledSample = (double)audioSample / SINT16_MAX; 

    // plot waveform using scaledSample 
    [self updateUI:scaledSample]; 
} 
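A pitfall to watch for in this kind of scaling (my observation, not from the thread): dividing two 16-bit integers before converting to double truncates to 0 for every sample below full scale, so the cast must come before the division. A plain-C sketch, with a peak helper standing in for `[Util maxValueInArray:ofSize:]` (whether that helper takes absolute values is my assumption):

```c
#include <stdint.h>
#include <stddef.h>

/* Scale one 16-bit PCM sample to [-1.0, 1.0].
   Casting BEFORE the division matters: int16/int16 truncates. */
double scale_sample(int16_t s) {
    return (double)s / INT16_MAX;
}

/* Peak magnitude of a block, as the waveform loop computes per slice. */
int16_t peak(const int16_t *samples, size_t n) {
    int16_t m = 0;
    for (size_t i = 0; i < n; i++) {
        int16_t v = samples[i];
        if (v < 0) v = (v == INT16_MIN) ? INT16_MAX : -v; /* avoid negation overflow */
        if (v > m) m = v;
    }
    return m;
}
```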
+0

Answering my own question. I don't think there is a way to play the audio samples without first saving them to a file. There may be a solution, but I couldn't find one. – calampunay 2011-06-16 15:53:29

+2

Of course you can play it; see the playback section [here](http://atastypixel.com/blog/using-remoteio-audio-unit/). – 2011-07-07 15:36:30
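Expanding on that comment (my own sketch, not from the thread): a RemoteIO render callback pulls samples on a real-time audio thread, so the usual pattern is to push received PCM into a ring buffer from the network thread and have the render callback drain it, filling with silence on underrun. A minimal single-producer/single-consumer ring buffer in plain C (in a real app the head/tail indices would need to be atomic; this keeps them plain for clarity):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define RING_CAPACITY 4096  /* samples; must be a power of two */

typedef struct {
    int16_t data[RING_CAPACITY];
    size_t  head;  /* advanced by producer (network thread) */
    size_t  tail;  /* advanced by consumer (render callback) */
} Ring;

/* Producer: copy incoming samples in; returns how many actually fit. */
size_t ring_push(Ring *r, const int16_t *src, size_t n) {
    size_t used = r->head - r->tail;
    size_t free_slots = RING_CAPACITY - used;
    if (n > free_slots) n = free_slots;
    for (size_t i = 0; i < n; i++)
        r->data[(r->head + i) & (RING_CAPACITY - 1)] = src[i];
    r->head += n;
    return n;
}

/* Consumer (render callback): fill dst, zero-padding on underrun so
   playback produces silence rather than garbage. Returns samples read. */
size_t ring_pop(Ring *r, int16_t *dst, size_t n) {
    size_t avail = r->head - r->tail;
    size_t take = n < avail ? n : avail;
    for (size_t i = 0; i < take; i++)
        dst[i] = r->data[(r->tail + i) & (RING_CAPACITY - 1)];
    r->tail += take;
    memset(dst + take, 0, (n - take) * sizeof(int16_t));
    return take;
}
```

The render callback would call `ring_pop` with the frame count it is asked for, while the code that receives network packets calls `ring_push`.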

Answers

-4

To display the video you can use the following (here it grabs an ARGB frame and converts it to a Qt (Nokia Qt) QImage; you can substitute another image class).

Put this in your delegate class:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
      fromConnection:(AVCaptureConnection *)connection 
{ 
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    // Lock the pixel buffer before touching its base address. 
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    SVideoSample sample; 

    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer); 
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    sample.width       = CVPixelBufferGetWidth(imageBuffer); 
    sample.height      = CVPixelBufferGetHeight(imageBuffer); 

    // QImage can wrap the raw ARGB bytes directly; bytesPerRow accounts for row padding. 
    QImage img((unsigned char *)sample.pImage, sample.width, sample.height, 
               sample.bytesPerRow, QImage::Format_ARGB32); 

    self->m_receiver->eventReceived(img); 

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 
    [pool drain]; 
} 
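One detail worth noting about passing bytesPerRow through to QImage: CVPixelBuffer rows may be padded for alignment, so bytesPerRow can be larger than width * 4 and you cannot treat the frame as tightly packed. A small C sketch of stride-aware pixel addressing (the helper names are mine, for illustration):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Address of the 32-bit pixel at (x, y) in a buffer whose rows are
   bytesPerRow apart -- which may be larger than width * 4. */
static uint8_t *pixel_at(uint8_t *base, size_t bytesPerRow, size_t x, size_t y) {
    return base + y * bytesPerRow + x * 4;  /* 4 bytes per ARGB pixel */
}

/* Copy a padded frame into a tightly packed width*4 layout, e.g. before
   sending it over the network so the receiver needn't know the stride. */
void copy_tight(uint8_t *dst, const uint8_t *src,
                size_t width, size_t height, size_t bytesPerRow) {
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * width * 4, src + y * bytesPerRow, width * 4);
}
```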