I want to combine two CAF files locally into a single file. The two CAF files are mono streams; ideally, I'd like them to become one stereo file so that I get the microphone on one channel and the speaker on the other. How can I combine two mono files into a single stereo file on iOS?
I originally started down the AVAssetTrack and AVMutableCompositionTrack route, but I couldn't solve the mixing problem: the merged file came out as a single mono stream that interleaved the two inputs. So I switched to AVAudioEngine.
As I understand it, I should be able to pass in my two files as input nodes, attach them to a mixer, and have an output node that receives the stereo mix. The output file does have a stereo layout — I can open it in Audacity and see the layout — but no audio data appears to be written to it. Putting a dispatch semaphore signal around the installTapOnBus: call didn't help much either. Core Audio has been a challenge to understand, so any help would be appreciated.
// obtain path of microphone and speaker files
NSString *micPath = [[NSBundle mainBundle] pathForResource:@"microphone" ofType:@"caf"];
NSString *spkPath = [[NSBundle mainBundle] pathForResource:@"speaker" ofType:@"caf"];
NSURL *micURL = [NSURL fileURLWithPath:micPath];
NSURL *spkURL = [NSURL fileURLWithPath:spkPath];
// create engine
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioFormat *stereoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:16000 channels:2];
AVAudioMixerNode *mainMixer = engine.mainMixerNode;
// create audio files
AVAudioFile *audioFile1 = [[AVAudioFile alloc] initForReading:micURL error:nil];
AVAudioFile *audioFile2 = [[AVAudioFile alloc] initForReading:spkURL error:nil];
// create player input nodes
AVAudioPlayerNode *apNode1 = [[AVAudioPlayerNode alloc] init];
AVAudioPlayerNode *apNode2 = [[AVAudioPlayerNode alloc] init];
// attach nodes to the engine
[engine attachNode:apNode1];
[engine attachNode:apNode2];
// connect player nodes to engine's main mixer
stereoFormat = [mainMixer outputFormatForBus:0];
[engine connect:apNode1 to:mainMixer fromBus:0 toBus:0 format:audioFile1.processingFormat];
[engine connect:apNode2 to:mainMixer fromBus:0 toBus:1 format:audioFile2.processingFormat];
[engine connect:mainMixer to:engine.outputNode format:stereoFormat];
// start the engine
NSError *error = nil;
if (![engine startAndReturnError:&error]) {
    NSLog(@"Engine failed to start: %@", error);
}
// create output file
NSString *mergedAudioFile = [[micPath stringByDeletingLastPathComponent] stringByAppendingPathComponent:@"merged.caf"];
[[NSFileManager defaultManager] removeItemAtPath:mergedAudioFile error:&error];
NSURL *mergedURL = [NSURL fileURLWithPath:mergedAudioFile];
AVAudioFile *outputFile = [[AVAudioFile alloc] initForWriting:mergedURL settings:[engine.inputNode inputFormatForBus:0].settings error:&error];
// write from buffer to output file
[mainMixer installTapOnBus:0 bufferSize:4096 format:[mainMixer outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    NSError *error;
    BOOL success;
    NSLog(@"Writing");
    if ((outputFile.length < audioFile1.length) || (outputFile.length < audioFile2.length)) {
        success = [outputFile writeFromBuffer:buffer error:&error];
        NSCAssert(success, @"error writing buffer data to file, %@", [error localizedDescription]);
        if (error) {
            NSLog(@"Error: %@", error);
        }
    }
    else {
        [mainMixer removeTapOnBus:0];
        NSLog(@"Done writing");
    }
}];
Do you hold a strong reference to the AVAudioFile you're writing to? – dave234
@Dave, outputFile doesn't exist before the write. As for strong references, I create the AVAudioFile for writing to mergedURL, which is the fileURLWithPath of mergedAudioFile. No other object or variable references outputFile, and I don't destroy it after the installTapOnBus: call. – A21
One weakness of this approach is that you have to wait for the full duration of the files to be rendered in real time before they're merged into one. That said, if you stick with AVAudioEngine, you could first try just getting the two files to play; then, once that step works, install the tap and write to the file. But if I were doing this myself, I'd use the C API. – dave234