
Short-circuited audio in a VOIP app using CallKit

I am using the SpeakerBox sample app as the basis for my VOIP app. I have managed to get everything working, but I can't seem to get rid of an audio "short circuit" from the microphone to the device's speaker.

In other words, when I am on a call, I can hear my own voice coming out of the speaker as well as the other party's. How can I fix this?

AVAudioSession setup:

AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];

NSError *error = nil;
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");

[sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");

NSTimeInterval bufferDuration = .005;
[sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");

[sessionInstance setPreferredSampleRate:44100 error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");

IO unit setup:

- (void)setupIOUnit 
{ 
try { 
    // Create a new instance of Apple Voice Processing IO 

    AudioComponentDescription desc; 
    desc.componentType = kAudioUnitType_Output; 
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO; 
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; 
    desc.componentFlags = 0; 
    desc.componentFlagsMask = 0; 

    AudioComponent comp = AudioComponentFindNext(NULL, &desc); 
    XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO"); 

    // Enable input and output on Apple Voice Processing IO 
    // Input is enabled on the input scope of the input element 
    // Output is enabled on the output scope of the output element 

    UInt32 one = 1; 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO"); 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO"); 

    // Explicitly set the input and output client formats 
    // sample rate = 44100, num channels = 1, format = 32 bit floating point 

    CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false); 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO"); 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO"); 

    // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number 
    // of samples it will be asked to produce on any single given call to AudioUnitRender 
    UInt32 maxFramesPerSlice = 4096; 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO"); 

    // Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly 
    UInt32 propSize = sizeof(UInt32); 
    XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO"); 

    // We need references to certain data in the render callback 
    // This simple struct is used to hold that information 

    cd.rioUnit = _rioUnit; 
    cd.muteAudio = &_muteAudio; 
    cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed; 

    // Set the render callback on Apple Voice Processing IO 
    AURenderCallbackStruct renderCallback; 
    renderCallback.inputProc = performRender; 
    renderCallback.inputProcRefCon = NULL; 
    XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO"); 

    // Initialize the Apple Voice Processing IO instance 
    XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance"); 
} 

catch (CAXException &e) { 
    NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation); 
} 
catch (...) { 
    NSLog(@"Unknown error returned from setupIOUnit"); 
} 

return; 
} 

To start the IO unit:

NSError *error = nil; 
[[AVAudioSession sharedInstance] setActive:YES error:&error]; 
if (nil != error) NSLog(@"AVAudioSession set active (TRUE) failed with error: %@", error); 

OSStatus err = AudioOutputUnitStart(_rioUnit); 
if (err) NSLog(@"couldn't start Apple Voice Processing IO: %d", (int)err); 
return err; 

To stop the IO unit:

NSError *error = nil; 
[[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]; 
if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error); 

OSStatus err = AudioOutputUnitStop(_rioUnit); 
if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err); 
return err; 

I am using PJSIP as my SIP stack, with an Asterisk server. The problem must be on the client side, since we also have an Android-based PJSIP implementation that does not show this issue.


I am investigating almost the same problem in my app. If I understand the SpeakerBox configuration correctly, its code routes the input stream to the speaker, so I am not using that sample code. I am using pjsua_set_no_snd_dev() and pjsua_set_snd_dev(). In my case it is the other party who is affected by this short-circuit problem. By the way, my implementation works fine if I don't use CallKit. –


Well, my problem also occurs on older iOS versions: the other party can hear his own voice. I am not sure where the problem lies or what causes it. In your case I would say: use the pjsip functions. For more details on pjsip and CallKit, see https://trac.pjsip.org/repos/ticket/1941 –
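A minimal sketch of the approach these comments and the linked ticket describe, where PJSIP attaches to the sound device only from CallKit's audio-session callbacks instead of a custom IO unit. The class name, include path, and device indices 0/0 are illustrative assumptions, not from the original post:

// Illustrative Objective-C sketch; adapt names and paths to your project.
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CallKit/CallKit.h>
#import <pjsua-lib/pjsua.h>   // include path depends on how PJSIP is integrated

@interface CallProviderDelegate : NSObject <CXProviderDelegate>
@end

@implementation CallProviderDelegate

- (void)providerDidReset:(CXProvider *)provider {
    // Required delegate method: end and clean up any ongoing calls here.
}

// CallKit activates the AVAudioSession on our behalf. Only at this point
// should the SIP stack attach to the audio hardware.
- (void)provider:(CXProvider *)provider didActivateAudioSession:(AVAudioSession *)audioSession {
    pjsua_set_snd_dev(0, 0);   // 0/0 = default capture/playback device (assumption)
}

// When CallKit deactivates the session, release the sound device again.
- (void)provider:(CXProvider *)provider didDeactivateAudioSession:(AVAudioSession *)audioSession {
    pjsua_set_no_snd_dev();
}

@end

The idea, per the linked ticket, is that the SIP stack creates its own voice-processing audio unit only after CallKit has activated the session, so nothing else competes for the microphone and speaker.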

Answer


I ran into the same problem using WebRTC. I eventually concluded that you should not set up the IO unit in AudioController.mm at all, but leave that to PJSIP (in my case, WebRTC).

The quick fix is as follows: comment out [self setupIOUnit]; in setupAudioChain in AudioController.mm, and comment out startAudio() in didActivate audioSession in ProviderDelegate.swift, as sketched below.
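A minimal sketch of that change on the AudioController.mm side, assuming the method names from the SpeakerBox sample (they may differ in your project); the startAudio() call in ProviderDelegate.swift's didActivate audioSession is commented out in the same way:

// AudioController.mm (SpeakerBox-style; method names assumed from that sample)
- (void)setupAudioChain
{
    [self setupAudioSession];
    // [self setupIOUnit];   // commented out: the IO unit is left to PJSIP/WebRTC
}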
