2011-05-09 235 views
15

Is it possible to redirect audio output to the phone speaker and still use the headset microphone for input?

If I redirect the audio route to the phone speaker instead of the headset, it also redirects the microphone. That makes sense, but I can't seem to find a way to redirect only the output while leaving the microphone input on the headset. Any ideas?

Here is the code I use to redirect audio to the speaker:

OSStatus propertySetError = 0; // declaration was missing in the original snippet
UInt32 doChangeDefaultRoute = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
NSAssert(propertySetError == 0, @"Failed to set audio session property: OverrideCategoryDefaultToSpeaker");
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
+1

You may already know this, but when you enable the speaker you also switch to the phone's built-in microphone. On an iPod, however, if you enable the speaker you still get microphone input from the headset; since the iPod has no built-in microphone, this is intentional. I briefly got an iOS 4.3 SDK app to take microphone input from the headset with output from the speaker by re-initializing the AUGraph after the route change, but it only worked intermittently and now doesn't happen at all (iOS 4.3+, Xcode 4+). – zeAttle 2012-01-10 14:34:29

Answers

4

It doesn't look like this is possible, I'm afraid.

Audio Session Programming Guide - kAudioSessionProperty_OverrideAudioRoute

If a headset is plugged in at the time you set this property's value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker.

See also: this question

+0

It might be worth going through the same steps Tommy takes in [this question](http://stackoverflow.com/questions/4002133/forcing-iphone-microphone-as-audio-input) to find the available `AVCaptureDevice`s – 2012-01-10 16:09:09
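Following that suggestion, enumerating the audio capture devices the system reports might look like the sketch below (`devicesWithMediaType:` was the API of that era; it was later deprecated in favor of `AVCaptureDeviceDiscoverySession`):

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: list every audio capture device iOS currently exposes.
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Audio input: %@ (uniqueID: %@)", device.localizedName, device.uniqueID);
}
```

Note that on most iPhones this list collapses to a single logical device, so it may not expose the headset microphone separately.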

6

This is possible, but it is finicky about how you set it up.

[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil]; 
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 

It is very important that you use AVAudioSessionCategoryPlayAndRecord, or the route will fail to change to the speaker. Once you have set the override route for the audio session, you can use an AVAudioPlayer instance and send some output to the speaker.

Hopefully this works for you as it did for me. The documentation on this is scattered, but the Skype app proves it is possible. Persevere, my friends! :)

Location of some Apple documentation: http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html

Do an in-page search for kAudioSessionProperty_OverrideAudioRoute.
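For later iOS versions, the deprecated C Audio Session calls above have AVAudioSession equivalents. A sketch, assuming iOS 7+ for `availableInputs`/`setPreferredInput:` (be warned: the system may still force the built-in microphone whenever the speaker override is active, which is exactly the limitation this question runs into):

```objc
#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

// PlayAndRecord is still required, or the speaker override is ignored.
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];

// iOS 6+ replacement for kAudioSessionProperty_OverrideAudioRoute.
[session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];

// iOS 7+: try to pin input to a wired headset mic, if one is reported.
for (AVAudioSessionPortDescription *port in session.availableInputs) {
    if ([port.portType isEqualToString:AVAudioSessionPortHeadsetMic]) {
        [session setPreferredInput:port error:&error];
    }
}
```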

2

What you can do instead is force the audio output to the speakers in any case:

UI Hacker - iOS: Force audio output to speakers while headphones are plugged in

@interface AudioRouter : NSObject 

+ (void) initAudioSessionRouting; 
+ (void) switchToDefaultHardware; 
+ (void) forceOutputToBuiltInSpeakers; 

@end 

and

#import "AudioRouter.h" 
#import <AudioToolbox/AudioToolbox.h> 
#import <AVFoundation/AVFoundation.h> 

@implementation AudioRouter 

#define IS_DEBUGGING NO 
#define IS_DEBUGGING_EXTRA_INFO NO 

+ (void) initAudioSessionRouting { 

    // Called once to route all audio through speakers, even if something's plugged into the headphone jack 
    static BOOL audioSessionSetup = NO; 
    if (audioSessionSetup == NO) { 

     // set category to accept properties assigned below 
     NSError *sessionError = nil; 
     [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error: &sessionError]; 

     // Doubly force audio to come out of speaker 
     UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
     AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 

     // fix issue with audio interrupting video recording - allow audio to mix on top of other media 
     UInt32 doSetProperty = 1; 
     AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty); 

     // set active 
     [[AVAudioSession sharedInstance] setDelegate:self]; 
     [[AVAudioSession sharedInstance] setActive: YES error: nil]; 

     // add listener for audio input changes 
     AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, onAudioRouteChange, nil); 
     AudioSessionAddPropertyListener (kAudioSessionProperty_AudioInputAvailable, onAudioRouteChange, nil); 

    } 

    // Force audio to come out of speaker 
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil]; 


    // set flag 
    audioSessionSetup = YES; 
} 

+ (void) switchToDefaultHardware { 
    // Remove forcing to built-in speaker 
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None; 
    AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 
} 

+ (void) forceOutputToBuiltInSpeakers { 
    // Re-force audio to come out of speaker 
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
    AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 


} 

void onAudioRouteChange (void* clientData, AudioSessionPropertyID inID, UInt32 dataSize, const void* inData) { 

    if(IS_DEBUGGING == YES) { 
     NSLog(@"==== Audio Hardware Status ===="); 
     NSLog(@"Current Input: %@", [AudioRouter getAudioSessionInput]); 
     NSLog(@"Current Output: %@", [AudioRouter getAudioSessionOutput]); 
     NSLog(@"Current hardware route: %@", [AudioRouter getAudioSessionRoute]); 
     NSLog(@"=============================="); 
    } 

    if(IS_DEBUGGING_EXTRA_INFO == YES) { 
     NSLog(@"==== Audio Hardware Status (EXTENDED) ===="); 
     CFDictionaryRef dict = (CFDictionaryRef)inData; 
     CFNumberRef reason = CFDictionaryGetValue(dict, kAudioSession_RouteChangeKey_Reason); 
     CFDictionaryRef oldRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_PreviousRouteDescription); 
     CFDictionaryRef newRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription); 
     NSLog(@"Audio old route: %@", oldRoute); 
     NSLog(@"Audio new route: %@", newRoute); 
     NSLog(@"========================================="); 
    } 



} 

+ (NSString*) getAudioSessionInput { 
    UInt32 routeSize; 
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize); 
    CFDictionaryRef desc; // this is the dictionary to contain descriptions 

    // make the call to get the audio description and populate the desc dictionary 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc); 

    // the dictionary contains 2 keys, for input and output. Get the input array 
    CFArrayRef inputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Inputs); 

    // the input array contains 1 element - a dictionary 
    CFDictionaryRef diction = CFArrayGetValueAtIndex(inputs, 0); 

    // get the input description from the dictionary 
    CFStringRef input = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type); 
    return [NSString stringWithFormat:@"%@", input]; 
} 

+ (NSString*) getAudioSessionOutput { 
    UInt32 routeSize; 
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize); 
    CFDictionaryRef desc; // this is the dictionary to contain descriptions 

    // make the call to get the audio description and populate the desc dictionary 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc); 

    // the dictionary contains 2 keys, for input and output. Get output array 
    CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs); 

    // the output array contains 1 element - a dictionary 
    CFDictionaryRef diction = CFArrayGetValueAtIndex(outputs, 0); 

    // get the output description from the dictionary 
    CFStringRef output = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type); 
    return [NSString stringWithFormat:@"%@", output]; 
} 

+ (NSString*) getAudioSessionRoute { 
    /* 
    returns the current session route: 
    * ReceiverAndMicrophone 
    * HeadsetInOut 
    * Headset 
    * HeadphonesAndMicrophone 
    * Headphone 
    * SpeakerAndMicrophone 
    * Speaker 
    * HeadsetBT 
    * LineInOut 
    * Lineout 
    * Default 
    */ 

    UInt32 rSize = sizeof (CFStringRef); 
    CFStringRef route; 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRoute, &rSize, &route); 

    if (route == NULL) { 
     NSLog(@"Silent switch is currently on"); 
     return @"None"; 
    } 
    return [NSString stringWithFormat:@"%@", route]; 
} 

@end 
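Using the class above might look like this (a sketch; the call sites are illustrative, not prescribed by the original post):

```objc
#import "AudioRouter.h"

// e.g. in application:didFinishLaunchingWithOptions:
// set up the session once and route output to the built-in speakers.
[AudioRouter initAudioSessionRouting];

// Later, toggle routing as needed:
[AudioRouter switchToDefaultHardware];      // return to normal routing
[AudioRouter forceOutputToBuiltInSpeakers]; // force speakers again
```

Keep in mind this forces the output route only; as the accepted answer notes, the input follows the override, so the headset microphone is lost while the speaker override is active.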