2016-12-27

ios: How to detect whether voice dictation was used for a UITextView, or whether the microphone button on the keyboard was tapped?



[ios: How to detect if voice dictation was used for UITextField? Or microphone button tapped on keyboard](http://stackoverflow.com/questions/32652775/ios-how-to-detect-if-voice-dictation-was-used-for-uitextfield-or-microphone-bu) – Saavaj


@Saavaj, please read my question carefully: I asked about UITextView, not UITextField. UITextView is different from UITextField. – Nithya
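For detecting dictation itself (rather than running recognition yourself, as the answer below does), one commonly suggested heuristic — an assumption based on observed iOS behavior, not a documented contract — is that while the user dictates, the text view's `textInputMode?.primaryLanguage` becomes `"dictation"`. On iOS you would observe the `UITextInputCurrentInputModeDidChange` notification and check that value; the check itself is plain Swift:

```swift
import Foundation

// Heuristic check for the dictation input mode. On iOS, call this with
// textView.textInputMode?.primaryLanguage from a UITextInputCurrentInputModeDidChange
// observer (the UIKit wiring is left out so this stays platform-independent).
// Assumption: iOS reports "dictation" as the primary language while dictating.
func isDictationInputMode(primaryLanguage: String?) -> Bool {
    return primaryLanguage == "dictation"
}
```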

Answer


When the microphone button on the keyboard is tapped, you can use the Speech framework, which uses Siri's speech recognition. First import the Speech framework, then conform to its delegate. Here is the Swift version; it may help.

```swift
import UIKit
import AVFoundation
import Speech

class ViewController: UIViewController, SFSpeechRecognizerDelegate {

    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))

    override func viewDidLoad() {
        super.viewDidLoad()
        self.authorizeSpeech()
    }

    private func authorizeSpeech() {
        SFSpeechRecognizer.requestAuthorization { (authStatus) in

            var isButtonEnabled = false

            switch authStatus {
            case .authorized:
                isButtonEnabled = true

            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")

            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")

            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")
            }

            OperationQueue.main.addOperation {
                print(isButtonEnabled) // tells whether speech recognition is authorized
            }
        }
    }
}
```
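The switch in `authorizeSpeech()` boils down to a single mapping from authorization status to button state. A framework-free sketch of that mapping, using a stand-in enum (`SpeechAuthStatus` is hypothetical; the real type is `SFSpeechRecognizerAuthorizationStatus`, which requires iOS):

```swift
import Foundation

// Stand-in for SFSpeechRecognizerAuthorizationStatus so this compiles without iOS.
enum SpeechAuthStatus {
    case authorized, denied, restricted, notDetermined
}

// Only .authorized should enable the record button; every other state disables it.
func isSpeechButtonEnabled(_ status: SpeechAuthStatus) -> Bool {
    switch status {
    case .authorized:
        return true
    case .denied, .restricted, .notDetermined:
        return false
    }
}
```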

Now add some custom usage-description messages to your Info.plist:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Your microphone will be used to record your speech when you press the Start Recording button.</string>

<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition will be used to determine which words you speak into this device microphone.</string>
```

Now create the startRecording() method:

```swift
func startRecording() {

    // Cancel any in-flight recognition task before starting a new one.
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }

    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }

    // Deliver partial results so the text view updates while the user speaks.
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

        var isFinal = false

        if let result = result {
            // Replace your_text_view with your UITextView outlet.
            self.your_text_view.text = result.bestTranscription.formattedString
            isFinal = result.isFinal
        }

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)

            self.recognitionRequest = nil
            self.recognitionTask = nil
        }
    })

    // Feed microphone audio into the recognition request.
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
}
```
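Because `shouldReportPartialResults` is true, the result handler fires repeatedly: each partial result replaces the text view's contents with the best transcription so far, until an error or a final result stops the task. That accumulation logic can be modeled without the Speech framework (`RecognitionUpdate` is a hypothetical stand-in for the `(result, error)` pairs the handler receives):

```swift
import Foundation

// Hypothetical stand-in for one callback from the recognition task.
struct RecognitionUpdate {
    let transcription: String?
    let isFinal: Bool
}

// Each partial result replaces the previous text; a final result stops processing,
// mirroring how the result handler above tears down the task when isFinal is true.
func applyUpdates(_ updates: [RecognitionUpdate]) -> (text: String, finished: Bool) {
    var text = ""
    var finished = false
    for update in updates {
        if finished { break }
        if let t = update.transcription {
            text = t
        }
        if update.isFinal {
            finished = true
        }
    }
    return (text, finished)
}
```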

Conform to the delegate.

This delegate method calls startRecording() when the recognizer becomes available:

```swift
func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
    if available {
        startRecording()
    } else {
        print("Speech recognizer not available")
    }
}
```

Thanks @Umesh Verma. I'd like the Objective-C version — could you please share it? It would be very useful for me. – Nithya
