I’m trying to use speech recognition to do a few things, but specifically to detect & understand words while iTunes is playing. I need to set the Speech preferences to “Listen continuously with keyword”; the “Keyword is optional before commands” setting is not a viable choice here.
What I would like to do is to detect when the Speech Recognition Server hears the proper keyword & activates the microphone.
Once the system activates the microphone, I want to launch an applescript to mute the iTunes player. (This part is easy, of course.)
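For anyone following along, the mute itself really is a one-liner, since iTunes exposes a mute property in its scripting dictionary:

```applescript
-- Mute iTunes without pausing playback
tell application "iTunes" to set mute to true

-- ...and restore the sound later
tell application "iTunes" to set mute to false
```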
Can anyone help me locate a boolean that I can access to tell me when the system has heard & recognized the proper keyword & activated the microphone?
Thanks very much.
BTW, my system details are included in the system info.
Model: 2.33 GHz MacBook Pro (17")
AppleScript: Script Editor 2.2.1
Browser: Firefox 3.6.3
Operating System: Mac OS X (10.5)
It depends on the distance between your speakers and the microphone. I have a media center that I control with speech recognition. The microphone picks up sound wonderfully when there is no background noise. In fact, I can be in another room and loudly relay my command (not yelling) and it will pick it up. When I watch a show, I move the mic closer to myself and place a cone over the input that funnels my voice into the mic while deflecting the sound from the television.
Short answer: there is no software or programming method to accomplish this. Between hardware and ingenuity, though, you can rig something up that works.
Just wanted to add a few things to my previous post to touch on your other requests.
You can turn on “Upon recognition, play this sound.” With this you will get a “ding” (or whatever sound you choose) acknowledging every command the system picks up. Underneath the Speech oval that appears when you relay a command, a small “soft window” will also appear showing the exact acknowledgement.
You can create speech commands that will lower, raise, mute, or maximize the volume. Note that they will only work if your voice reaches the system over the music, but when you can’t find the remote and there is a quiet moment, these commands work beautifully.
-- mute
set volume output volume 0
-- full volume
set volume output volume 100
-- lower by 10
set out_vol to output volume of (get volume settings)
set volume output volume (out_vol - 10)
-- raise by 10
set out_vol to output volume of (get volume settings)
set volume output volume (out_vol + 10)
Also, on my MacBook Pro, speech commands are only picked up about 6 times out of 10, regardless of background noise, whereas my Mac mini media center, with an external USB AK5370 mic, will pick up commands a good 9 times out of 10 (it’s actually better than that, but I’m leaving room for error).
Both you & Peter are on the right track for the optimum solution: separate the music output from the sound input. A directional mic input and/or external speakers would do great.
But I’ve got to allow for the non-optimal condition as well, which is no external mic & no external speakers.
By experimenting, I find that I can get the system to acknowledge a keyword if I simply say it during a quiet moment during the music.
I was trying to tap into the boolean bit that tells the system the keyword has been spoken and that it should try to interpret spoken commands. I’d watch for this bit to change (if I knew where to find it) and use it to trigger a script that mutes the sound. A nice, elegant solution.
Here’s an alternate (perhaps better) way to do this that doesn’t require finding this “keyword spoken” bit.
Turn off all command-set responses on the Commands panel of the Speech prefs (i.e., Address Book, Global Speakable Items, Application Specific Items, etc.). I don’t want the system responding to any generic commands while this routine is running.
Set the Keyword to be “optional”.
As soon as any correct command is heard (from a “listen for” list), the sound goes immediately to mute, so that the system can accurately interpret the rest of the commands.
After the routine is finished, reset all the speech prefs to their initial values.
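As a rough sketch of that idea: the SpeechRecognitionServer application can be scripted directly with a “listen for” command. The phrases below are made-up placeholders, and the exact parameter names may differ on your system, so treat this as a starting point rather than a working routine:

```applescript
-- Sketch: wait for one command from a "listen for" list, mute immediately, then act.
-- "computer mute" and "computer resume" are placeholder phrases, not real commands.
tell application "SpeechRecognitionServer"
	set heard to listen for {"computer mute", "computer resume"} giving up after 30
end tell

-- Mute as soon as anything on the list is recognized
tell application "iTunes" to set mute to true

if heard is "computer resume" then
	tell application "iTunes" to set mute to false
end if
```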
I’ve read that people recommend changing plist files directly instead of GUI-scripting the preferences like this.
Unfortunately, for all of my searching, I’ve not been able to find the plist files that would allow me to change these selections.
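One place to start looking, purely as a sketch: the speech recognition settings appear to live in a per-user preferences domain that you can dump with defaults. The domain name below is a guess based on what sits in ~/Library/Preferences, and I don’t know the actual key names, so read before writing anything:

```shell
# Dump the speech recognition prefs to see which keys exist.
# The domain name is an assumption -- check ~/Library/Preferences
# for the actual .plist file on your system.
defaults read com.apple.speech.recognition.AppleSpeechRecognition.prefs

# Once you know a key's real name and type, you could toggle it, e.g.:
# defaults write com.apple.speech.recognition.AppleSpeechRecognition.prefs SomeKey -bool false
```

Note that on 10.5 a running process may not notice a plist change until it is relaunched, which could complicate toggling these while the recognition server is active.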
Anyone have any suggestion?
When I try to do this with GUI scripting, I run into some bizarre behavior.
Below is a script that opens the Speech preference pane, selects the Commands tab, and then attempts to change the status of the “Application Specific Items” check box.
-- Open the Speech preference pane
tell application "System Preferences"
	set the current pane to pane id "com.apple.preference.speech"
	reveal anchor "SpeechRecognition" of pane id "com.apple.preference.speech"
end tell

tell application "System Events"
	tell process "System Preferences"
		-- click the Commands tab
		click radio button "Commands" of tab group 1 of tab group 1 of window "Speech"
		-- Note that the "Address Book" row is highlighted when the pane first opens.
		-- checkbox 1 of row 1 is "Address Book"
		-- checkbox 1 of row 2 is "Global Speakable Items"
		-- checkbox 1 of row 3 is "Application Specific Items"
		click checkbox 1 of row 3 of table 1 of scroll area 1 of tab group 1 of tab group 1 of window "Speech"
	end tell
end tell
When I run it, the Commands tab opens with the “Address Book” item highlighted (in gray). No matter which check-box row I tell it to change, only the highlighted row’s check box actually changes; the proper check box flickers momentarily but doesn’t change status.
If I change the highlighted line from Address Book to some other choice & re-run the script, then whichever line is highlighted will have its check box clicked.
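Given that only the highlighted row seems to respond, one thing worth trying (untested speculation on my part) is selecting the target row first, then clicking its check box:

```applescript
tell application "System Events"
	tell process "System Preferences"
		-- Select row 3 ("Application Specific Items") so it becomes the
		-- highlighted row, then click its check box
		select row 3 of table 1 of scroll area 1 of tab group 1 of tab group 1 of window "Speech"
		click checkbox 1 of row 3 of table 1 of scroll area 1 of tab group 1 of tab group 1 of window "Speech"
	end tell
end tell
```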
Curiouser & curiouser…
All told, I’d really prefer to learn to change the plist files directly.