Hi -
I’ve built a utility which I’m tentatively calling “MakeItSo” that will be of interest to any AppleScripter who has wanted to do something substantial with Apple’s speech recognition engine. As you know, you can already call “listen for” in your scripts. But “listen for” was intended to be a minimalist speech command interface and while fun, it is very, very limited.
I’ve built a Cocoa application that allows you to define interesting command grammars which employ recursive language models and associate the models with AppleScript handlers. Here’s a specific example - let’s say I want to be able to say “play something from classical guitar”, where “classical guitar” is the name of one of my playlists. Using MakeItSo, I quickly create a language model that contains an embedded model: “play something from <playlist>”. Then I create my AppleScript handler called play_from_playlist, which takes an AppleScript record as its only parameter. I use MakeItSo to link the model and the script. At recognition time my handler is called with a record containing the recognized phrase and the bindings of all of the embedded language models. E.g.: {recognized_phrase: “play something from classical guitar”, playlist: “classical guitar”}
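To give you a feel for it, a handler for that example might look roughly like the sketch below. The record keys come straight from the example above; the iTunes tell block is just my assumption about one way you’d act on the playlist binding, not part of MakeItSo itself:

```applescript
-- Hypothetical handler linked to the "play something from <playlist>" model.
-- MakeItSo calls this with a record of the recognized phrase and the
-- bindings of the embedded models.
on play_from_playlist(bindings)
	-- Pull the playlist name bound by the embedded <playlist> model
	set the_playlist to playlist of bindings
	tell application "iTunes"
		-- Start playback from a track in that playlist
		play (some track of playlist the_playlist)
	end tell
end play_from_playlist
```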
I think you see the power in this.
I’ve also built a Cocoa framework that allows you to add this functionality to your Cocoa applications. I suspect there is a way to link to my framework in ASStudio but I am not a Studio user at the moment.
So, if you’re a scripter interested in speech recognition and brave enough to give me feedback on this tool - drop me a note and I’ll point you to more information and a download site.
Cheers,
Tom Bonura