Speakable A.I. Tiger AppleScript App

I’ve begun working on an A.I. AppleScript app.

The way I’m envisioning it right now, the program would take an input sentence and use AppleScript to treat the contents of Dictionary.app (using the Dictionary as a dictionary) as the lexicon: the lexicon against which input is matched, as well as the lexicon from which responses are constructed.
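The matching half could be prototyped something like this. Just a sketch of the idea: Dictionary.app doesn’t expose its contents to AppleScript as far as I can tell, so this assumes the lexicon has already been pulled out into a plain list:

property lexicon : {"hello", "tiger", "speech", "dictionary"}

set input_sentence to "Hello from Tiger"
-- split the sentence into words
set AppleScript's text item delimiters to space
set input_words to text items of input_sentence
set AppleScript's text item delimiters to ""
-- keep only the words found in the lexicon
set matched_words to {}
repeat with w in input_words
	if (contents of w) is in lexicon then
		set end of matched_words to contents of w
	end if
end repeat
matched_words --> {"Hello", "Tiger"}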

Initially, I’d envisioned creating an XML-based language (which I’m calling SPKML) that would at first be based loosely on the AIML language…
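To give a rough idea, a single SPKML category might look something like this; the markup is purely hypothetical, since nothing is specified yet, and it just borrows AIML’s category/pattern/template structure:

<spkml version="0.1">
	<category>
		<pattern>HOW ARE YOU</pattern>
		<template>I'm fine, thanks for asking.</template>
	</category>
</spkml>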

It would be cool to see that kind of hybrid created with AppleScript, allowing for speech recognition, text-to-speech, and learning by reading from and generating compliant XML data with new knowledge…
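For the generating side, plain Standard Additions file commands would probably do. A rough sketch, with the file name and markup made up for illustration:

set new_category to "<category><pattern>GOODBYE</pattern><template>See you later.</template></category>"
set spkml_file to ((path to documents folder as text) & "knowledge.spkml")
set f to open for access file spkml_file with write permission
try
	-- append the new "knowledge" to the end of the SPKML file
	write new_category & return to f starting at eof
	close access f
on error
	close access f
end try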

Chris Johnson
New Media Artisan

chris@spkml.com

Keep us posted. Sounds very ambitious.

Hi,

You can use SpeechRecognitionServer. Here’s a simple example:


property phrase_list : {"hello", "how are you?", "goodbye"}
property response_list : {"greetings", "I'm fine", "see you later"}
--
-- listen once for one of the known phrases
tell application "SpeechRecognitionServer"
	set the_phrase to (listen for phrase_list with prompt ¬
		"say a phrase from the list" displaying phrase_list)
end tell
-- find the index of the recognized phrase...
set i to 0
repeat
	set i to i + 1
	if item i of phrase_list is the_phrase then exit repeat
end repeat
-- ...and speak the response at the same index
say (item i of response_list)

I don’t know how to use the Dictionary.app.

Edited: the script.

gl,

I recently found that you can also use do shell script and do JavaScript, which should allow for the inclusion of other scripts like the Program E implementation of the A.L.I.C.E. chat bot… The do JavaScript command could come in handy: you could crack open the Dictionary.wdgt, yank out the .js file, mod it, and use it with Program E as the lexicon of words instead of just the AIML files…
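For the do shell script side, something like this might work to hand an utterance off to a local Program E install over HTTP. The URL and parameter name here are guesses; they’d depend on how Program E is set up:

set user_input to "hello"
set post_data to "say=" & user_input
-- curl posts the utterance and returns the bot's reply as text
set bot_reply to do shell script ¬
	"curl -s -d " & quoted form of post_data & " http://localhost/programe/chat.php"
say bot_reply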

Indeed, very ambitious, but without a representation for semantics, I doubt that you will get farther along than a fun prototype. Understanding an utterance, written or spoken, requires far more than a word list.

Also, WRT the AS example by kel, this use of “listen for” is a “one shot” kind of thing: you listen for a phrase and then the recognizer goes away. What you probably want to use is “listen continuously for”, since that keeps the recognizer open until you stop it. However, it too has problems, particularly in timing out; you can’t just leave it running.

There is the option of using AppleScript Studio and calling out to Cocoa’s NSSpeechRecognizer class to do the dirty work. It is simple but lacks the depth that the Carbon API brings.

Tom

Hi Tom,

If you want the recognizer to stay open continuously and not time out, you can use an error handler to trap the timeout error, e.g.:


property phrase_list : {"hello", "how are you?", "goodbye", "quit"}
property response_list : {"greetings", "I'm fine", "see you later", "quitting"}
-- 
set the_phrase to ""
repeat until the_phrase is "quit"
	try
		-- "listen continuously for" keeps the recognizer open; the identifier
		-- lets us shut it down explicitly later
		tell application "SpeechRecognitionServer"
			set the_phrase to (listen continuously for phrase_list with prompt ¬
				"say a phrase from the list" displaying phrase_list with identifier "stop")
		end tell
		-- find the index of the recognized phrase and speak the matching response
		set i to 0
		repeat
			set i to i + 1
			if item i of phrase_list is the_phrase then exit repeat
		end repeat
		say (item i of response_list)
		if the_phrase is "quit" then
			tell application "SpeechRecognitionServer"
				stop listening for identifier "stop"
			end tell
		end if
	on error
		-- the recognizer timed out; clear the phrase and start listening again
		set the_phrase to ""
	end try
end repeat

gl,