It is not bad, but it depends. See my last two paragraphs for a summary.
The “extra” overhead usually comes down to having to create new processes.
Normal Mac OS X apps occupy only a single process, though some use a small handful of processes. The difference is that an app’s processes are (usually) created only once, when the app launches, and they stick around until the app quits (when the user activates the Quit menu item (or closes the window in single-window apps), when AppleScript sends a quit command, when the app crashes, etc.).
To run a script, do shell script needs to create a new process for the shell, and then the shell will usually need to create one or more processes for the commands it executes. So the degree of extra overhead really depends on the shell script you are running, but there is a bare minimum of one extra process (if all of the script’s commands are built into the shell) and usually at least two (e.g. if the script runs a single instance of a shell tool like sed, awk, perl, python, ruby, etc.).
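For example (a rough sketch; exact process counts can vary a little depending on how the shell handles pipelines and subshells):
[code]-- one extra process: just the shell itself (echo is a shell builtin, so no other program is launched)
do shell script "echo 'hello'"

-- at least two extra processes: the shell plus tr
do shell script "echo 'hello' | tr a-z A-Z"[/code]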
Since many Unix-ish systems are designed around “small tools” (each in its own process), these systems tend to be quite good at creating extra processes. Folks working on porting Unix software to Windows are often plagued by process creation overhead because Windows is (reportedly) horrible at Unix-style (“fork + exec”) process creation. With Mac OS X, we are not so badly off. Process creation is probably not as fast on Mac OS X as with a Linux or FreeBSD kernel, but it is still fairly “cheap” (Mac OS X uses some parts of the FreeBSD OS, but the kernel is different). Extra processes are not, however, free:
[code]bash-4.0$ time bash -c 'jot 100 | while read num; do echo foo | perl -pe "y/a-z/A-Z/"; done > /dev/null'
real 0m2.621s
user 0m0.528s
sys 0m1.518s
bash-4.0$ time bash -c 'jot 100 | while read num; do echo foo | sed "y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/"; done > /dev/null'
real 0m2.575s
user 0m0.442s
sys 0m1.599s
bash-4.0$ time bash -c 'jot 100 | while read num; do echo foo | tr a-z A-Z; done > /dev/null'
real 0m1.800s
user 0m0.273s
sys 0m1.224s
bash-4.0$ time bash -c 'jot 100 | while read num; do v=foo; echo ${v^^}; done > /dev/null'
real 0m0.137s
user 0m0.031s
sys 0m0.038s[/code]
The scripts “uppercase” 100 "foo"s into "FOO"s, but they do so in a way that uses lots of extra processes. With perl and sed, it took about 2.5 seconds (and 102 processes: one bash, one jot, and one hundred perl/sed processes). Using tr in the same way took only 1.8 seconds (tr is a much smaller, dedicated tool; it is faster to load). But keeping the operation in the shell (using only two processes) took only 0.137 seconds for the same work (the bash I was using is from MacPorts; the default bash (version 2.05) on my system does not understand the ${parameter^^} syntax I used). So, it was an order of magnitude slower to use lots of extra processes. But even the slowest was still only a few seconds, which may be fast enough for some purposes. Plus, that was a dumb way to do the operation: pulling the tool (perl/sed/tr) outside the loop reduces the process count to just 3 and speeds things up dramatically (0.212, 0.173, and 0.133 seconds respectively; all nearly as fast as keeping everything in the shell with only 2 processes: saving a single process does not give much benefit, but saving 99 processes is meaningful).
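To make that restructuring concrete, here is roughly what the “tool outside the loop” version looks like when wrapped in do shell script (a sketch, not exactly what I timed above; the timings were taken directly in bash):
[code]-- only a handful of processes total (the shell, jot, and tr) instead of 100+:
-- the loop only uses echo (a builtin), and tr processes the whole stream once
do shell script "jot 100 | while read num; do echo foo; done | tr a-z A-Z"[/code]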
There is also memory overhead, but it is fairly small. The shell tools generally do not take much memory, but since many of them are programming languages themselves, they can use up all your memory if they are given memory-hogging programs. Caching does have an effect on repeated use of shell tools, but it is not as dramatic as the typical AppleScript scenario of “the app loads slowly once, then you can tell it to do many things fairly quickly”.
If you are sending or receiving large chunks of text to a shell tool you might run into “I/O” overhead. The issue here is that do shell script has to convert between the (typically) UTF-16 internal text representation used by AppleScript and the UTF-8 representation normally used at the shell level. This can really add up if you are pumping large amounts of data into or out of a shell tool.
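For instance, in something like the following (the string here is just a stand-in for whatever large text you might really be handling), all of the text gets converted on the way in and the result gets converted again on the way back out:
[code]set bigText to "imagine megabytes of text here" -- stand-in for real data
-- every character crosses the UTF-16/UTF-8 boundary twice: once going into
-- the shell (via quoted form) and once coming back as the result
set upperText to do shell script "echo " & quoted form of bigText & " | tr a-z A-Z"[/code]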
It really comes down to what you need to accomplish. Using do shell script can be faster than either plain AppleScript or AppleScript directing an app/OSAX (the RPC overhead of AppleEvents can really add up, too), but it really depends on the data and the work that needs to be done. Sometimes AppleScript with a specialized app/OSAX will be faster, sometimes plain AppleScript (no apps, no OSAX) will be faster.
My recommendation is to “go with what you know”. If you are familiar with the shell tools, it might be faster to develop a working solution using do shell script. If you already know that FooApp and BarApp have the AppleScript commands you need, then it will probably be faster to develop that solution. Once you have a working solution (hopefully one that was quick and easy to develop), you can test to decide if it is too slow. If it is, then see if it can be optimized, or explore other solutions. If you have no idea how to solve a problem, then you might need to consult someone with more expertise (they will have the knowledge/experience to pick a particular implementation; at the very least, they will probably have a preferred solution/tool).