Curl?

I would like a backup script that, when it runs, opens a few URLs, reads the source into a text file, and saves it in a folder named with the day's date.

url1 http://www.site.com/dbbackup.asp?username=xxx&password=xxx
url2…

I would like these to be saved something like this: desktop > backups > db050612 > site_com050612.sql

I will also do an IP check on the ASP page so it can't spill its guts to just anyone.

I think I found that I should use something called cURL, but that's all I know right now. Does anyone have any more clues?

The next step is running the script every day at noon or something.

Here is a quick example:

do shell script "curl http://www.google.com/ -o `date '+/Users/username/Desktop/backups/db%y%m%d/site_com%y%m%d.sql'`"

Type “man curl” in a Terminal window for the full documentation.
And there are lots of options to schedule it… You could even create a cron task with these shell statements, with no need for AppleScript at all…
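For example, a crontab entry along these lines (just a sketch; the path and username are made up, and note that % must be escaped as \% inside a crontab) would run the download every day at noon:

0 12 * * * /usr/bin/curl --create-dirs -o "/Users/username/Desktop/backups/db`date +\%y\%m\%d`/site_com`date +\%y\%m\%d`.sql" "http://www.site.com/dbbackup.asp?username=xxx&password=xxx"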

I had this:

set the_HTML to my get_url("http://www.5clubs.com/backup/?username=xxx&password=xxx")

on get_url(this_url)
set browser_string to "'Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-us) AppleWebKit/125.4 (KHTML, like Gecko) Safari/125.9'"
return do shell script "curl " & (quoted form of this_url) & " -A " & browser_string
end get_url

set shortDate to do shell script "date +%y%m%d"

But that seems far easier.

What is the -o? It gives an error.

The curl man page was nice, but there is a strange problem: it only prints as much text as fits in the window, and I get no scrolling. When I make the window bigger I get more info.

As you can tell, this is about the first time I have opened the Terminal…

Another problem is that it is not creating a new file; I have to create the file first. But now I get access denied on this:

do shell script "curl -o http://www.5clubs.com/backup/?username=xxx&password=xxx `date '+/Users/pontusuggla/Desktop/backups/test.txt'`"

But I get no error on the -o now.

I get the same problem with all man pages.

The man page for cURL is available here: curl - How To Use. This page is also useful: curl - Tutorial.

Best wishes

John M

You “scroll” man pages using the space bar, and you exit them by pressing “q”.
Can you post your not-working code based on my first example? (Anyway, your code is much the same as mine; both approaches are valid.)
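That said, one thing stands out in the curl line you posted above: -o takes the next argument as the output filename, so as written curl is being told to save to a file named like your URL, and the unquoted & makes the shell run the command in the background. Something closer to this should work (a sketch; I am assuming the backups folder already exists, and the backticked date call is not needed when the path contains no % patterns):

do shell script "curl -o /Users/pontusuggla/Desktop/backups/test.txt 'http://www.5clubs.com/backup/?username=xxx&password=xxx'"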

I found this

do shell script "curl http://www.5clubs.com/backup/ | bbedit"

which opens the source in BBEdit, and from there I could tell BBEdit to save it where I like. But I don't have BBEdit on the server I have in mind (though I do have TextEdit). So what I need now is how to create a plain text file with a name like (domain)050615.sql.

set fileName to do shell script "date +%y%m%d"
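From there, one way to create the file (a sketch; I am assuming a backups folder already exists on the desktop, and the site name is hard-coded just for illustration) is to build the name in AppleScript and let open for access create it:

set fileName to do shell script "date +%y%m%d"
set filePath to ((path to desktop) as text) & "backups:5clubs_com" & fileName & ".sql"
-- open for access creates the file if it does not exist yet
set fileRef to open for access file filePath with write permission
close access fileRef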

In ASP I would use an array for the URLs, since this will become a loop:

arrUrl = Split("5clubs.com;apa.se", ";")

For i = 0 To UBound(arrUrl)
    ' code using "http://www." & arrUrl(i) & "/backup/"
Next

How do I build this in AppleScript?
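A rough AppleScript equivalent (a sketch; the loop body is just a placeholder) uses text item delimiters for the Split and a repeat for the For…Next:

-- split a semicolon-separated string into a list (like ASP's Split)
set AppleScript's text item delimiters to ";"
set urlList to text items of "5clubs.com;apa.se"
set AppleScript's text item delimiters to ""

-- loop over the items (like ASP's For ... Next)
repeat with domainName in urlList
    set full_url to "http://www." & (contents of domainName) & "/backup/"
    -- use full_url here, e.g. hand it to doBackup()
end repeat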

This is what I have now.

global pParentFolder, full_url

on idle theObject
   doBackup()
   -- return 24 * hours
   return 10 -- run every 10 seconds just for now
end idle

on run
   set pParentFolder to choose folder with prompt "Choose a folder where to save the backup files"
   set full_url to text returned of (display dialog "URL: " default answer "http://www.5clubs.com/backup/")
   doBackup()
end run

to doBackup()
    -- timestamp for the file name, e.g. 20050615-120000
    set dateTime to do shell script "date +%Y%m%d-%H%M%S"
    set new_file to dateTime & ".sql"
    
    tell application "Finder" to make new file at pParentFolder with properties {name:new_file}
    
    -- fetch the page source
    set sHtml to do shell script "curl " & (quoted form of full_url)
    
    -- coerce the folder alias to text before appending the file name
    set this_file to (pParentFolder as text) & new_file
    
    my write_to_file(sHtml, this_file, false)
end doBackup

on write_to_file(this_data, target_file, append_data)
   try
       set the target_file to the target_file as text
       set the open_target_file to ¬
           open for access file target_file with write permission
       if append_data is false then ¬
           set eof of the open_target_file to 0
       write this_data to the open_target_file starting at eof
       close access the open_target_file
       return true
   on error
       try
           close access file target_file
       end try
       return false
   end try
end write_to_file

The next step is making it work for more than one site, and also making it download only the tables of my choice. I was thinking of saving the URLs in a text file and then storing all the table names after each URL, separated by a delimiter (semicolons in the example below). If I build a page that spills only the table names, and have a function where the script gets the info and stores it on a row, maybe I can just edit the list in the text file. Something like this:

http://www.site.com/backup;tblAdmin;tblColors;tblMembers
http://subdomain.site/backup;tblStats;tblPictures
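Here is how the script might parse such a file (a sketch; the config file path is made up, and each line is assumed to hold the URL first, then the table names):

set configFile to ((path to desktop) as text) & "backup_config.txt" -- hypothetical location
set configLines to paragraphs of (read file configFile)
repeat with configLine in configLines
    if length of configLine > 0 then
        set AppleScript's text item delimiters to ";"
        set lineParts to text items of configLine
        set AppleScript's text item delimiters to ""
        set backup_url to item 1 of lineParts
        set tableNames to rest of lineParts -- everything after the URL
        -- fetch backup_url here, passing tableNames to the backup page
    end if
end repeat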

What do you think?