what CAN or CAN'T CURL do?

I just need an explanation here. I am writing a script that opens a secure page in Safari, populates a form with FileMaker data, and submits it. The resulting page is populated with fields that the server chooses based on the previously submitted data. These fields then get populated and the process continues on.
This continues for about 8 form submissions until a final result page is returned.
Right now, when any page is returned from the server, a handler looks through the source to determine which fields or errors are present on the new page, and then populates the page’s fields with new data. It seems this session/customer is tracked by the server throughout the entire process.

Can CURL do this?
What about secure pages?
How are the results presented?
If cookies are used to track me (the user), does curl work with them?
What about if the current Safari page has JavaScript on it doing junk… validations, etc.?
What about how the server tracks the session/customer through the process?
Why would I want to change to using CURL?

Thanks in advance, and sorry for all the questions… I see examples of curl but I need to know what it can do.

“Can CURL do this?”
curl can do just about anything a browser can do. The problem is that you have to tell it to do some things manually… like storing/sending cookies, sending passwords, emulating a browser or referrer, reading page source, etc. If you already have much of the code to do this then you may be at an advantage. curl does not do anything ‘automatically’, though; it is only a mechanism for retrieving data… how you interpret that data will still need to be handled by your script.
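To make that concrete, here is a sketch of a single form submission with the “manual” pieces mentioned above spelled out as flags. The URL, field names, and User-Agent string are placeholders, not a real service:

```shell
# --cookie / --cookie-jar : send and store cookies (same file = full session)
# --user-agent            : emulate a browser
# --referer               : fake the referring page
# --data                  : POST the form fields (also switches the method to POST)
curl --silent \
     --cookie cookies.txt --cookie-jar cookies.txt \
     --user-agent "Mozilla/5.0 (Macintosh)" \
     --referer "https://www.example.com/start.html" \
     --data "field1=value1&field2=value2" \
     "https://www.example.com/form.cgi"
```

The command prints the raw source of the page the server returns; your script still has to parse it.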

“What about secure pages?”
Yes, curl can handle secure pages. There are some hoops you may need to jump through depending on which SSL version the server uses, but I’ve found it to be possible under most circumstances.
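The usual “hoops” look something like this (the URL is a placeholder):

```shell
# By default curl verifies the server certificate against its CA bundle:
curl --silent "https://www.example.com/login.html"

# If the server uses a self-signed certificate, point curl at the cert
# with --cacert, or (less safely) skip verification entirely:
curl --silent --insecure "https://www.example.com/login.html"

# Some older servers only speak one particular SSL/TLS version, and curl
# can be told to match it, e.g.:
curl --silent --tlsv1 "https://www.example.com/login.html"
```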

“How are the results presented?”
What results? Essentially, curl works with the source code and back-end features… skipping the step where a browser would show you a rendered HTML page. As I said, curl only gets data… whether a list of files (ftp) or the source code of a page (http)… it knows nothing of what that data ‘means’.
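In practice you just capture what curl returns into a variable or a file and parse it yourself. A self-contained sketch (a file:// URL stands in for your https:// form page so the example needs no server):

```shell
# Fake a "server response" with a local file:
printf '<html><body>done</body></html>' > /tmp/result.html

# Capture the raw source into a shell variable...
page=$(curl --silent "file:///tmp/result.html")
echo "$page"    # prints: <html><body>done</body></html>

# ...or save it to a file for later parsing:
curl --silent "file:///tmp/result.html" -o /tmp/saved_source.html
```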

“If cookies are used to track me (the user), does curl work with them?”
Yes, curl can handle cookies, both automatically and manually. You can capture them and interpret them as strings, or create a ‘netscape-style’ cookie file and have them received and sent automatically.
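Both styles look like this (URLs and the session value are placeholders):

```shell
# Automatic: -c (--cookie-jar) writes cookies the server sets into a
# netscape-style file; -b (--cookie) sends them back on later requests.
# Using the same file for both makes curl behave like a browser:
curl --silent --cookie cookies.txt --cookie-jar cookies.txt \
     --data "a=1" "https://www.example.com/step1.cgi"
curl --silent --cookie cookies.txt --cookie-jar cookies.txt \
     --data "b=2" "https://www.example.com/step2.cgi"

# Manual: pass a cookie as a literal string instead of a file:
curl --silent --cookie "SESSIONID=abc123" "https://www.example.com/step3.cgi"
```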

“What about if the current safari page has JavaScript on it doing junk… validations etc.”
This can get a bit tricky. If you are proficient in JavaScript, you should be able to work out from the source code what all the JavaScript is doing and reproduce it in your script.

“What about how the server tracks the session/customer through the process?”
This is not really anything you have control over. Typically servers will handle tracking in one of two ways. They will either store information about you on the server in a database… like your IP address, session history, etc… and track that with URL-encoded session IDs and/or cookies… or they will simply dump cookies containing lots of data onto your machine and interpret them as you go along, without any server-side database interaction. If they use only cookies, you can pretty easily use AppleScript to send and retrieve any cookies they set. Some more complex server configurations will present more hoops to jump through, but you’ll still probably be able to figure them out.
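When the session ID travels in the page itself rather than in a cookie, you pull it out of the source and echo it back on the next request. A sketch, where the field name “sessionid” and the HTML snippet are made up for illustration:

```shell
# Pretend this is the source curl just returned:
page='<input type="hidden" name="sessionid" value="abc123">'

# Extract the value of the hidden field:
sid=$(echo "$page" | sed -n 's/.*name="sessionid" value="\([^"]*\)".*/\1/p')
echo "$sid"    # prints: abc123

# ...then include it in the next POST:
#   curl --data "sessionid=$sid&field1=value1" "https://www.example.com/next.cgi"
```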

“Why would I want to change to using CURL?”
I use curl when I want to provide my own interface, rather than relying on controlling Safari to display it all. That is primarily driven by the fact that I typically develop AppleScript Studio apps rather than basic scripts. Honestly, if you have a working system that uses Safari, it can handle much of the data processing that you would otherwise have to do manually, and it eliminates the need to write all of the page-parsing routines yourself. I wrote a fairly complex login interface in AppleScript Studio that started from a discussion HERE. It accessed a server using an older version of SSL, used JavaScript, used ‘post’ method forms, and cookies. It worked very well, but was quite hard to set up initially.

With a script that spans 8 HTML pages, you should get a firm grasp of EVERYTHING that the server and your browser are doing before you start fooling with the script. Once you know everything about the process, and have evaluated every HTML page… both forms and error pages… you will probably have an idea of how big the project is and whether it’s worth the effort of writing all that custom HTML-parsing code.

If you decide to jump in, I can get you some more code that I’ve used and some hints on how to get around some of the obstacles I ran into. Also, check out the official curl Manual and ‘How to Use’ pages if you haven’t already for useful examples and syntax.

Good luck,
j