
Right then.

I have a Mac that checks a folder on an NT machine every minute; if there are any jobs, it picks them up, makes a list, and then processes each file in a repeat loop.
Problem: if the Mac crashes, everything stops and I am in trouble.

Could I use two Macs and change the scripts so that instead of picking up all of the jobs, each one just picks up the top job, moves it, and processes it? The other Mac could be running the same scripts, so when it checked, the top job would now be different. If one Mac goes down, the other one still runs. The question: how do I get a Mac to own a file so the other one does not try to take it?

I could copy across and delete the original, but the other Mac might be trying the same thing. I want it to say "HEY, LEAVE MY FILE ALONE, IT'S MINE. TAKE THE NEXT ONE."

I hope this makes sense.

Regards
Rick

The scripts could rename a file as soon as it's picked by adding a special prefix or suffix, and neither Mac would pick any file that already has that prefix or suffix.
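Something like this, roughly - the volume, folder, and prefix names here are all made up, so adjust them to your own setup:

set jobFolder to "NTServer:Jobs:"
set claimPrefix to "CLAIMED~"

set jobList to list folder (jobFolder as alias) without invisibles
repeat with i from 1 to count of jobList
    set jobName to item i of jobList
    if jobName does not start with claimPrefix then
        try
            -- The rename is the claim; if the other Mac renamed it
            -- first, this errors and we just try the next job.
            tell application "Finder" to ¬
                set name of file (jobFolder & jobName) to (claimPrefix & jobName)
            -- This Mac owns the job now; process the renamed file here,
            -- then move it out of the job folder when done.
            exit repeat
        on error
            -- Lost the race for this one; carry on down the list.
        end try
    end if
end repeat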

Maybe there are folks with more production experience who can give you a good, solid recommendation, but if it were me, dumped in the deep end scrambling for a solution, I'd probably use a shared logfile. When a Mac decides to take a new job, it first looks in the logfile to see which files are already in-process. Next it compares that against the contents of the job folder to find the first file not currently being processed, then logs that it is now working on that file and nobody else is to use it (e.g. job name, Mac name, time started, etc.). Once it's finished processing, it moves the job to a different folder and updates the logfile entry accordingly.
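Very roughly, and assuming made-up paths, a made-up Mac name, and a made-up log format (one tab-separated claim per line), the picking side might look like:

set logFile to "NTServer:Jobs:joblog.txt"
set jobFolder to "NTServer:Jobs:"
set macName to "Mac1"

set logRef to open for access file logFile with write permission
try
    try
        set loggedJobs to paragraphs of (read logRef)
    on error
        set loggedJobs to {} -- the log is still empty
    end try
    set jobList to list folder (jobFolder as alias) without invisibles
    repeat with i from 1 to count of jobList
        set jobName to item i of jobList
        -- Skip the log itself and anything already claimed.
        if jobName is not "joblog.txt" and not isLogged(jobName, loggedJobs) then
            -- Record the claim: job name, Mac name, time started.
            write (jobName & tab & macName & tab & ((current date) as string) & return) ¬
                to logRef starting at eof
            exit repeat
        end if
    end repeat
end try
close access logRef

on isLogged(jobName, loggedJobs)
    repeat with i from 1 to count of loggedJobs
        if item i of loggedJobs starts with jobName then return true
    end repeat
    return false
end isLogged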

Of course, that presupposes that opening a file for access via Standard Additions, when that file's on a remote NT volume, is going to ensure nobody else can access that file at the same time; otherwise it won't work. Plus you've the problem of what happens if a Mac crashes while it has the logfile open - who's going to close it again?
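On the script-error side at least, you can make sure the close always gets a chance to run, and clear any leftover access when the script starts up - a sketch, using the same made-up log path as above:

set logFile to "NTServer:Jobs:joblog.txt"

try
    close access file logFile -- leftover from an earlier run, if any
end try

set logRef to open for access file logFile with write permission
try
    -- ... read and update the log here ...
    close access logRef
on error errMsg number errNum
    close access logRef -- close even when something goes wrong
    error errMsg number errNum -- re-raise so the failure still shows up
end try

That only covers script errors, though; a hard crash still leaves the access open on the server until something clears it, which is exactly the weakness above.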

Plus you'll have to deal with jobs that were started but not finished due to a crash - but at least your logfile will let you track these pretty easily, so you can pick those jobs up and restart them.
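A rough recovery pass over that log might look like this, assuming the tab-separated claim lines sketched above and that a finished job gets "done" appended to its line; the one-hour limit is made up too:

set logFile to "NTServer:Jobs:joblog.txt"
set staleLimit to 60 * 60 -- one hour, in seconds

set staleJobs to {}
set logLines to paragraphs of (read file logFile)
repeat with i from 1 to count of logLines
    set theLine to item i of logLines
    if theLine is not "" and theLine does not contain "done" then
        set AppleScript's text item delimiters to {tab}
        set theFields to text items of theLine
        set AppleScript's text item delimiters to {""}
        set startedAt to date (item 3 of theFields)
        if ((current date) - startedAt) > staleLimit then
            set end of staleJobs to item 1 of theFields -- needs restarting
        end if
    end if
end repeat
-- staleJobs now lists the jobs to pick up and run again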

But it might be an approach worth considering if nobody else can recommend a better one.

HTH
has

It seems to me that you’ve got three possible statuses for a job residing on the server: “To-Be-Processed”, “In-Process”, and “Processed”. How about establishing two more folders on the server to represent the latter two states? Then, when you grab one or more files to a Mac, you move them to “In-Process” on the server. As each job completes, the Mac processing it moves it from “In-Process” to “Processed”. You could periodically check the “In-Process” folder by hand for stuck jobs, or create a stay-open script on a Mac to check the folder and alert you if any file is older than a preset limit.
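Here’s a sketch of that stay-open checker, saved as an application with “Stay Open” ticked. The folder name and limits are made up, and note that the dates travel with a moved file, so this measures the file’s age rather than its time in the folder:

property inProcessFolder : "NTServer:In-Process:"
property ageLimit : 3600 -- one hour, in seconds

on idle
    set stuckJobs to {}
    tell application "Finder"
        set theFiles to every file of folder inProcessFolder
        repeat with i from 1 to count of theFiles
            set theFile to item i of theFiles
            if ((current date) - (modification date of theFile)) > ageLimit then
                set end of stuckJobs to (name of theFile)
            end if
        end repeat
    end tell
    if stuckJobs is not {} then
        set AppleScript's text item delimiters to {return}
        display dialog "Stuck jobs:" & return & (stuckJobs as string) ¬
            buttons {"OK"} default button 1
        set AppleScript's text item delimiters to {""}
    end if
    return 300 -- check again in five minutes
end idle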

[3/8/02 1:38:39 PM]