Corruption from Excel Mac to Win

Hello!

I’ve got a unique problem. I’m not 100% sure this is the right place to post it, but it does involve AppleScript, and I think the fix could use AppleScript too. A little background first: I’ve created a platform for users to download a number of Excel workbooks. Once they download a workbook, they are unable to save it locally; when they try to save, a number of macros force the save to happen remotely, under their particular username and password. That all works fine across Office 2003 to 2010 for Windows.

Excel 2011 for Mac didn’t like this, though. So, using AppleScript, I pass the Excel workbook to curl to upload it to the FTP server. Again, all is fine.
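For reference, the AppleScript side is along these lines; the server address and login below are placeholders rather than my real details, and choose file just stands in for however the saved workbook’s path gets picked up:

set workbookFile to POSIX path of (choose file with prompt "Workbook to upload:")
-- Hand the local file to curl for a straight FTP upload.
do shell script "curl -T " & quoted form of workbookFile & " ftp://ftp.example.com/uploads/ --user username:password"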

But occasionally (I have not figured out the trigger yet), after saving on the Mac and then opening on Windows, the text in every single button in the workbook has become unrecognisable. For example, “P cK [ £{ «£C+s£K £+ PkË *Ã +c Å¡Æ’”+ #ºC++£ 2Kc+ s{»q" is what one button now says. Once it becomes corrupt on Windows, there is no way to go back! (Although it still appears fine on the Mac.)

Now, I have no idea why this is happening! Can anyone please shed some light? I first thought it might be something to do with the way the file is being uploaded. I’m under the impression that Mac OS X uses Unicode-based text while Excel uses ASCII, and that the two can sometimes get confused. But then, wouldn’t that affect all text, not just the text in buttons?

Would it help if I were to change the way I’m using curl, say upload via HTTP and set the data type to ASCII? Or something like that? I’m quite stuck!
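To give an idea of the kind of change I mean, a rough sketch (the host and the upload script below are made up, not my real setup):

set workbookFile to "/Users/me/Desktop/workbook.xls" -- placeholder path
-- HTTP upload as multipart form data to a (hypothetical) server-side script,
-- instead of the FTP upload above. For FTP, curl already transfers in binary
-- mode by default; -B / --use-ascii would switch it to ASCII mode instead.
do shell script "curl -F " & quoted form of ("file=@" & workbookFile) & " http://www.example.com/upload.php"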

If I can’t figure this out, I think I’m going to have to go through every worksheet in every workbook, and make every button an image instead- which would take dozens and dozens of hours!

Any help, or even any ideas or suggestions would be much appreciated!

Thank you in advance!

bump.

Any ideas, at all?

Hi,

I also encounter such problems from time to time during file transfers. Sometimes it helps to switch the upload mode (binary vs. ASCII), but most often simply zipping the file before uploading it solves the issue for me. After a successful upload, you can then unzip it on the server again. Yes, this complicates the workflow, but it helps to ensure data integrity. And zipping a file is just a one-liner with the do shell script command (use ditto or zip).
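For example, something like this (the workbook path is just a placeholder):

set workbookFile to "/Users/me/Desktop/workbook.xls"
set zipFile to workbookFile & ".zip"
-- zip -j stores just the file without its folder path; ditto -c -k would work too.
do shell script "zip -j " & quoted form of zipFile & " " & quoted form of workbookFile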

Best regards from Berlin,

Martin

Well, normally when working with Latin text, the encodings UTF-8, CP1252 and MacRoman only differ for characters above 127, i.e. outside the ASCII range. In a German word like ‘länge’ (length), the only character above 127 is ‘ä’, so that should be the only character that gets displayed wrong.
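To illustrate, the same letter has a different byte value in each encoding (UTF-8: C3 A4, CP1252: E4, MacRoman: 8A). A quick check from AppleScript, assuming your iconv recognises the CP1252 and MACROMAN names:

do shell script "printf 'ä' | xxd -p" -- c3a4 (UTF-8)
do shell script "printf 'ä' | iconv -f UTF-8 -t CP1252 | xxd -p" -- e4
do shell script "printf 'ä' | iconv -f UTF-8 -t MACROMAN | xxd -p" -- 8a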

When I see text like ‘P cK [ £{ «£C+s£K £+ PkË *Ã +c Å¡Æ’"+ #ºC++£ 2Kc+ s{»q’, it gives me some disturbing thoughts. In the past, well before multibyte characters, there was the problem of supporting different character sets. So different encodings were created, and those encodings also need matching fonts on top of them (the byte values stay the same). It now looks like you’ve used a font for the button names that doesn’t match the document encoding. I’m not sure what you’ve used, but there is a big chance that you will have to fix everything by hand.

Thanks for your replies!

I first modified my script to include the zipping, and extracting at the other end. That decreased the file size by 50-80%, which was great, but it did not solve the gibberish problem.

I also tried changing the font- no good.

I then tried inserting text boxes above the buttons and assigning a macro to both the button and the text box, but the text still got scrambled.

In the end, I made a hidden “images” sheet and referenced the images from it as camera images. It took a few days, but the file size didn’t really increase.

Thanks for your replies and time!

Jarrod.