Thursday, November 29, 2007

My SE/30 is alive!

Earlier this month, Miller informed me that Freecycle Seattle members can recycle their old machines for free, and indicated that we might be better off without extra old hardware floating around. I countered with the offer that I would get rid of any hardware that wouldn’t work (or wasn’t worth repairing/saving). I had mused earlier that it would be sad if my SE/30 didn’t work, and now doubly so. But I got it out of its long-term storage spot, turned it on, and got the blinking question-mark diskette icon! Well, all that means is that my hard drive isn’t spinning up any more. I was able to coax my System 7-era Disk Tools 3.5” diskette in (boy, the ejector sounds like it’s really having a difficult time working), and got it to boot.

I pointed it out to Miller, and we both agreed that it’s nice to see that some thematic elements that make a Mac a Mac are retained even to this day.

So, now that I’ve wiped off the crud that had grown(?) on it while in storage, is it worth trying to find someone to fix the hard drive? Or is it just something I should replace? Anyone have an Ethernet-to-old-skool-AppleTalk solution, and/or is it worth it to try to hook it up to the internet again?

Tomorrow, I get to see if my Duo 280 will boot in its docking station.

Multipartition Boot Camp

I have pretty much always lived with at least two OS partitions and one data partition, because I’m always testing some kind of pre-release OS or side-by-side testing app functionality on multiple OSes; thus I want to be able to share applications and application data between the two OSes via the data partition. The advent of Boot Camp got me to give up my standard operating procedure to test-drive running Windows on my laptop. It was great! Well, as great as running Windows is. I could have stayed there, even to the point of using MacDrive to treat my HFS+ Mac boot volume as a combination boot volume and data disk between Mac and Windows... but no, I got greedy.

Having looked first at this post, and then later at this hint, I decided to bring back multiple boot partitions. I tried using Disk Utility to resize the volumes, to no avail; I think it always corrupted the MBR that the Boot Camp utility set up. I ended up buying iPartition and playing around with it a bunch, but most of my edits would leave Windows unable to boot. However, I did manage to come up with an order of operations that works:
  1. Boot from your Leopard disk into its copy of Disk Utility.

  2. Format the entire disk as HFS+ (Journaled).

  3. Install Leopard.

  4. From Leopard, run Boot Camp.

  5. Resize the partition that Boot Camp suggests so that the Leopard partition ends up the size you ultimately want it to be, with Windows taking up the rest.

  6. At this point, instead of rebooting and installing Windows, I booted from an external USB drive with iPartition installed. (You could boot from the iPartition boot CD instead.)

  7. Using iPartition, I shrank the Windows partition to its ultimate size, and added after it two extra HFS+J partitions for my other OS partition and my data partition.

  8. I put in the Windows Vista RTM DVD and rebooted holding down “C” to install Vista. It saw the shrunken partition, let me reformat it, and installed.

[These instructions are specifically for starting from scratch. If you’re trying to do this with pre-existing volumes, I suggest backing them all up to an external device using Disk Utility or, better yet, Carbon Copy Cloner (and WinClone for Boot Camp partitions). You should then be able to restore them once the partitions have reached their final sizes. You may still have to boot from a Windows disk, if only to format the Boot Camp partition as NTFS (without bothering to install further), before WinClone can restore your backup.]

The only thing that is somewhat frustrating about this now is that, even though I can boot between Tiger, Leopard, and Vista, as far as Vista is concerned the drive (disk0) is an MBR drive with three main partitions: the EFI partition, the Leopard partition, and the Vista partition; the remaining space is “unused”. Disk Utility, on the other hand, happily lists the other partitions while booted into Mac OS X. This isn’t usually annoying, but MacDrive only sees and can mount the Leopard partition. (Mediafour claims that the partition maps “are incorrect or damaged beyond MacDrive’s ability to handle.”) So if I still wanted an available-to-Windows data drive, I’d have to back everything up and restart these instructions, making the original Leopard partition large enough to accommodate the data partition, and moving the eventual Leopard partition to the end. *sigh*
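One explanation worth knowing about, assuming Boot Camp is keeping a “hybrid” MBR in sync with the GPT: the MBR partition table, stored at the end of the disk’s first sector, has room for exactly four 16-byte entries, so at most four partitions can ever be visible to an MBR-only view of the disk. Here is a minimal sketch that builds and parses a synthetic MBR; all partition types, offsets, and sizes below are made-up example values, not read from any real disk.

```python
import struct

MBR_SIZE = 512
PART_TABLE_OFFSET = 446   # the 64-byte table sits just before the 2-byte signature
ENTRY_SIZE = 16

def make_entry(boot, ptype, start_lba, num_sectors):
    """Pack one 16-byte MBR partition entry (CHS fields zeroed for brevity)."""
    return struct.pack('<B3sB3sII', boot, b'\x00' * 3, ptype, b'\x00' * 3,
                       start_lba, num_sectors)

# A synthetic hybrid MBR roughly like the situation described above: an EFI
# protective entry, an HFS+ entry, an NTFS entry -- and only ONE slot left.
entries = [
    make_entry(0x00, 0xEE, 1, 409639),            # EFI protective (type 0xEE)
    make_entry(0x00, 0xAF, 409640, 195312500),    # HFS+ (type 0xAF)
    make_entry(0x80, 0x07, 195722140, 97656250),  # NTFS (type 0x07), bootable
    make_entry(0x00, 0x00, 0, 0),                 # empty slot
]

mbr = b'\x00' * PART_TABLE_OFFSET + b''.join(entries) + b'\x55\xAA'
assert len(mbr) == MBR_SIZE

def parse_mbr(data):
    """Return (type, start_lba, num_sectors) for each non-empty entry."""
    assert data[510:512] == b'\x55\xAA', 'bad MBR signature'
    parts = []
    for i in range(4):  # the MBR format has exactly four slots, period
        off = PART_TABLE_OFFSET + i * ENTRY_SIZE
        boot, _, ptype, _, start, count = struct.unpack(
            '<B3sB3sII', data[off:off + ENTRY_SIZE])
        if ptype != 0x00:
            parts.append((ptype, start, count))
    return parts

for ptype, start, count in parse_mbr(mbr):
    print('type 0x%02X  start %d  sectors %d' % (ptype, start, count))
```

With the EFI entry plus two OS entries filled in, only one slot remains, which would be consistent with additional GPT partitions appearing as “unused” space from the Windows side.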

If you all have better options, please let me know.

P.S. Since all this, yet another post describes how to do this and add Linux to the mix.

Wednesday, November 28, 2007

Cocoa-to-.NET bridge?

David also asked, “if a developer of a Cocoa application wanted to use a CoreCLR engine, but a Cocoa UI, would they just ‘load the CoreCLR.framework’ or is there more to it than that.”

There is no technical limitation in CoreCLR that would prevent calls from it into Cocoa (ObjC[++]) or vice versa, in the same way that Cocoa can call into C/C++ normally. The CoreCLR, like the desktop runtime, supports a hosting interface that allows .NET to be hosted in an application environment. Unlike the desktop runtime, CoreCLR is currently only ever hosted (e.g., in the browser control called Silverlight). You can ask the hosting interface to create a function delegate, which takes a managed function and turns it into a C-style function pointer; were you to call it, it would marshal all the arguments into the managed world and run the managed code. Beyond that, you could imagine creating something like the ObjC-Perl bridge, where managed objects are made visible directly to the ObjC runtime1.
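By way of analogy only: Python’s ctypes module has a documented equivalent of that delegate-to-function-pointer trick. This is not CoreCLR’s hosting interface (which isn’t public), just an illustration of the general mechanism of wrapping a managed/interpreted function so that C-convention callers can invoke it:

```python
import ctypes

# Declare a C function-pointer type: returns int, takes (int, int).
ADDER = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)

def add(a, b):
    # Ordinary "managed" code, as far as the C side is concerned.
    return a + b

# Wrap the function; c_callable now behaves like a C function pointer, and
# invoking it marshals the int arguments in and the int result back out.
c_callable = ADDER(add)
print(c_callable(2, 3))  # -> 5
```

In CoreCLR’s case the same shape of bridge would let ObjC code call into managed code through an ordinary C entry point, with the runtime doing the argument marshaling.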

That said, there are a couple of things that would stymie the average developer if they wanted to do this:

  • At the moment, only internal (i.e., Microsoft) clients of CoreCLR have access to the hosting interface.

  • The security model of CoreCLR, at least in the Silverlight timeframe, is changing so that only Microsoft-trusted libraries have access to sensitive OS operations; normal developers’ code would be sandboxed (much as it would be if it were running in Silverlight).

I don’t have too much insight as to whether these things might change in the future. However, there are a bunch of details that would have to be resolved first, e.g., how to ensure 3rd party CoreCLR users keep their CoreCLR serviced with the appropriate security fixes. If you’re interested, let your request be known in the feedback forums up on http://silverlight.net.
--
1Although, at this point, you’d end up with double garbage collection. If an ObjC object held a reference to a managed object and then lost its last reference, eventually the pool would be drained, which would release the (possibly) last reference to the managed object, which would in turn be collected the next time the CLR GC ran.

Trimming .NET

As I mentioned in a previous post, Silverlight 1.1 will have the CoreCLR component in it to provide a subset of .NET support to control WPF/E (as an alternative to JavaScript, which is available in the currently released Silverlight 1.0). Since David asked “how you go about selecting which parts of .NET (which is huge) you decide to port to the CoreCLR framework you are developing,” I ended up having to do a little research.

I joined the CLR team in August of ’06, and the work to “trim” the desktop runtime engine and frameworks down to the for-use-in-Silverlight CoreCLR runtime engine and frameworks had already been completed. Back in June(!)1 I talked with Jonathan Keljo, who was the program manager working on this problem, about what went on during this period.

Ultimately, we cribbed. Instead of trying to figure it out for our (CLR) selves, we looked at the pre-existing product that was already a slimmed-down version of .NET: the Compact Framework2. We examined their surface area and decided to see if we could match it. Where they had forgone features, we chose similarly. One of the many ways CF manages size is simply by limiting the number of convenience functions.

When we prototyped the CoreCLR surface area, we ripped out some stuff based on instinct: what were the biggest (in terms of data and code size) features? We took out the top few (fusion, COM Interop, server GC, debugging support3), and strangely enough, we were close to our size goals.

On the framework side, we had inherited some subsetting already from the Rotor project. Since Rotor’s platform adaptation layer (PAL) did not support all of the Win32 APIs the desktop CLR called to support some managed libraries, some of those managed layers were removed. In certain cases, we expanded upon the pre-existing PAL: it had been written to be simple and very cross-platform, but knowing we wanted to support the Macintosh, we could supplement it with Macintosh-specific implementations of certain important APIs so as not to have to subset too aggressively.

It’s not the cool-use-of-advanced-software-refactoring-principles answer I might have wanted, but it’s reflective of the environment we often find ourselves in–partially constrained by our desire to leverage previous work, and partially enabled by that same desire.

--
1Yes, I know I’m behind.
2I knew this was going to be a cop-out answer, which is why I waited for so long to post. Perhaps I’ll get a chance to interview someone from CF and figure out how they went about their trimming procedures.
3Debugging support isn’t so much removed from CoreCLR as split out from the main product so it can be downloaded separately as an SDK.

Friday, November 02, 2007

Back from parental leave

I took off on September 21st for what was going to be a six-week parental leave, scheduled to coincide with Miller returning to work, which in her case means teaching Arabic at the UW. Microsoft has a great benefit: one full month of leave is paid, though you can take up to three. I was planning on six weeks so I wouldn’t burn a lot of vacation days or go unpaid for too long, since we still haven’t sold the condo we moved out of when we bought a home at the end of March. Miller, the trooper that she is, managed the switch to the “working mom” stage rather well. I, on the other hand, managed very few of the many pet projects I had exuberantly planned for myself while staying at home and watching Mabry. (Let’s just say I was frequently interrupted.) In any case, the whole child care thing got settled, and I’ve been kicked back to work two weeks early.

After getting back to work, I made it a point to get out from under the heap of mail I’d accumulated over the course of several years before starting any new projects. Some 8,000 semi-read Inbox e-mails later, I now have only a 400-message “reviewed” folder, which will probably need another categorizing pass1. In some cases, it was a matter of realizing I was no longer (if in fact I ever was) in a position to affect some issues, and that, even though they may have been personally irritating, I had to pick my battles. I think I’m still going to have to figure out a better way to track conversations to ensure that things get handled correctly.

Among the pending things to do is to respond to some questions posed by David Weiss, and so I think I will do that now and have one less thing on my list…

P.S., Mabry has just turned three months and has been vocalizing at me as I’ve been writing this. Maybe “now” will be just after I play with her a bit. ☺
--
1During this purge, I think I stumbled onto what must be some O(n²)-or-worse algorithms in Entourage, as deleting large swaths of mail while it was visible in the UI would sometimes take ages.

My wife’s next OS won’t be Mac OS X…

… and I cannot help but think I am partially to blame, because this is no fault of Apple’s, but rather due to deficiencies of Mac Word.

Here I am, a guy who hasn’t been without a personal Mac since ’90 (and had regular access to them as early as ’87), a proponent of Mac software at Microsoft since joining in ’95, who has, through my own development efforts, made the Microsoft Office experience better on the Mac, and yet my wife is choosing Windows because of limitations of our Mac software1.

Admittedly, Miller’s particular requirement is one that is not shared by a large percentage of the users or potential users of Mac Office2–she needs to collaborate with her students, fellow TAs, and professors on electronic documents written in Arabic. This is by no means a new requirement for her. She began her Arabic study four years back as a requirement for her master’s degree in Comparative Religion, and decided to parlay her studies into a doctoral program in Arabic Studies.

The problem? Almost all work is done in Microsoft Word documents, and Microsoft Word for the Macintosh does not support any right-to-left languages (aka “bi-di”). If she works in Arabic in Mac Word, she gets completely disconnected characters3. She has unseated me many, many times at my Windows box in our office so that she can use Windows Word and actually get work done.
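For the curious, the “completely disconnected characters” symptom is a shaping problem: a layout engine must choose among several glyph forms per letter depending on its position in the word, and without that support every letter falls back to its isolated form. A small sketch using Python’s unicodedata module (purely illustrative; this is not how Word implements anything) shows the four positional forms Unicode catalogs for one Arabic letter:

```python
import unicodedata

# The letter beh (U+0628) is stored in text as one abstract character; a
# shaping engine picks a presentation form based on position in the word.
# Arabic Presentation Forms-B assigns each form its own compatibility
# code point, which makes the distinction easy to inspect:
forms = {
    'isolated': '\uFE8F',
    'final':    '\uFE90',
    'initial':  '\uFE91',
    'medial':   '\uFE92',
}

for position, ch in forms.items():
    # e.g. "initial  ARABIC LETTER BEH INITIAL FORM"
    print('%-8s %s' % (position, unicodedata.name(ch)))
    # every presentation form decomposes back to the same base letter U+0628
    assert unicodedata.decomposition(ch).split()[-1] == '0628'
```

An engine without contextual shaping effectively renders every character in its isolated form, which is why the words come out disconnected and hard to read.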

Having worked in the MacBU as a fellow developer, I had a great “in” to try and get this addressed. Unfortunately, every time I would bring it up, we’d do the back-of-the-envelope calculation of development cost4 versus the number of users who would use it (upgrade incentive for current users, plus new users buying in due to the feature). The internal statistics for Mac adoption in Arabic-using (and, to a lesser extent, Hebrew-using) nations did not paint a pretty picture for the “leverage” this feature would provide: our development dollars would probably be better spent on other features with a higher impact × user-base value.

Of course, I’ve made alternate suggestions based on the software she does have on her Mac. In 10.4, at least, Pages had issues loading and working on Word documents containing Arabic. (I don’t know whether creating documents from scratch works better; I suspect so, since even SimpleText, er, TextEdit does a fair job, AFAICT.)

The next question is, of course, why use Word documents at all if another document format has better Mac OS X (and theoretically cross-platform) support for Arabic? Answer: other document types don’t have ubiquitous, well-known editors. In Miller’s specific case, all U.W. students have access to Word on both platforms, and the labs have more than enough Windows boxes. Most Arabic students/professors don’t have this problem because they don’t use Macs. (Nor is this really an incentive to do so.) They have no reason to switch document formats for the (apparently-)minority class of Mac users.

I suppose, though, the issue isn’t completely closed; it’s possible someone will comment that we need to try some specific software to solve the problem. OTOH, it may still be that Miller ends up with a Macintosh for her next machine, but if so, it’ll probably be Boot Camped into some form of Windows.

Updated: I wanted to show some examples, but not knowing Arabic, I needed some assistance. So, the Arabic word romanized as “mumkin” (meaning “possible”) looks like [image] in TextEdit, but like [image] in Word. Thanks to Miller for helping me out with these.

--
1Arguing protectionism here would be a little silly. Windows Word gets a fair amount of this functionality from Windows itself, so if Microsoft Office were a 3rd party, the Word team would have been in a similar pickle on Mac OS X, at least until relatively recently.
2And if you have hard data otherwise, please, please, please let me know so I can convince the Powers-That-Be that there’s a business case to addressing the problem.
3Arabic writing is like cursive in that characters look different depending on whether they begin a word, sit in the middle of a word, or end it. Unlike cursive, however, the forms are seriously different, and a word set in the wrong forms becomes very hard to recognize, not to mention the layout problems caused (because the forms are no longer the same width).
4The cost has changed significantly over the course of years. Seven years ago, our best bet would have been to port the entire Windows support for ligatures (which Windows Word uses without having to implement itself), and bi-di support from Word. That would have been quite expensive. Nowadays, despite not being able to just replace the Word layout engine with ATSUI (so that we could continue to guarantee identical layout across versions/OSes), we could theoretically offload some of the work to ATSUI and then ask it what work it did and translate that into our own layout world. This is obviously less expensive, but still involved and prone to being a bug farm.