Wednesday, September 27, 2006

Machine speculation, redux

A couple of recent articles about Intel's new quad-core Core 2 offering coming early (in late 2006 rather than early 2007) from SFGate and the Washington Post reminded me that I didn't really act upon my previous machine speculation.

I had posited there was an outside chance that they'd update the laptop line for WWDC '06. It was much more likely that they'd release an Intel desktop, and they did. (I had given it a release date of December +/- 6 months, so it definitely came in on the early side.) Nonetheless, I was (and am) holding out for the low energy versions to get put into the machines for better battery life (and less cooking of the lap), and hoping that they'd be able to cram a second two-core processor in. Well, with the announcement that quad-core chips will be available this year, with low energy versions "early next year", I might get (effectively) both. Unfortunately, since there's usually some lag time between chip availability and adoption, I may end up waiting another nine months until they can get put in the MacBook line. Ultimately, this comes down to when Apple updates the line and what they're going to do with it. I've got my fingers crossed.

Tuesday, September 19, 2006

Not just licking my wounds

I should have known better. I'm not particularly thin-skinned, but every time I end up going to an Apple conference, instead of being able to purely enjoy the new technology (and at WWDC, the camaraderie of other Mac developers), I find myself plucking daggers out of my (well, Microsoft's) back.

Yes, I know that the Reality Distortion Field is going to promise great things on the platform. But, having never been to a PDC or TechEd, I have to ask: do the Windows conferences heap the same load of scorn on alternative OSes (and by inference, developers on and for those OSes)? I'm sure there are cross-platform developers who are just itching for the new technology who think, "Yeah, yeah, now tell us what's cool about your stuff!" There are probably Mac stalwarts who don't care a whit about what Microsoft is doing, or would rather not hear the name mentioned in their presence, and referring to "Redmond"1 repeatedly is just as bad.2

In any case, having been to WWDC 2003, and having had access to other MacBU folks' DVDs for '04 and live streaming for '05, I didn't get as much out of it as one of my new coworkers, who was roughly introduced to the society surrounding the technology. Instead, I enjoyed talking to several Apple developers working on the technology areas of my interest (mostly security-related), and put bugs in their ear about future directions. I also did not just throw away all those Microsoft daggers: omitting the ones that were purely spiteful, there were still several that complained of real problems in MacBU products that I was able to give feedback to the team about, not all of which were known issues.

The reason I've been quiet recently is not that I'm licking my wounds from WWDC, but that I'm diving headfirst into a bunch of new-to-me technology. Although my coworkers have some good insight about what sections of the technology have analogs to what I had been doing previously, it is still a little like drinking from a firehose. I hope to spend the next few months coming up to speed, and then when the product specs settle down and we can go public about it all, have some more technical blog entries about what I'm actually doing here in CoreCLR.
1 A quick search of software companies in Redmond, Washington turns up quite a number of results. Perhaps they're repeatedly referring to WildTangent? No? Maybe Hipsoft? Crick Software? Something tells me they're just afraid of mentioning Microsoft, in case it increases Microsoft brand recognition to their own detriment.
2 I've often wondered what purpose it serves to mock Windows at these conferences. I don't think it's going to dissuade Windows developers from developing for Windows OSes. Is it dissuading current Mac developers from leaving the platform? Or is it just patting them on the back for having made "the right choice" of OSes? Hmm.

Tuesday, July 25, 2006


Well, it's a bit of a strange time for me. I've accepted a position on the CoreCLR1 team working on the Macintosh version of the MiniCLR that's going into WPF/E2 and perhaps later other places. It's a little odd to be considering leaving a position I've ostensibly had since I got here (with only one major reorg), but fortunately(?) I have so much to do to wrap up what I was working on for MacBU that I haven't had time to ruminate overmuch. On the other hand, it's hard not to get excited about working on cool developer technologies.

After some amount of worry when it seemed the handoff between the two groups (which is scheduled to happen over this weekend) was going to jeopardize my going to WWDC, all is resolved, and I'm once again set to go down to San Francisco at the end of next week. I'll just be wearing a slightly different hat is all.

I hope that within my first week at the new job I'll know enough about what we can tell other folks that I might have something to show at WWDC. Or at least blog about it some.

Many thanks to my colleagues at MacBU for making my stay fun. Who knows… maybe I'll come back sometime later with some newfangled scripting technology to use with Office. If only I could think of one…
1 The CLR in CoreCLR stands for Common Language Runtime, a fundamental piece of .NET technologies.
2 WPF/E is Windows Presentation Foundation/Everywhere, a subset of WPF designed to work outside of Windows.

Tuesday, July 11, 2006

Yet another framework

When I was younger, my dad showed off his Texas Instruments TI98+ scientific calculator, which for him was a godsend; he was doing medical research at the time, and being able to program up to fifty operations on a piece of data at a time made filling out his tables significantly easier. I was too young to really do much with it (or more to the point, have too much that I needed to do that could use it), but I "helped" (in that way that a child can "help" and have it take longer than it would've otherwise).

Fast forward a couple of years, and our family acquired an Apple ][+. This wasn't our first computer1, but it was the first one he did any significant computing on. I used to watch him enter in page after page of Applesoft BASIC to take in data sets, perform calculations, and spit out the results. We used the heck out of this machine. Admittedly, as kids, we preferred to play Karateka, Moon Patrol, or, earlier, Aztec and Elevator Action2, but we also had Apple Writer, and I wrote my homework with it for years.3

Still later, I came back from college to find that my dad had purchased an IBM PC clone and had ceased doing any programming; he just used some already-created applications which did most of what he wanted. I ended up sneaking on and writing a simple BASIC program that would play Happy Birthday to him if he started his computer on his birthday. Apparently, he didn't do that for two years and was mightily surprised when it started making noises at him. In any case, one of the times I was working on programming and he was around, I asked him why he didn't just pick up this new BASIC and continue his work. His response was that he had already learned enough languages, and that picking up another was just not worth the time. I responded that it was not that different, that it was easy. He agreed that it was likely to be as easy as I said, but still not worth changing for, given the viability of the solution he already had.

At the time, I thought that I would never feel that way.

When I was maintaining the MacBU build system, it was written in Perl. I hadn't known Perl beforehand, and because I thought the system needed some tweaks, I learned just enough to make the tweaks I wanted. Then, of course, since I was the only person actually doing any work on it, they made me the official person to work on it. I ended up getting a pretty in-depth education in Perl as a result, from having to deal with the Perl plug-in to CodeWarrior (PerlIO anyone?) to writing tied filehandles to capture output into an XML time-stamped log. All this without the benefit of something like Affrus.

Eventually, I gave up working on the build system, and subsequent to that, it was decided that Python shall be the Way of the Future™, and the system that we're working on to support Xcode was to be adapted into Python from the old Perl-based CodeWarrior-driving system. Of course, now I'm back to where I was before – wanting to make tweaks to the system to make it work better for me. And learning Python isn't so terribly hard, since it's very similar to Perl. But it just seems sad to have to figure out a bunch of ever-so-slightly-different ways of doing the exact same thing.
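The sameness complained about above is easy to see side by side. Here's a hypothetical log-filtering tweak of the kind a build system accumulates, written in Python with its Perl near-equivalent noted in a comment. The function and the sample log are invented for illustration; they're not from the actual MacBU system.

```python
import re

def stamp_errors(lines, pattern=r"[Ee]rror"):
    """Return only the lines matching pattern, numbered in order found.

    A stand-in for the kind of small build-log tweak described in the
    post; the real system's code isn't shown here.
    """
    # Perl would spell the filtering step as something like:
    #   my @hits = grep { /error/i } @lines;
    # Python spells the same idea as a list comprehension:
    hits = [line for line in lines if re.search(pattern, line)]
    return [f"{n}: {line}" for n, line in enumerate(hits, 1)]

log = ["compiling foo.c", "Error: missing header",
       "linking", "error: undefined symbol"]
print(stamp_errors(log))
```

Two different surface syntaxes, one identical idea: exactly the "ever-so-slightly-different ways of doing the exact same thing" that makes picking up the second language feel like busywork.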

I probably will finally put Python under my belt one of these days, but it's just not as compelling as it would have been to my child-self.

Not all new systems fall in this bailiwick, however. I'm currently working on some simple Cocoa stuff. Cocoa, too, has been long ignored by me, largely because very little of the MacBU-based code uses it. It is a whole new way of doing business, and its model is sufficiently different that it's taking a little "settle-in" time to grok it. It will be yet another framework to learn and to have to integrate into my other programming paradigms. However, it makes some things sufficiently simple that it is compelling to learn. I just sometimes wish I were learning it faster.
1 Earlier, my dad had purchased a Sinclair ZX81 with the 16K memory pack. The BASIC programming was rather simple and fragile – on one hand, the 16K memory pack's connection wasn't too good, and if you jiggled the keyboard, you were likely to lose everything, and on the other hand, it took several tries of playing the tape before the computer would finally "take" the program. As it was, I think he chalked it up to a learning experience and didn't do any scientific work on it. My brothers and I played some super-lo-fi 2D space scroller (like Zaxxon, only not orthogonal).
2 Unfortunately, we didn't really have access to the good games, since it was a "Work" machine. I remember spending quality time at a friend's house playing Wizardry. To this day, I still attribute my navigation skills to wandering that maze sans MILWA.
3 The Apple ][+ was also the computer I performed significant programming tasks on. I used AppleSoft BASIC to do some boring things (like modify other BASIC text-driven games, like Oregon Trail). I took a stab at basic graphics programming to show long addition. I tried several times (mostly failed attempts) to learn 6502 assembly, and really only ever emitted a few assembly things in BASIC code. My brothers mostly remember me trying to harangue them into helping me transcribe hex dumps of computer programs published in computer periodicals – they hated that, though less so when checksums were finally introduced.

Monday, July 03, 2006

Choice: productive(?) but unhappy

Doing my standard breadth-first-search of people's blogs (omitting links of little interest), I ran across the Simplessity blog (via Nadyne via Brian Johnson). The particular article linked from the title is about a presentation by Barry Schwartz about how, as the number of choices goes to infinity, the productivity (of maximizers at least) goes up, but happiness (in general) goes down. Fascinating. This probably explains why people end up being paralyzed when faced with the number of options our products support (assuming they actually are aware of them). ("How best to format this document? Oh, crap, that was the wrong way." Even if it was the right way.)

I really like the idea of Libertarian Paternalism.

Thursday, June 29, 2006

How user discontent translates into profit motive

My (employer-inherited) goal is to sell boxes. Microsoft software product boxes, specifically. Fortunately for me and my CS degree, my particular function in the achievement in this goal has nothing to do with the boxes themselves or the manufacture of the discs, but really only with the software that goes in the box.

Consumers, purchasers of boxes, both like and dislike aspects of the box and its contents. The question that this post is trying to address is how (or whether) whatever dissatisfaction a consumer might have translates into profit motive to fix the problem.

Well, first, there's the recognition of the problem that has to happen. Many problems go unfixed for lack of recognition. In some cases, that can be despite outspoken users on the newsgroups. It can be because we recognized part of the problem but not the larger problem (or the still yet larger problem, etc.). In some cases, people don't even know there is a problem until a user experience person watches them try to use the software and choose a highly inefficient way (relative to some other way the software provides) of solving a task. Some amount of market research is done to establish which problems are key to getting people to upgrade or buy new.

But even once a problem is identified, there's still some measurement as to its importance. The market research may also have details as to how big a populace has a particular problem. Even when there is no market research around a particular issue, our Program Managers, Developers and Testers have to take a stab at how likely it would be for someone to encounter certain problems and if that results in a subset of users, again, how large that subset would be (and how annoyed they're going to be when they see the problem). The size of the affected population multiplied by some factor related to how severe the issue is / how critical that functionality is to the work they do, determines the initial relative priority of working on one problem vs. another. When development and test come in with time estimates for how long it will take to fix and test problems, then there's a more fungible, but more final relative priority of working on problems.
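The back-of-the-envelope math above can be sketched in a few lines. All the numbers, weightings, and bug names here are invented for illustration; the real triage process was, as described, far more touchy-feely:

```python
def initial_priority(affected_fraction, severity):
    """The post's first cut: size of the affected population
    multiplied by a severity/criticality factor."""
    return affected_fraction * severity

def adjusted_priority(affected_fraction, severity, fix_cost_days):
    """Once dev and test cost estimates come in, a rough
    value-per-cost ordering (the division is my own simplification)."""
    return initial_priority(affected_fraction, severity) / fix_cost_days

# Hypothetical bugs: (fraction of users hit, severity 1-10, est. days to fix)
bugs = {
    "crash on save":  adjusted_priority(0.10,   9, 3.0),
    "ugly dialog":    adjusted_priority(0.50,   2, 0.5),
    "rare data loss": adjusted_priority(0.0003, 10, 5.0),
}
ranked = sorted(bugs, key=bugs.get, reverse=True)
print(ranked)
```

Note how the cheap, widely seen cosmetic bug can outrank a severe crash once fix cost enters the equation, and how a tiny affected population sinks even a severity-10 issue; that's the uncomfortable arithmetic the rest of this post is about.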

This still doesn't really account for the nebulous issue of when a piece of software is buggy enough that you give up and won't upgrade. I'm sure there's also research around that, but for us it seems that the quality bar is largely a touchy-feely setting, based on experience and consensus between the departments. We hope that the quality bar is sufficiently high to make sufficiently many people sufficiently happy with the product as to keep buying upgrades. Unfortunately, that bar is set by different people on a per-application basis, so there may not be uniformity there.

How can you affect that equation so that your bug gets fixed?
  • Send cogent, succinct, and detailed bug reports – generally, this means to the newsgroups. This improves the likelihood of problem recognition, and reduces the fix time for development, especially if they don't have to track down a reliable reproduction case. Remember, what you think is a bug Microsoft has been ignoring might be a bug that they've fixed several versions of and think might be gone for good (and haven't seen your case yet).

  • Call PSS – As I said before, this costs us (and you) money, and we generally track what problems are coming in. The higher the number of calls, the more likely PSS will ask us to fix it on their behalf. (How important this is is not clear.)

  • Make sure other people affected by this issue do the same – again, it's a numbers game, to some extent.

  • If it's something that's truly blocking your organization from upgrading, have your license purchaser let their TAM know – The more seats that are affected, the more important it is. In fact, MacBU has had a lot of pressure from Windows Office and Exchange to solve some problems because not having a sufficient Mac solution was hurting Enterprise sales in heterogeneous OS environments.

  • Bribe your developer contact.

What doesn't affect the equation that you think might:
  • Saying how important this bug is – We take these statements with a grain of salt. It might be really important to you and yours, but how representative are you? If the affected users are 0.03%, and experiencing it isn't likely to keep them from buying the product, then it's less compelling than if 10% of the users are seeing it. Similarly, saying you're not going to buy the product anymore is only so useful if we actually believe you and believe there are a lot of people like you. Bluffing or exaggerating may get results, but more likely, it may get you ignored as a noise generator. Now, if you can point to sharable market research that supports your claims, then you should be talking to our product planners.

  • Saying how long the bug has been around – Sadly, this is nearly inversely proportional to how likely it is to get fixed. If we could afford to ship it once and suffered insufficient adverse effects, then we're more willing to ship it again. That said, there are always some legacy bugs that are just taking their time to percolate to the top of the priority queue.

  • Saying how much money Microsoft has to throw at the problem – That Microsoft has coffers is one thing, but we are law-bound to try and make the most money for our shareholders. That means at the business level, we have to justify spending money to make money. Fortunately, that doesn't go all the way down to the individual bug fix level exactly, though it is somewhat mirrored in our priority/severity accounting and triage processes.

  • Being polite – While we would sincerely appreciate your politeness and your avoiding mockery of us and our products in public forums, if you are actually making a cogent, succinct argument in favor of fixing an issue (so far as we can tell), then we'll use it despite whatever extra vitriol was included. Hey, we all get frustrated sometimes. On the other hand, no amount of politeness is going to make a super-expensive feature less expensive and thus more palatable.

Unfortunately, none of this is a guarantee, and unless you're a Premier Support customer wanting a QFE, it's entirely possible that nothing (visible) may come of your efforts, or at least that the fix comes a long time off (one or more product cycles). Maybe someday our process will be sufficiently transparent as to show our assigned profit expectations for fixing various problems, and allow people to post bug bounties to push the chances of their particular pet-peeve bug high enough to meet the cutoff bar for a release. But sadly, I don't believe this will ever come about. On one hand, market research is apparently incredibly difficult to publish publicly, and without that information, a lot of our profit expectation assignments would just appear arbitrary and cause more consternation than it would create benefit. On the other hand, our expectations of profit making might be considered to be trade secrets, and thus to be hidden.

Tuesday, June 27, 2006

Miller returns; her iBook, wiped

Miller got back today from the last of her solo escapades for a while (at least until Burning Man). She had just finished spending a week with her family back home, and the week before that had been grading Spanish AP exams down in San Antonio. Right before she had left, she had just gotten her iBook back from the Apple store geniuses for more out-of-warranty-yet-free repairs to the hinge. Unfortunately, once at home, her iBook shut down and then subsequently would not boot. She called me at work and asked me to take care of it while she was gone, and then made her way to the airport.

A week ago, I decided to take a peek. The iBook would boot for a ways and then blink the "I can't find a blessed drive" icon. I pulled out the trusty Tiger disc and tried to boot from that, and it gave me an icon I had never seen before: a circle with a slash through it. After some Googling, it seemed that somehow the boot sector of the DVD couldn't be found, which seemed unlikely. I booted it up in target disk mode and connected it to my TiBook and there was no hard drive even listed. It did, however, show the Tiger DVD. I hadn't seen that before — previous versions of this error were easily rectified by selecting the drive in Disk Utility and selecting Repair, but with no disk listed...

I let Miller know I was going to have to take it in again. The geniuses tried the same stuff I did, and then said that it would need to go through a Repeat Repair. I was happy to find that it, too, would cost nothing. (It's nice that these repairs are free, but we've had to repair this iBook four separate times, and my TiBook has never needed anything!) They gave the perfunctory "make sure your data is backed up" speech, and asked whether I wanted to get in-house backup for something over $100. Here was my logic: Miller had already taken this in before, so she had gotten the same speech, and thus had done everything she wanted with regard to data backup. I actually made the further assumption that she had, finally, taken my advice that she should back up the stuff she could not afford to lose. As it turns out, I was batting zero for two, but the final one was the kicker: my last assumption was that even if the hard drive was the culprit (rather than a controller or loose wiring or the like), they'd ship the damaged hard drive back with the machine so that I'd have a shot at recovery on the off chance that she hadn't backed up everything she wanted.

You don't get any broken parts back from Apple when they're replaced during a repair. I guess this makes sense, I just hadn't thought it through.

Fast forward to today, and I pick up her machine from the Genius bar, after getting the call that it had arrived back. They politely tell me what they did to it, and mention that they replaced the hard drive, as it had failed. I ask where the old hard drive is, and they respond, "Defective hard drives get scrapped or sent back to the manufacturer for warranty replacement, depending on the drive. It used to be policy that they'd retain them for seven days before doing so, in case someone wanted the drive, but that's no longer the case." Erp. I asked whether it was possible to retrieve it and their response was, "Probably not, since they would have already scrapped it or shipped it about the same time the iBook was shipped back here, but I can call AppleCare and check with the service manager to see what the deal is." I called Miller's parents' house and left a voice message about her machine having a brand-spanking-new OS on it and that her hard drive was somewhere AWOL.

Miller called later, frantic, and said that this was a tragedy, and that yes, if I could try to get the drive back, that would be great. Apparently she had not backed up her drive... anywhere. No CD-R, not on her UW account, not in Yahoo, nowhere. I went through the AppleCare menu system with the feeling that I was asking the governor for a stay of execution. The tech investigated the case for me, said that the drive had already been scrapped and that they wouldn't really know how to look for it to get it back. But, he also said, the reason it had been scrapped was that there wasn't anything they could do to revive the drive, so there wouldn't have been anything I could have done had I the drive. (Well, nothing shy of pulling out the platters and putting them in another mechanism, which is, admittedly, beyond my ken.)

We're still not quite sure of the information-lossage. Most of the, ahem, legitimate music can just be re-ripped. However, several years worth of academic papers, some amount of academic research, archives of e-mail correspondence, and most of her photo collection from Egypt are all resigned to the big bit bucket in the sky.

If there's anything to take away from this cautionary tale, it's back up your *#%^@ data.

Monday, June 26, 2006

My love affair with Type Libraries

It's just the registration that's annoying.

OLE Automation is one of those pieces of Mac Office which probably isn't understood by most people, even people here in the company. Strictly speaking, it's not really part of Office per se — the Windows version of Office doesn't include it, as it is a part of the OS1. But what the heck does it do?

Mainly, what it allows a programmer to do is define an application's object model and make that model available to be coded against or scripted. The central mechanism for this is what is called the Type Library. A type library contains a list of classes and their properties and methods, along with some annotations2. It is roughly analogous to an AppleScript dictionary, though there are some marked differences3. Most often, a type library is stored in a file, but it can be implemented in code as well, which allows for some extra dynamism4. A scripting client of OLE Automation can use it to read type information out of the type library to determine what can be called. If you go into Visual Basic for Applications, and choose View/Object Browser, the contents of that browser are just a visual representation of the type library (excluding items that are marked as hidden or restricted). Then VBA knows that if you're working with an object that is the Excel Application object, only so many properties or methods are allowed, and it can autocomplete your code as you're typing. When you run your code, VBA will call into OLE Automation, and OLE Automation will point it at the code in Excel that needs to run.

The type library provides several ways to be a client of the code which the type library describes. At the base level, you can either call into a method directly through an interface pointer's vtable, or you can call into a method by referring to it by name. This latter functionality is defined by the IDispatch interface, specifically the Invoke method. A class can be set up to allow one or the other or both, and in the last case, it's called a dual interface. If you want to use an interface from C/C++ and it supports vtable-calling, then the mktyplib utility, which generates the type libraries, can also emit .h files for including into client projects. Otherwise, you can call IDispatch::Invoke and have it figure out how to call that function. If you don't know what methods you will need when you're compiling your code or script, it's still possible to perform late binding, and ask the type library what's available at runtime. This is what VBA does, and why it can handle scripting arbitrary OLE Automation objects, so long as they can be acquired from one of the Office applications' object models, or the object model of VBA itself.
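For readers who haven't used OLE Automation, the two-step, name-then-ID calling pattern of IDispatch can be sketched with a toy analogue. This is illustrative Python, not the real COM API; the class and method names are invented, but the shape (resolve a name to an ID at runtime, then invoke by ID) mirrors GetIDsOfNames followed by Invoke:

```python
class Dispatcher:
    """Toy analogue of IDispatch-style late binding: a caller compiled
    with no knowledge of the target's methods resolves a name to an ID
    at runtime, then invokes by ID."""

    def __init__(self, target):
        self._target = target
        # Stand-in for the type library: enumerate the public methods.
        self._names = [n for n in dir(target)
                       if not n.startswith("_") and callable(getattr(target, n))]

    def get_ids_of_names(self, name):
        # Analogous to IDispatch::GetIDsOfNames.
        return self._names.index(name)

    def invoke(self, dispid, *args):
        # Analogous to IDispatch::Invoke: look up the method by ID
        # and forward the arguments.
        return getattr(self._target, self._names[dispid])(*args)


class Workbook:
    """A hypothetical scriptable object; not Excel's real object model."""
    def save(self):
        return "saved"
    def cell(self, row, col):
        return f"R{row}C{col}"


disp = Dispatcher(Workbook())
dispid = disp.get_ids_of_names("cell")
print(disp.invoke(dispid, 3, 4))  # the caller never names Workbook.cell at compile time
```

The real thing is far hairier, of course: Invoke has to marshal typed arguments onto the stack or into registers per the ABI, which is where the hand-written assembly mentioned later comes in.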

For years, the eldest Office applications, Word, Excel, and PowerPoint, all had two different object models. One was the OLE Automation / VBA model, which was shared with our Windows counterparts. One was the AppleScript model. They may have had overlap, but they were created largely in parallel, and they did not have any kind of feature parity. As a result, whereas sometimes the OLE Automation model would get new methods, the AppleScript model might not get the same methods, and the disparity would increase. Eventually, we had the idea that we could try and unify these things.

I made our first attempt at it, and it didn't succeed. The OLE Automation to AppleScript mapping proved too inconsistent for a simple translation layer to handle. There were enough oddities in the way the applications' type libraries were created (and scripts had subsequently been written assuming those oddities would remain) that the AppleScript experience would have been awful.

For Office 2004, Jim Murphy took a slightly different approach: start with the mapping, and then write some custom code to handle the rough edges. As of this writing, there are still some rough edges left, but ultimately, both Visual Basic and AppleScript now code to a (largely) unified object model, and more importantly to a (largely) unified implementation. Less code and more functionality. What could be better?

Under the covers, OLE Automation looks at the type library and performs calculations to figure out how to form a call to the actual function. Implementing IDispatch::Invoke involves knowing how, for this architecture, ABI, and calling convention, to put arguments on the stack or in registers, call the method, and then retrieve the results of the operation. It's one of the few places in the Office products where hand-written assembly is actually necessary5.

But back to my initial caveat. While type libraries allow you to advertise arbitrary object models, one of the main issues has always been, "How do you create an instance of one of these objects?" On both Mac and Windows, the answer has always lived in the registry. When you register a type library, you place registry entries that list what version the type library is, where to find it on disk, and where to find the application or framework (er, DLL) code that implements the objects and methods. Then later, when asked to create a new object of a particular kind (e.g., when you Insert/Object "Microsoft Excel Worksheet" from within Microsoft Word), OLE can look it up in the registry and launch it (if it's an app) or load the shared library, and call the appropriate object creation code.

Now you get into two sets of annoying problems. (1) The files move. On Windows, if you move files from where they were installed, you'd better make sure to re-register the type libraries, or things will (possibly transparently) break. On the Mac, it's a little more resilient (we have aliases after all), but you can still make it break. (2) You have more than one version or more than one copy of the same version. If you register a type library, you're registering just it and just the application or shared library you pick. There can be only one. If you have two or more, you'll get rerouted to the one that was registered last. This is another form of DLL-hell, and whereas DLLs can be distributed with their applications (removing the sharing aspect of shared libraries) to avoid DLL-hell, type libraries can't be fixed the same way.
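The "there can be only one" failure mode can be sketched in a few lines. The registry here is just a dictionary and the identifiers and paths are invented, but the shape of the problem is the same: a second registration of the same library and version silently reroutes every client:

```python
# Toy model of type library registration; real registry keys,
# library IDs, and install paths are not shown here.
registry = {}

def register_typelib(libid, version, path):
    """Record where the implementation for (libid, version) lives.
    Like the real mechanism, this keeps exactly one entry per
    (libid, version) pair: the last registration wins."""
    registry[(libid, version)] = path

def resolve(libid, version):
    """What a client gets when it asks to create an object."""
    return registry[(libid, version)]

register_typelib("SampleTypeLib", "1.0", "/Applications/Office 2004/App")
register_typelib("SampleTypeLib", "1.0", "/Applications/Office X/App")  # oops

print(resolve("SampleTypeLib", "1.0"))  # every client now gets the second copy
```

Both installs are intact on disk, but only the most recently registered one is reachable, which is exactly why having two copies of the same version is a quiet way to break the first one.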

Ultimately, the problem is no different than the one that LaunchServices (or previously, the Desktop DB) solves. It keeps track of applications it's seen, and when asked for "", it can launch it, and it has disambiguating rules it uses to pick the most appropriate "" if there's more than one. As an aside, there's currently no way to leverage LaunchServices by having it cache additional data about files; otherwise, it could also track the info that currently lives in the registry.

My acquaintance with OLE Automation began when MacBU acquired the privilege of maintaining the Mac version of the OLE code base. I've seen its transition to CodeWarrior, through Carbonization, and now we're working on Xcode/Universalization. My best, and frankly irreplaceable, reference for all of this, beyond having access to the code itself, has been Inside OLE, Second Edition, by Kraig Brockschmidt, ISBN 1-55615-843-2, sadly out of print. It goes into much, much more technical detail about all of this and the rest of OLE, and explains it all very clearly.
1 Windows users will know it by a different name, oleaut32.dll.
2 These annotations include such things as alignment and indications of where to find help on this type or method. There's also a mechanism to add custom annotations.
3 Most OLE Automation types have analogues in the AppleScript world, but one of the hard ones is how to deal with methods that return an interface. There's no idea of returning a pointer in the AppleScript world — you have to return an object reference instead. Unfortunately, in some cases, a given interface won't necessarily have a way to turn itself into an object reference. If you arbitrarily created a "temporary" object store just so an object reference would get returned, the script writer would have to clean up after themselves, making sure to delete that object reference when done. While this is certainly possible, it goes against the general AppleScript paradigm.
4 A type library could be generated on-the-fly at runtime. Usually, this wouldn't be necessary. But it would allow the type library system to be used to interact with a completely different scripting system whose contents wouldn't be known until runtime (or at least might change between the compile time and runtime). For example, it would be theoretically possible to write code that returned an ITypeLibrary interface which represented the contents of an AppleScript dictionary, and have that code be able to translate between OLE Automation types and AppleScript types so that you could use VBA to script a "normal" Macintosh scriptable application.
5 Another such arcane place involves the code that allows an interface to be hosted in a process other than where the implementation code exists. In these cases, we have to intercept the calls, send them via interprocess communication to OLE Automation running in the other process, and it will perform the call on our behalf. (The reverse process occurs when the result is returned.)

Machine speculation

It was at the beginning of this year, when the new Intel MacBook Pro made it into the Apple Store for pre-ordering, that the Pavlovian drooling commenced. My wife, Miller, in her prudent fashion, suggested I wait six months before upgrading my 500 MHz TiBook. Partially, it's because she wants to make sure I'm not just infatuated with the new technogadget-of-the-month. I, on the other hand, was sufficiently sure that I'd already purchased Parallels, but her logic also made sense otherwise. Whereas a new product line may have certain problems endemic to it, six months is probably a good length of time to see if they crop up. Also, I am wondering whether WWDC '06 will hold some more product releases in store. If they announce better MacBooks, it will have been worth the wait. If they announce other hardware, like a more standard desktop, it won't be that great for me personally, though I suppose it might be possible that it would drive the prices down on the MacBooks. Unlikely, but possible.

In any case, I'm still crankin' away on my TiBook. I miss the (relative) speediness of the CodeWarrior builds, where even on this old boy I could crank out a set of Mac Office builds overnight. Now it takes longer than that to crank out one flavor of one architecture. I no longer really use it for mainline development work for that reason. All that said, and though I'll be happy to finally purchase an Intel-based Mac and see some dramatic speed improvements, I'll be sort of sad to put this old workhorse to rest. The keyboard keys have all gotten shiny with extended use. The Moby sticker to the left of my touch pad has long outlived its earlier counterpart on my old Duo 280, and I'll have to seek out an appropriate replacement.

Speaking of my Duo 280, is there anything I could do to replace a cracked screen at this point, or should I relegate it to the junk heap? I've still got an old Duo Dock in my office, though at this point, all it's doing is supporting my Intel iMac so the screen height is more ergonomically correct. One of my non-tech-geek friends made the point over the weekend that all these old machines are just adding to the already considerable clutter in our house. We got into a spirited debate over whether to keep random extra, usually defunct or obsolete, tech gear around. At some point I used the word "shrine" and the discussion fizzled. Anyhow, I think having a working Duo 280 is worth keeping, but probably not an irreparable one. I'm afraid to pull my SE/30 out of the closet and see whether it'll still go. I don't want to have to turn it into a fish tank just to have a reason to save it.

Saturday, June 17, 2006

Long overdue photo

Nathan's head shot
I think I might burn out my blogging muscles for another nine months if I type up another long post, so I'll settle for putting up a photo that will go into my profile.

Piercing the veil

Corporations are rarely transparent in their operations, and Microsoft is no exception. It has historically been difficult to understand exactly what Microsoft was doing to address specific customer desires or complaints. Previously, the ways you might have been able to talk to Microsoft were:
  • Writing to MSWish - despite being monitored, users rarely got any feedback as to whether their bug would be fixed or their feature suggestion taken into account, other than the boilerplate text. [NOTE: MSWish is no longer operational at this time.]

  • Talking to PSS - complaints or feature requests get filtered through this organization, and instead of producing qualitative suggestions to the teams, more often they'd be quantitative suggestions (e.g., this feature area got a lot of calls). Talking to PSS costs both the user and Microsoft money. Users are disinclined to call unless they can't solve their problem in some other way, so we miss feedback. Microsoft is financially incented to reduce PSS calls; better products are just one possible side-effect.1 All told, a user won't know whether their commentary is going to get used or not.

  • Talking to TAMs - Technical Account Managers handle the big accounts at Microsoft, and as such you're already a special class of user. You have more pull, but you're paying for the privilege, and the prospect of future purchases keeps our attention.

  • Writing to Newsgroups/Forums - There are a fair number of people writing to newsgroups who are willing to aid those who use our products, but quite a few of them are not Microsoft employees. Some of them are MVPs, who are well informed and intelligently summarize major problems back to the product teams. Some Microsoftees monitor the lists, either as part of community outreach or ad hoc, but rarely engage individual users.
With the exception of the TAMs, the general problem with all of these methods is that as a user, you wouldn't know whether it was worth your time to report the problem. Even if the problem was heard and understood, who knows what internal priority it would be given... would it fall off the list of things to do for the next patch release (or worse, fall off the list of things to do for a major release)? You won't know until you buy product v. n+1 and try it out for yourself.

Part of the problem was that the Microsoft culture was one of employees being publicly silent. If you're an employee, and you post on a public forum, and you screw up (anything from revealing confidential information or trade secrets, to implying there might be a next release of something not officially announced (much less whether a particular issue would get addressed in that release), to negatively representing Microsoft), you may find yourself without a job the next day. There was little incentive to talk to the public, and when talking to the public anyway (say you get cornered on an airplane and reveal you work for Microsoft), you'd have to play defense lawyer, and say, "I can't comment on that."

My personal opinion is that this lack of transparency and interaction has hurt Microsoft's public image.

The advent of public bug reporting mechanisms such as Apple's BugReporter or SourceForge's bug tracker didn't seem to perturb the way Microsoft did business; semi-standard industry practices aren't necessarily readily picked up. Instead, the advent of blogging has started to crack the official barrier between customer and employee, circumventing the aforementioned "official" routes. All of a sudden, you could read a blog and get instant insight into what a product team was doing, even what technical hurdles were having to be addressed. Furthermore, with comments, you could give direct feedback to that blogger.

Consequently, employee bloggers are interacting with the rest of the world in a much more public fashion. Non-public-relations people are regularly writing communiqués, and the PR and legal departments are nervously fidgeting, waiting for a major gaffe. On the whole, it seems promising; technology evangelists are getting people excited about products, and there's a new transparency into some of the workings at Microsoft. It's unclear how many neophyte bloggers have had to "seek employment outside of Microsoft", but my impression is that the number is negligible.

The developer division seems to be leading this new wave. The MSDN blogs were among the first I had heard of at the company. Furthermore, they're the first group trying to "close the loop" with services like LadyBug. I'm not sure if "close the loop" is some official customer-representative jargon, but I'm using it to mean being able to receive customer comments, report back what, if anything, we're doing with them and why, and then, once action has been taken, follow up with the customer to make sure that the action satisfies their desires.

Again, historically, we were far from being able to close the loop. If a user complained about a problem, even if we knew it was an "official problem", i.e., there was a bug report and it was deemed a valid problem, we weren't allowed to tell the user that.2 Even if we could have told them it was a valid problem, we couldn't tell them much about when we would address it (assuming we knew) except under very limited circumstances.3 At best, we could have kept their contact information around and, once we finally addressed their problem and the software that did so was on the shelves, reported back to them. (I think that happened in very isolated cases.) That still didn't really address the customers whose problems we elected not to solve.

While the new blogger-era rules of interaction are not fully written, it still seems like the new feedback tools and (the promise of) looser restrictions on customer communication will make it easier for customers to tell us what we need to know to do our jobs better and for us to let customers know that we "get it". (Or at least how much we've "gotten it", so they can take appropriate action.)
1 For example, we might add new wizards to aid people who don't know how to use a feature; this reduces support calls, but doesn't make the product better for those who already knew how to use those features. It may not even mean that the product is better when you account for all users of the product. If only 10% of customers are calling about a feature and the rest understand it, the only portion that's costing us money that we can measure are the 10%.
2 Notwithstanding our various Licensing Terms, previously known as EULAs, which warrant nothing about the software, the implication that there was a "bug" in the program could be (and in some cases, rightly so) construed as a "flaw" rather than "design limitation". Nobody wants to be sued, and software companies with deep pockets live in some non-zero fear of being the target of a landmark class-action consumer rights lawsuit akin to those in the automobile industry that might require fixing, recalling, and/or punitive damages.
3 If it was going to get fixed in a release not yet announced, that was a no-go. If it was going to get fixed in a release that was announced but not on the verge of being shipped, there was a risk that the fix might get accidentally undone by some other work during the course of the project, and saying that it was going to be fixed meant having to make extra double sure that none of those fixes were broken (not so hard), but also to fix them if they were (which can be quite expensive if it's at the very end and we're trying to get the product out the door).

Eleven years

This coming Monday, I will have been working for Microsoft for eleven years, migrating from my initial position in Excel development into the newly-formed MacBU (or Macintosh Product Unit, as it was called in those days). I took some time this past week to look through what all I had really managed to accomplish over that time frame, partially wondering whether I was living up to my (self-gauged) potential. It's hard to recall all the little things I did for the various projects; the ship list seems to be the remaining milestones. Fortunately for my memory, those are permanently affixed to the Ship-It awards (I ran out of space on my first one), and I have the Shelf-of-Ship, a display of promotional boxes from the various products shipped (including the somewhat-hard-to-come-by Office 98J CD).

MacBU seems to have progressed a bunch over the years too. I missed the 68k to PPC transition that had happened with Office 4.2, but between picking up Windows components that Windows Office doesn't have to maintain for themselves (OLE, OLE Automation, Visual Basic for Applications), having to switch from our no-longer-supported Macintosh cross-compiler in MSDev to Metrowerks CodeWarrior, and undergoing a major OS revision (OS 9 to OS 10), we've come a long way. Now we get to look at CFM vs. Mach-O, moving to Xcode/gcc and adding Intel to the platforms (and largely the former two because of the latter). Roz has made a public commitment to work on it, and Erik Schwiebert, my current lead, has already posted on some of those trials.

At the same time, while we don't have to play porting games with Windows Office, as their source diverges from ours, and as we need to maintain file format compatibility (to at least roundtrip the things our code doesn't support), we have a delicate balance to maintain in terms of what needs to get brought over from Windows Office and how it gets brought over: Do we port it and try to keep our code in sync with theirs? Do we clean it up as a v. n+1 version of the same feature? Do we rewrite it in a MacOffice paradigm? Do we do nothing and suffer the consequences?

Each of these options has its negatives. Porting/synching code has the problem that the many more developers working on Windows Office produce much more code than we have time to carefully review and integrate, and even if we do so, we have to live with the drawbacks of their particular implementation; on the good side, it means that if we have to port more of their code added in a later release, it'll be easier to do, since there will be fewer differences between our code and theirs.

Cleaning it up suffers from the same problem of quantity, and whereas the finished product might be better and/or more elegant from a pure code-maintenance perspective, we complicate the future porting strategy: will we still be able to add the new feature they added to our "cleaner" version of the code, or will we have to throw out what we did and go back to straight porting later? (To be fair, it's not strictly a one-way operation; sometimes Windows Office ports code from us, and they face most of the same issues, if not the quantity issue. However, I doubt most of the pure code-cleaning gets back-ported unless it adds significant features; from most people's perspective, all it does is add risk to shipping on time.)

Writing a MacOffice version of the same feature has similar consequences. And when file format is involved, not supporting it is just not an option.
Rick Schaut has been posting on the most current incarnation of file format compatibility since mid-2005.

For better or worse, MacBU hasn't been the only Macintosh software developer at Microsoft over these years. During its inception, a bunch of teams offloaded the Mac versions of their code, some with a big sigh of not-having-to-maintain/support-Mac relief. There were (and are) still exceptions... The Outlook team continued to produce a Macintosh Outlook client up to the 2001 edition. Encarta still produced their Mac applications (even though they usually had two developers all told for both platforms). The Windows Media Player team produced their, ahem, client for the Macintosh, though that has been recently supplanted with the Flip4Mac version. (This, by the way, caused a minor debacle, since their PR dept released the information on the same day Roz made the five year commitment.) The hardware group still produced their own Macintosh drivers, and the Services for Macintosh team on Windows Server worked on the Microsoft UAM, which, by the way, seems to have been replaced by an Apple-provided version of the same thing by 10.4.6 that even works on Intel. Included among the new exceptions are the WPF/E project, as well as the cross-platform shared source initiatives. In all of this, these groups are acting as peers, without a specific Macintosh oversight group. The tradeoff is that there's freedom for these teams to do what they need to do for their product, at the cost of a lack of Mac-product uniformity in quality, best practices, access to Apple information (MacBU has the closest link to Apple), and so forth.

As much as the new Intel boxes are creating new work for me and my team, they still give me the excitement that something new is always happening in the Macintosh world. I look forward to writing more "interoperable" software, and hope to see another decade of closer Apple/Microsoft integration and partnership.