Monday, November 16, 2009

Delta is my friend

I think it was a couple of years ago, while working on the CLR, that I ran into what appeared to be a compiler issue in a newer version of gcc. Zipping up the entire enlistment and shipping it to Apple would have been a rube mistake: (1) Microsoft doesn’t really want its source disseminated unnecessarily, even when the partner has a limited non-disclosure agreement. (2) Even the youngest tester knows the mantra: get a small, simple repro.

Suffice it to say that our source code is rather complex (and probably a small factor more complex than it needs to be). Generating a giant .ii file and then culling it by hand, keeping the reproduction while shedding intellectual property, is a soul-crushing exercise. You can make some educated guesses at what is causing the compiler to behave oddly, but sometimes those guesses are ill-founded, and there’s no easy way to get to a minimal repro.

Enter Delta, which Mike Stump of Apple pointed out to me. It takes the tedium out of reducing the reproduction to a minimal one, which usually makes the obfuscation/renaming pass much easier (unless something about the names themselves is part of the problem).

You start with your giant reproduction: g++ flags flags flags -c file.ii, where file.ii is the preprocessed version of your source (i.e., the output of replacing the “-c” with “-E” on the gcc command line). In our case, this can be something like 1.8 MB of text. You then write a simple “interestingness” decider script: for example, one that checks that gcc succeeds in compiling the source but that the output contains a particular warning. Then you run Delta over several iterations; it selectively removes parts of the source file, tests the file for (continued) interestingness, and if the file is still interesting, it keeps the smaller result and carries on, looking for more text it can remove while keeping the repro intact. Several hours later (of the computer’s time rather than your own), it can knock the size down to a few K of text, and the whole problem becomes much more tractable.
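The “interestingness” script is nothing fancy: it just exits 0 when the candidate file still reproduces the problem. Here’s a minimal sketch assuming the symptom is a particular warning; the compiler flags, the warning text, and how Delta hands the candidate to the script (as $1 or under a fixed name) are placeholders to check against Delta’s own documentation:

#!/bin/sh
# interesting.sh: exit 0 iff the candidate still reproduces the issue.
# The calling convention varies by Delta version; assume the candidate
# arrives as $1, falling back to a fixed file name.
CANDIDATE="${1:-file.ii}"
# Compilation must still succeed...
g++ -O2 -c "$CANDIDATE" -o /dev/null 2> errs.txt || exit 1
# ...and the output must still contain the warning we are chasing
# (placeholder warning text).
grep -q 'dereferencing type-punned pointer' errs.txt || exit 1
exit 0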

I’ve probably had to post only six or so bugs that fell into this category, but Delta has saved me tremendous time and effort.

Getting background pictures on disk images to “stick”

As part of working on an installer for TeachTown, I ran into a problem while building an automated process that produces the installer and wraps it in a nice disk image: assigning a background image via AppleScript would work in the moment, but after you hdiutil detached the image, the setting would vanish. E.g.,
osascript -e 'tell app "Finder" to set background picture of icon view options of container window of disk "disk_name" to POSIX file "full_path_to_image_file_on_disk_image"'
hdiutil detach /Volumes/disk_name

We found that running sync didn’t help, but that if we waited a while (e.g., sleep 5), the setting would usually “take.” Then Apple replied with the Magic™: the Finder doesn’t immediately commit changes to its metadata to disk. If you told the Finder to eject the disk, it would know better: it would write its metadata and then eject. But hdiutil detach just unmounts the disk image out from under the Finder, so the Finder doesn’t necessarily know to write the metadata first.

Workaround and/or fix: Use the update disk verb to force the Finder to write out the metadata. e.g.,
osascript -e 'tell app "Finder" to update disk "disk_name"'

That should make sure that the background image and icon size/placement settings you made get committed before the detach.
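Putting it together, the ordering that stuck for us looks like this (same placeholder names as above; the only change is the update before the detach):

osascript -e 'tell app "Finder" to set background picture of icon view options of container window of disk "disk_name" to POSIX file "full_path_to_image_file_on_disk_image"'
osascript -e 'tell app "Finder" to update disk "disk_name"'
hdiutil detach /Volumes/disk_name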

Tuesday, October 13, 2009

Avoiding strange gdb errors in Snow Leopard

If you’re suddenly experiencing the following warnings when loading a binary
unable to read unknown load command 0x80000022
and the following error when running your binary
Error calling thread_get_state for GP registers for thread 0x14855lx
error on line 207 of "/SourceCache/gdb/gdb-768/src/gdb/macosx/i386-macosx-nat-exec.c" in function "fetch_interior_register": (os/kern) invalid argument (0x4)
then it’s very likely you’re running the wrong version of gdb. In my case, I have both Xcode 3.0 and Xcode 3.2 installed, since the former is the “officially supported” toolset (i.e., what the build lab is using to produce our products). Unfortunately, our build environment, in an attempt to keep you from accidentally using the wrong tools/SDK to build the product, sets the PATH to include the official toolset’s /usr/bin (e.g., /Xcode3.0/usr/bin) ahead of /usr/bin. Thus, I was accidentally getting the older version of gdb, which doesn’t work quite right on Snow Leopard.

Fix: Use the Xcode 3.2 (or later) version of gdb.
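A quick way to confirm which gdb you’re actually getting; the paths here reflect my setup, where Xcode 3.0 lives in /Xcode3.0 and Xcode 3.2 went to the default location (so /usr/bin/gdb is the newer one), so adjust to your own layout:

which gdb        # /Xcode3.0/usr/bin/gdb means the old toolset won the PATH race
gdb --version    # the Xcode 3.0 gdb reports Apple build gdb-768, matching the path in the error above
# One-off workaround until the PATH ordering is fixed (MyBinary is a placeholder):
/usr/bin/gdb ./MyBinary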

Monday, August 31, 2009

Rich support for multiple (versions of) OSes: autoconf?

After many years of working on multi-platform applications (and now also multi-platform platforms), I find it incredibly difficult to be rich on all of them without either drastically increasing the cost of production (with regard to testing resources) or reducing overall quality (screwed-up boundary cases, where the “boundary” is much, much closer to customer uses than one would like). One of the longer-term issues that especially affects the Mac platform for Silverlight is our mechanism for building the CoreCLR.

What we’re building today is closely related to Rotor v1 and Rotor v2. The Rotor project was the shared source release of the Common Language Infrastructure, aka SSCLI. Rotor v1 was fairly multi-platform because Microsoft really wanted to show that its new CLI was a realizable technology on alternative platforms. It was released under the shared source license so that academics could peer under the covers and see that, despite a facile JIT-compiler and garbage collection mechanism, the system worked, and how it worked. Figuring they’d proved their point1, they hamstrung the Rotor v2 update further: it no longer needed to work on all platforms. The nice framework they had built for compiling Rotor v1 on multiple platforms suffered bitrot and would build on nothing but Windows.

Fast forward several years and past an internal project that never saw the light of day, and you get to the inception of the Silverlight project. They had a mechanism that would (mostly) build something for FreeBSD and, with some minor tweaks, would build for Mac OS X2. Furthermore, another (defunct) team had gone through the effort to port the commercial x86 JIT-compiler and the commercial garbage collector to the GNU toolchain, so they had most of what they needed to start working3. Several teammates and I worked on this for a couple of years before it was picked up by the Silverlight team for the 2.0 release. Over the course of that time, some effort went into not completely destroying the ability to build on other OSes. That said, the only shipping product that was using the Rotor project at all was the Mac version of the CoreCLR4. I am positive not only that you could not build our CoreCLR on other platforms using our source, but that I myself included code and/or improper #ifdefs that would make it not work. Not by design, but simply by not having a product for that platform.

Mono/Moonlight is both a blessing and a curse in this regard. As much as I might have wanted a business proposition that put a Microsoft-written CoreCLR on more platforms than just MacOS, the environment was not (and is not) ripe for such an idea. The great deal we have with the Mono project means the platform is likely to reach many, many more OSes than Microsoft proper would have been willing to fund. On the other hand, the “curse” side is that there really is no platform other than MacOS for the autoconf-based, multi-platform-aware, multi-platform-capable build system to build for. No reason at all to have all this extra gook.

In fact, this gook gums up the works somewhat. We broke a whole bunch of the original assumptions when we finally released the Mac OS CoreCLR. We departed from the autoconf premise that you’re building on the machine that the built code is meant to run on. Instead, we wanted to run on 10.4 and up, independently of whether we were building on 10.4, 10.5, or even pre-releases of 10.65. Furthermore, we wanted to be warned of potential future issues on 10.5 and later. Add to this the idea, before it was deprecated, that we might build x86 on ppc and vice versa, and the fact that there are build tools that we need to create in order to build the product, and then there’s the product itself. The former need to run on the current operating system (even if through some kind of emulation; e.g., on x86_64, ppc (if Rosetta is installed) and x86 are valid build-tool architectures), and the latter need to have the cross-OS-version behavior we want (i.e., not using any APIs deprecated in one of the later OSes, and selectively using new-OS APIs via dlsym, CFBundleGetFunctionPointerForName, or weak-linking). If we had gotten it right (bwahaha), we’d’ve cached config.guess files for other architectures and made sure the built products would Actually Run™ on the platforms for which they were built6. As it stands now, we have this overly complicated system that, yes, allows us to use the 10.5 compilers to build the stack canary support into the applications we use on 10.4 (and not when we’re building with the 10.4 compilers, which is purely an internal testing mechanism), but it also means we pass all sorts of extra grik around (a sketch of how these flags reach the compiler follows the list):
  • Mac OS SDK
  • Min ver (currently 10.4)
  • TBD: Max ver (as max as we can get it)
  • -arch flag for gcc (since autoconf cannot guess this with any utility uniformly across 10.4/10.5/10.6)
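Concretely, the hand-off to a configure-style build ends up looking something like the following. The SDK path, minimum version, and architecture are illustrative placeholders rather than our actual build glue; the point is only that all three have to be injected from outside, because autoconf won’t deduce them for a target that isn’t the build machine.

# Illustrative values only, not our real build scripts.
SDK=/Developer/SDKs/MacOSX10.5.sdk
CC=gcc CFLAGS="-isysroot $SDK -mmacosx-version-min=10.4 -arch i386" \
  LDFLAGS="-isysroot $SDK -mmacosx-version-min=10.4 -arch i386" \
  ./configure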
Plus, we use this external mechanism to enforce these same things on our partners’ Xcode projects (not everyone made the decision to use the same build system for both Windows & Mac, much less the NTBuild system that we inherited) — we invoke xcodebuild with the specific SDK and various other #defines we want.
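The xcodebuild flavor of that enforcement is roughly the following; again a sketch, with the project name and the #define as placeholders rather than our real settings:

# Pin the SDK, deployment target, arch, and defines on the command line
# so the partner's project settings can't drift.
xcodebuild -project Partner.xcodeproj -configuration Release \
  -sdk macosx10.5 \
  MACOSX_DEPLOYMENT_TARGET=10.4 ARCHS=i386 \
  GCC_PREPROCESSOR_DEFINITIONS='SOME_FEATURE=1'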

However, even when we get this right, using the old APIs on older OSes and the new APIs selectively (i.e., only on the OSes where they’re supported), there’s no default mechanism for demonstrating that we’re doing it right. Nothing calls out that we have references to specific deprecated APIs, but only on OSes where we expect them to be usable. There’s no handy mechanism to segregate those uses so that they can be deprecated when we change our own internal requirements to support a newer OS. It’s all internal manual review. I suppose things move slowly in the OS world, but I’d prefer to be able to qualify all of these call sites with the right metadata: use this on downlevel OSes and this on modern OSes, and effectively remove it when it’s no longer necessary, with some kind of “deprecated code not included” warning to let us know, so we can remove it at our leisure.

Notwithstanding these cross-OS-version issues, there’s still the issue that the autoconf-centric mechanism goes stale by default. We regularly create or view new Xcode projects in newer versions of the Xcode toolset just to see what flags get sent down to gcc/ld, so we can emulate the new behavior in the configure.in scripts. There’s a fairly rational argument to be made that we’d require less intervention if we just used Xcode projects for everything. The counter-argument is that if we did, we probably would not be as introspective as we are now about how projects are built, what it takes to get them to build, or, more importantly, to Work™; we’d blithely build and sort out the bodies later.

There’s still no real solution for having multiple config.guess files for the multiple OS versions your app supports, plus some summarizer that converts them into static code changes for the things that are determinable (like the fact that every OS version we support has the dlfcn.h functions), and into runtime checks plus divergent behavior for things that came into existence with a particular OS version. At this point, is autoconf too unwieldy to keep? Should we move to a simpler mechanism and live with the stuck-to-this-OS-ness it implies? Hard to tell. Any change will require some work. The only question is whether it’s more work than we would otherwise need to do over the long haul.
--
1Actually, I don’t know what they were thinking; this is supposition on my part. It predates my joining the CLR team by a couple of years.
2In fact, one of the reasons I got the job is that I complained to them that their Rotor v2 release had broken the Mac OS X build, and that I had some patches that would fix some of the damage. This put me in contact with one of the cross-platform devs (from whom I ultimately inherited most of the Mac responsibility) and the dev lead of the project, who was willing to give me contribute access even though I was from another Dev organization. When they realized they needed a Mac guy, they talked to me. As it turns out, they might have benefited more from a Darwin/FreeBSD guy, since my knowledge was more old-skool Toolbox than born-again *nix (I had done *nix work in college when I wasn’t working on 68k code). At least that was true at the time. No longer. Plus, now they know more about CoreFoundation than they ever wanted to know.
3Sadly, they didn't have a commercial-quality JIT-compiler for the PowerPC, so they just recirculated the good ole FJIT. (The “F” stands for “fast”, and by “fast”, I mean JIT-compiler throughput, not JIT-compiled-code execution speed.) The results were pretty shoddy — it worked but the word “performant” wouldn’t be seen in the same room as it.
4The build system for the Windows Desktop CLR maintained the rotor project for quite some time beyond its last release, in the event it would be shipped again. However, in the spirit of “testing just enough” (i.e., testing sparse points of the matrix), we stopped building the Windows rotor project some time ago, on the presumption that we were doing better testing by having a real Mac rotor project that we actually had to ship.
5Don’t even get me started on the fact that you cannot write code on Mac OS X that is OS SDK-independent without Major Hackery™. Regular changes to function prototypes break our warnings-as-errors compilations.
6In the days before PowerPC was removed from our list of shipping targets, we had the capacity to build the opposite architecture on the same machine, to make sure you hadn’t hosed the other side with your changes. That was all fine and good, except that the result would not run at all. It did a fine job of catching compiler warnings, but at the top of the world, the endian #defines were all wonky because the values for the current build machine had bled into the target architecture. If PowerPC had ever been made little endian (not just little-endian mode, like the G4 and earlier), perhaps this might have worked.

Tuesday, August 04, 2009

Failure to clean

While considering what to do for MQ1, I ran across a presentation (video) that Peter Provost gave at NDC 2009. He has an awesome analogy between coding and cooking at 31m into the presentation:
I was very fortunate to work for a good chef, and one of the things that he taught me was to be constantly cleaning up as I was cooking… [If you are forced to have the big clean up at the end,] as you’re constantly piling on the scraps of food all over the counter, and the dirty knives and plates, it totally gets in your way. At the end you stop and say, “We can’t cook any more food for an hour; we got to clean up the kitchen.”
If you can imagine that a version-six product (where a product cycle is about two years long) is like a kitchen that people have been constantly working in for twelve years, where at no time has anyone actually cleaned it up all the way, and where finding clean space to work in and clean tools to work with gets progressively harder and harder, then you have a very good idea of what writing software2 is like.
--
1MQ is Microsoft parlance (perhaps others’ too) for a “quality” milestone. Milestones are one way to divide up a large set of tasks that you want to work on during the course of a single product cycle; the ones that add features are generally M1, M2, etc. MQ (or M0) generally starts a product cycle (or falls in the interstice between cycles), and generally focuses on infrastructure and code-quality improvements—things that have only a secondary impact on whether customers will want our software, namely that we stay efficient in making those other changes.
2Well, writing software in a non-agile or limited-agility way.

Tuesday, July 21, 2009

Entourage is taking a work break

I honestly don’t remember when I first started using Entourage. I suspect it was back before Project Athena was named Entourage, when it was simply the Mac version of Outlook Express. I’d hooked up my work Exchange server account via IMAP, and later via Entourage’s built-in Exchange support. As of yesterday, for the first time in many moons, I can’t use Entourage to access my work e-mail.

The reason: We’re moving on up. Many Microsofties’ accounts are getting migrated to Exchange 2010 so we all can dogfood, dogfood, dogfood. And WebDAV in Exchange 2010 has gone the way of the dodo1.

Never fear, intrepid Mac Office users: in lieu of switching to the aging, legacy, Windows-centric MAPI, the Mac Office folks are designing forward to the newfangled Exchange Web Services (EWS), the faster replacement for WebDAV. EWS works on Exchange 2007, and will have expanded capabilities in 2010.

Nonetheless, we dogfooders are stuck choosing between hot dogfood-on-dogfood action (i.e., the latest builds of Entourage EWS against pre-release Exchange 2010) and waiting until Mac Office makes their release. Since I’m no longer on the team (i.e., not building and debugging it day in and day out), and since it’s both my home and my work data on the line, I’m inclined to sit this one out and wait on pins and needles until they come out with the new coolness.
--
1A better list of changing APIs for Exchange 2010 can be found here.

Saturday, July 11, 2009

Home networking and its woes

A couple of weeks ago, I updated all of the Ethernet hubs in the house to support gigabit Ethernet. It’s not that I have many devices that would take advantage of this, but I suspect more and more will as time goes on. Nonetheless, this most recent upgrade still didn’t quite work out perfectly: the Netgear ProSafe GS108 in the office connecting to the Netgear ProSafe GS105 up inside the Leviton Structured Media™ box won’t train to gigabit speeds, always falling back to 100 megabit. Grr. Not sure exactly how to diagnose the problem. (The same problem doesn’t occur between the 105 and an identical 108 unit in a different room.)

At the same time I updated the hubs, I decided it’d be nice to actually have more than one working telephone jack available. (I mean, it’s downright tragic to have this awesome patch system in the Leviton box and yet only have one real phone line—connected to the modem itself.) I could put little DSL filters in front of them, but that posed a problem for the only other (prospective) phone in the house — a wall-plate-mounted kitchen phone. We didn’t have a DSL filter that would fit that and stay on the wall. So I did some research and found that, of course, Leviton makes a DSL filter board (47616-DSF) to stick into the box. When I got it, I realized that I didn’t have (nor really know how to use) a punchdown tool. Fortunately, one of my friends who is a hardware geek did, and he explained how to use it. I rewired the phone to go through the board first, but got zero love: no signal seemed to come out of the “to modem” port (or at least it never reached the modem), even though the phone lines still seemed to work correctly. Double grr. Now it’s re-patched to the original configuration, and there’s still no phone in the kitchen.

After all this, we started noticing that our internet service seemed seriously degraded. We have Qwest “Platinum Package” using Drizzle as our ISP. Today, I looked at the modem’s web interface, and it said that our 7 Mbps connection was connected at 3360 Kbps. That seemed rather unreasonable, so I did some research and found some ugliness: the current description of “Platinum Package” advertises “up to 7 Mbps” (which makes sense to some extent, seeing as they could have the fastest modem connection ever and the ISP might still be the slow link), but they only guarantee at least 3 Mbps! I don’t remember that being part of the deal when I signed up; perhaps they changed the policy? (It’s not like they publish historical policy changes so you can see when the terms of the service changed out from under you.) I thought it was possible that my forays into rewiring the punchdowns had caused the problem, but after connecting the modem directly to the telephone test jack where the phone line comes into the house, it trained even worse. Putting it back where it was got it to retrain at 4400 Kbps, but then I updated the firmware on it (which required a reset and retrain), and now it’s back to 3700 Kbps. Let’s just say that our instant-watch Netflix movies on the Xbox 360 went from 4 bars, sometimes dropping to 3, to starting at 2 bars and bailing out entirely within a couple of minutes. Really rather frustrating. Qwest’s web page talks about a “Quantum Package” (aka Fastest) that goes “up to 12 Mbps” (with a guarantee of what?), but their availability query suggests it’s not available for my phone/address.

I am tempted to jump the Qwest DSL ship (or at least them running the show — using Speakeasy or something) and venture into cable-internet. I just really want to have something akin to a guaranteed 1 MB/s (8192 Kbps) down and not have it preclude the ability to connect from the Internet into a machine on the home network.

With all of these sequential failures, I’m even less inclined to continue to plan an updated wireless network, complete with a guest VLAN. I had hoped that perhaps the ReadyNAS NV+ would support some RADIUS service so I could just make the denizens use 802.1x, and route guests only to the internet. Maybe someday.

Friday, July 10, 2009

Silverlight 3

Our partners over in the Silverlight Runtime (SLR, formerly Jolt) have done a bang-up job working on Silverlight 3, which is released to the public today. There are scads of new features. Go check it out!

We did some very targeted features in the CoreCLR for Silverlight 3, but it is largely the same engine as it was before. One of the important parts for Mac users is that it includes some changes for compatibility with Snow Leopard. (Silverlight 2’s CoreCLR mostly works, but there are some edge cases that might surface issues, depending on the Silverlight application.)

Over here, we have our heads down for the most part, putting the finishing touches on the Visual Studio 2010 (aka Dev10) release. We’ve already released a Beta 1 of the new .NET Framework v4 (including our CLR bits). Finally, the desktop CLR will see some of the stuff that we’ve been showcasing in the CoreCLR! Furthermore, Visual Studio will have several improvements to support the design and debugging of Silverlight content.

It’s always nice to see one’s work finally make it to the public.

Thursday, July 09, 2009

Wednesday, May 20, 2009

Back in action

Just received a replacement hard drive, the 500GB Seagate Momentus 7200.4, and after spending 40m yesterday taking out the old 320GB 5400RPM drive that would periodically stop responding, I’ve been spending time getting my MacBook Pro back into action.

Transferring my Mac OS X partition with Disk Utility and my WinXP bootcamp partition with WinClone went very smoothly, and now I’m finally at the point where I'm going to put a code enlistment back on it. DevDiv for this release is using Team Foundation Server for our source control, and for the Mac side, we’re using the Teamprise client to access the server. It’s churning along in the background considering what files it’ll need to pull down to replace my deleted enlistment.

On the cool side, due to our data-at-rest policies for laptops, I’ve recreated an encrypted sparsebundle disk image for storing my source. Historically, I would have just used the -encryption flag to hdiutil and relied on the security of my local machine’s keychain to keep the secret. However, since Gemalto has released Mac OS X tokend plugins for its .NET v2+ cards, I can use the certificate off of my Microsoft badge as the security, so thieves now have to either crack the stock encryption or steal both my badge and my PIN. (Good luck with that.)
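For reference, creating such an image is a one-liner. This is a sketch with my own size and volume name; by default -encryption prompts for a passphrase, and the certificate-based access is the part to double-check against the hdiutil man page (it takes a -certificate option) before relying on it:

# Size, filesystem, and names here are illustrative, not my actual setup.
hdiutil create -type SPARSEBUNDLE -size 40g -fs HFS+J \
  -encryption AES-256 -volname Enlistment ~/enlistment.sparsebundle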

Building services are going to be replacing the carpet and putting a fresh coat of paint on the walls this evening, so I’m putting all my books and peripherals into boxes. I only hope that my sync is done by the time my office is packed so I can go work in the new commons.

Sunday, April 19, 2009

The shackles of IP taint

or
How it’s easier to drink the Kool-Aid™
if it’s the only drink you’re allowed to have.

The problem: If you, as a developer, introduce intellectual property into Microsoft’s distributable software portfolio1, and the ownership of that intellectual property is under dispute, Microsoft may come under (additional) threat of lawsuit.

The “solution”: Make the policy such that Microsoft employees have no way of copying any external IP—make it against the rules to read Open Source code by default, even outside of their job.2 To a lesser extent, except for specific competitive reviews, even using alternative products (esp., if it’s a development product) is frowned upon lest you absorb possible IP in the visible framework code.

The ramifications for your average Microsoft software group aren’t that problematic. They’re generally consuming the Windows OS (Microsoft), using the Visual Studio tools to build their project (Microsoft), and the majority of the analysis and testing tools are Microsoft. Furthermore, if you don’t understand an API’s MSDN documentation, you can just load the symbols and debug right into it and find out for yourself what is going on (or going wrong).

But it does mean that your “state of the art” is limited to the Microsoft-available solutions, and, furthermore, that you don’t really know what you’re missing.3

As soon as you get on the fringe, the all-Microsoft strategy starts to slip. As a MacBU developer and now as a CLR developer working at least part time on the Macintosh CoreCLR, very little is in-house. Trying to track down something in a closed-source Apple product is just as bad as Apple trying to track something down in our closed-source products, so we get a taste of what it’s like to be a third party. But even when it’s open source, we still can’t download it and look at it. Even if it would be expedient to do so to figure out what our (mis)use of it was. Even when the documentation is sketchy or ambiguous.

So as it stands, we rely heavily on ADC. We can’t peer into the code, but they can, and they give us hints or code samples. And obviously, they go and fix the bugs we report in their software, as well as in the family of open source projects that are released with Mac OS X. Which leaves us back again working in our own sandbox.

It sometimes takes years for external ideas and outstanding tools to penetrate Microsoft, and they seem to arrive almost entirely through new industry and college hires bringing in their previous backgrounds (taint notwithstanding4), generally from the open source community. The rest of us have to work hard (and run a rather annoying gauntlet) to keep from relaxing into navel-gazing.
--
1As opposed to in-house tools that are not distributed.
2Unless it can be demonstrated that there are sufficient business reasons to justify the expense of having the legal team review the licensing, and incur the additional risk to Microsoft.
3On the other hand, we also have the problem that there are usually many different (sometimes nigh identical) solutions to the same problem that aren’t well publicized and thus proliferate.
4If a new hire made significant contributions to an open source product, then they’d be barred from working on any Microsoft product that would be of similar ilk, presuming that that would be the IP they might leak from external sources into the MS codebase. Unfortunately, there’s no good mechanism to determine what the overlap is, so it’s just a matter of how much risk the dev managers and legal team think they’d incur.

Tuesday, April 07, 2009

Baked Mac & Cheese

I’ve never really been particularly good in the kitchen. I tend to need a recipe and to have all the ingredients, and generally to only work on one thing at a time. That and I’m never quite sure where everything is, and generally have to look in three separate places.

Nonetheless, recently, I’ve been making a particular dish multiple times, and am getting better at timing things to be ready Just In Time™: The Joy of Cooking (‘97 edition)’s Baked Macaroni and Cheese.

My friend Mason, author of the Tiny Kitchen iPhone application, swears by the Cook’s Illustrated version (the older, adult version), and I will have to try it eventually. Nonetheless, his app has helped me on more than one occasion to set up a shopping list so I can remember to get everything I need for it.

Towards that end, I am adding it here, in Tiny Kitchen form, sans Joy’s copyrighted description:

---begin-tinykitchen---
Title: Baked Macaroni and Cheese (Joy of Cooking)
Time To Cook (in minutes): 45
Tags: dinner

----
2 cups elbow macaroni
3 tbsp. butter
2 tbsp. flour
2 cups milk
0.5000 - onion (minced)
1 - bay leaf
0.2500 tsp. paprika
2.2500 cups sharp Cheddar or Colby cheese (grated)
- - black pepper (freshly ground)
0.5000 cups bread crumbs (fresh)

----

Guid: 4EC3A73B-5A81-4EA2-A600-8C20A84AC72F
---end-tinykitchen---

I’ve tended to shop at Trader Joe’s for the ingredients, and use these specifics:
  • Like Cook’s Illustrated suggests, I use a blend of mostly sharp cheddar (extra sharp Celtic or extra sharp Wisconsin, for a more orange-y look) and some Monterey Jack.

  • I use fusilli rather than elbow macaroni.

  • I use (organic) whole milk.

  • I also buy frozen chicken breasts, and broil two to add a little tasty protein to my fat and salt.

I’ve used Tiny Kitchen’s recipe scaling feature multiple times, having prepared it as a single batch only once; more often it’s in double or triple quantities. I’ve found it difficult to figure out the right amount of time or, really, the right creamy consistency to reach before pulling it off the stove and adding the cheese. Also, adding salt and pepper to taste for varying quantities seems hard to get right.

I made it again this past weekend in a double batch, half for us, and half for our newly-parents-again friends S. Ben, Meghan, “little” Nathan, and now Eliza. From my own tastebuds, it was a good run.

Saturday, April 04, 2009

IP for loans?

With the big three auto manufacturers repeatedly asking the government for loans to stay alive, I think now is an excellent time to use our leverage and make those loans conditional on their selling the IP rights to selected parts of their patent portfolios into the public domain in exchange for some amount of debt forgiveness. Whereas it’s going to be impossible to clean up patent law in the near term, it would be interesting if some of those defensive patents became available for use without licensing. I wonder how much is in there that has inhibited competition?

On the other hand, as an alternative to releasing them into the public domain, it might behoove American citizens to have the government retain these patent rights on our behalf. In today’s world of cross-licensing deals, where one patent-infringement case always spawns a host of counter patent-infringement cases (in a MAD sense), perhaps it’s better to have a defense for the public.

Right now, if you, as an individual, infringe on a patent without the requisite licensing, you may get a cease-and-desist order (if they find out) and may have to pay a rather hefty fine (with treble damages if you knew you were infringing). Companies can get hosed, because if their product is based in part on the infringed patent, then, depending on the licensing that is available, it may no longer be a viable product once that happens. In the case where they’re simply copying the idea, that might be a good thing, but in the case where it was parallel discovery1, they’re just out of luck.

If, like the companies with big patent portfolios, the government managed this portfolio on behalf of its citizens, they could do some interesting things. Try out this thought experiment based on the following licensing strategy:
  • Every citizen gets a license to use anything in the government-managed portfolio for non-commercial use. This equates nicely to the effect of having the patent be in the public domain.

  • Any commercial entity can acquire a low cost license to use the patent, iff they agree to cross-license anything in their portfolio to the entire American public for non-commercial use as above for as long as they use the license.

  • Any commercial entity can retain licensing privileges for their IP, but there will be a much higher licensing cost to use a patent in the government portfolio.
Furthermore, the government can renew the patent as long as anyone else could.

Using this mechanism, we improve on the benefit to the public over these patents going into the public domain by (1) giving incentive to the companies to open their portfolio to the public, and (2) providing extra government income, which could be earmarked for the maintenance of the offices managing the government portfolio, as well as the patent office itself.

Personally, I think the IP system needs a fairly large overhaul to fix the issues of (1) the increasing (and varied) pace of discovery and obsolescence, and (2) the volume of ideas and the general lack of cross-referencing2. Nonetheless, in lieu of a big rewrite (which is unlikely to occur wholesale), this incremental fix might be worthwhile. And GM could start us off.
--
1Parallel discovery is actually very likely to occur. Since there are treble damages for knowing you’re infringing, it behooves one not to know. Ultimately, companies stick their heads in the sand and hope not to infringe, while their legal teams, separated from the rest of R&D by a Chinese wall, sit ready to do patent searches if a lawsuit comes in the front door.
2I expect that the patent publications are like dry versions of the Whole Earth Catalog rather than a hierarchically organized, indexed, and cross-referenced document that leaves little doubt whether your product infringes and/or your claim is a new claim.

Thursday, April 02, 2009

When the government speculates

I’ve been listening to the Planet Money Podcast recently, since its recent cross-over broadcast on This American Life (375: Bad Bank). They’re doing a brilliant job at explaining the complicated financial strangeness that’s going on. In a recent episode, #23: In Search Of Bad Guys, about half-way through (14m 9s), they talk about solvency vs. liquidity. Check it out, and then come back.

The upshot is that the mortgages the banks are holding aren’t worth their full fair value, as not everyone is going to pay them. The banks want to sell them at as little loss as possible in exchange for reducing risk. The question is whether the price at which people will buy these (the bid price) is artificially reduced by the market’s general lack of available funds to invest.
If we had prescient bean counters today, they would foretell that the real value is going to fall into one of these three segments:
  • Above what the bank is willing to sell it for—This is a real win for all folks involved, though least for the banks. The banks take a loss to reduce risk, but remain solvent. The investor-taxpayers make money on their investment while also staving off general economic problems caused by the closing of an insolvent bank.

  • Between what the bank is willing to sell the “toxic” asset for and what the current market says it’s worth—This is the big hedge. The idea here is that the bank has already lost money on the investment, and the taxpayers are going to eat further losses in exchange for keeping the bank solvent, as insolvency would presumably cause even bigger losses than these across the whole economy. However, it’s still not as bad as the market currently suggests (which is what the banks are currently arguing—whether it’s lack of liquidity or lack of good forecasting, the market is unnaturally depressed).

  • Below even what the current market thinks—This is the true ugliness, where we, the taxpayers, are eating even more of the bank’s losses. The bank is, for all of its previous losses, getting a great deal here, basically being given taxpayer funds to pay for its bad bets. As these costs increase, it becomes harder and harder to justify keeping the bank alive vs. just letting it fail. (I suspect the FDIC costs would be pretty easy to calculate; it’d be the derivative issues in the economy that would be hard to figure.)
There is an awesome self-reinforcement built into this system. If the government produces a program to buy toxic assets and preserve the banking infrastructure, then the economy is likely to improve (or at least improve sooner), resulting in an increased likelihood that the mortgages that are now so toxic will actually get paid. Now, if only they could show how we’re going to pay all this back and how long it will take.