You would think that because Apple provides their developer tools gratis, that means they're "free." There is a rather marginal cost involved in actually making the switch in the build system for your product (larger if you aren't using Apple's whole toolchain[1]), but in and of itself, that's still pretty cheap.
The real cost is the ineffable "baking time" cost of QA becoming sufficiently confident that the toolchain change did not introduce any regressions[2]. You can immediately run all of your existing unit tests and scenario tests and get a good idea of the quality of your product, true. But all of the manual tests need to be run by hand, and depending on the size of that corpus, it can take a while. Furthermore, not everything gets covered by these official tests[3], and some amount of ad hoc testing is generally desired. How much depends on the complexity of your product, QA's (apparent) understanding of that complexity, and QA's general paranoia level.
The best time to sell managers on a tools upgrade is at the beginning of a project release, though for long-term projects the tools may change multiple times over the course of development. In that case, strategic tools changes even after beta releases may be desirable, but you have to sell them on the strategy: what is the benefit? In some cases, the benefit is not obvious. For example, we know they improved the optimizer, but does that actually help us in our scenarios? Answering that requires private releases built with the new tools just to measure the performance changes before any benefit can be demonstrated. In other cases, new compiler or SDK features make a better product feature possible: for example, conditionally linking against 10.5 APIs without having to clunkily redeclare them in source that is otherwise tied to the 10.4 SDK, or, for 10.5/Xcode 3.0 specifically, the ability to add custom DTrace probes (see the sketch below). For us[4], unless we reported the bug ourselves, it's unclear which bugs were addressed in a new tools release, so whether any of them affect us can only be determined through testing. Ultimately, management can weigh the benefits against the known/estimated costs and choose to make the jump, or wait until we start the next release.
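As a rough sketch of the conditional-linking case (assuming a plain C translation unit, with a placeholder function name rather than a real 10.5 API): once you build against the 10.5 SDK with a 10.4 deployment target, a 10.5-only symbol can be declared weak-imported and tested at runtime, instead of being redeclared by hand in a codebase still tied to the 10.4 SDK.

    #include <stdio.h>

    /* Hypothetical 10.5-only function; the name is a placeholder, not a
     * real system API. Built against the 10.5 SDK with
     * MACOSX_DEPLOYMENT_TARGET=10.4, the symbol is weak-linked, so its
     * address is NULL when the program runs on 10.4. */
    extern void LeopardOnlyAPI(void) __attribute__((weak_import));

    void do_work(void)
    {
        if (LeopardOnlyAPI != NULL) {
            /* Running on 10.5 or later: the loader resolved the symbol. */
            LeopardOnlyAPI();
        } else {
            /* Running on 10.4: fall back to the old code path. */
            printf("10.5 API unavailable; using the 10.4 fallback\n");
        }
    }

The appeal is that the availability check is a single NULL test at the call site, rather than a hand-rolled declaration plus a dynamic symbol lookup.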
--
[1] For example, the delta for switching from the 10.4 headers to the 10.5 headers consisted only of adding a few explicit #includes that we had gotten away without because another system-level #include had done that work for us. Deltas between compiler revisions are another story, since the compiler gets monotonically stricter over time.
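To illustrate the shape of that delta (with a made-up translation unit, not one of ours): code that compiled only because a system header happened to pull in another header transitively can start warning, or failing, once that transitive include goes away.

    /* Under the old headers, <string.h> happened to arrive transitively via
     * some other system #include; under the new headers it must be spelled
     * out, or strlen() is implicitly declared and a stricter compiler
     * complains. */
    #include <string.h>

    size_t name_length(const char *name)
    {
        return strlen(name);
    }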
[2] While it is comparatively rare for a toolchain change to introduce a regression via an actual compiler/linker bug, it is much more likely that such regressions are the product of substandard design that (1) relied on unspecified compiler behavior or (2) was luckily not exposed by the previous compiler.
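A minimal, contrived example of the first kind of reliance (plain C, not our actual code): the order in which function arguments are evaluated is unspecified, so a new compiler can legitimately change observable behavior without being buggy.

    #include <stdio.h>

    static int counter = 0;

    static int next_id(void)
    {
        return ++counter;
    }

    int main(void)
    {
        /* Unspecified behavior: either argument may be evaluated first, so
         * one compiler may print "1 2" and another "2 1"; both conform. */
        printf("%d %d\n", next_id(), next_id());
        return 0;
    }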
[3] It should be no surprise that testing matrices are far larger than we have time to test. (Think along the lines of testing until the heat death of the universe.) Official testing may only sparsely fill the matrices along tuples of technologies that are interesting to combine.
[4] Although some of the toolchains Apple produces are open source, since Microsoft is also in the business of making compilers, it behooves us not to inspect the code changes made in a specific revision of the tools.