Sunday, February 13, 2011

Making Software: Evidence to Defuse Programming Holy Wars

I'd like to review a new software development book that I've found particularly interesting: Making Software: What Really Works, and Why We Believe It, edited by Andy Oram and Greg Wilson. After thoroughly enjoying Oram and Wilson's Beautiful Code, I was very much anticipating this next collection of essays from some of the industry's leading voices. Greg was a professor of Computer Science at U of T, my alma mater, and has been a great advisor to me in my career. He currently spends his time working on Software Carpentry, an effort to teach programming practices to scientists. Consider a few of the questions the book takes on:
  • Does TDD work?
  • Is Python better than Java?
  • Are good programmers really 10 times more productive?
  • How do you measure programming performance?
  • Is open source software better than proprietary software?
  • Do design patterns work in practice?
It is enough to whisper one of these questions around a group of programmers to spark an impassioned debate. Can anyone actually be right? How can we answer these seemingly subjective questions? Making Software attempts to find credible qualitative and quantitative evidence to answer them. It is no longer adequate to present arguments without showing the facts. It is time we applied the scientific method to these questions, gathered solid evidence, and impartially evaluated the implications.

In 2009, ThoughtWorks' Martin Fowler gave a talk entitled "Three Years of Real-World Ruby" in which he presented the results of the 41 Ruby projects his company had worked on during that period. He surveyed programmers to see how they felt about working with the language, and he argued for the adoption of Ruby by showing evidence of its success within his organization. This was a fascinating real-world study and is exactly what the authors of Making Software would like to see more of.

Internally, many software development companies gather evidence of their failures and successes in hopes of finding the magical formula for developing quality software quickly. Few are willing to release that information to the public, which is part of why we don't have an abundance of empirical studies on software development. In recent years, however, more such studies have been appearing; high-quality studies are out there waiting to be referenced.

My favourite chapter of the book was Steve McConnell's "What Does 10x Mean? Measuring Variations in Programmer Productivity". McConnell is, of course, famous for the highly successful Code Complete and other popular works such as Rapid Development and Software Estimation: Demystifying the Black Art. In this essay, McConnell provides substantial evidence that the "order of magnitude" difference in programming productivity is not merely anecdotal but a well-supported finding. I've definitely seen this difference in productivity during my time at Electronic Arts; there were numerous experienced game developers who were clearly getting things done much faster than I could. However, I imagine that familiarity with the code base was a major factor in that particular case. More interesting are the studies where programming teams are given new projects to work on and are more or less on an even playing field. This is a very hot topic, as such research may reveal the productivity secrets of the elite programmers. Now that's information we mere mortals are dying to hear.

What makes this book interesting is that it attempts to treat issues in software development in the same manner that we would treat anthropological issues. The authors take on controversial topics that programmers love to argue about and give us meaningful evidence to further the debates. Making Software is a great read for all programmers, whether or not you are 10x more productive.