From time to time I re-read two books: The Deadline and The Soul of a New Machine. Both serve as a sanity check for me and for what is going on around me. Both are written as novels, which also makes them a bit of fun to read. Tonight I picked up The Deadline again, went to my favorite brasserie for some dinner and wine, flipped through the pages, and started reading somewhere in the middle.
Fine-grained design (sorry, I only have the German translation)
The key statement was that you can move more quickly if you spend less time on debugging. This is undoubtedly true, I think, but their remedy is something I have always disliked: fine-grained design that you try to verify before you start coding. I never liked these design documents for a simple reason: they spend lots of pages on the obvious to hide the sloppiness of what is poorly (if at all) understood. When verifying such a document you spend much time on the obvious and happily overlook the parts that aren’t well defined – they can’t be that important, otherwise the authors would have spent more space on them, wouldn’t they? When implementing it, you’ll get the 90% syndrome: ask the developer after about half of the estimated time how far he has got and he will answer “90% completed” – the problem is that these ten percent will stay around forever; after 200% of the estimated time the programmer (now with tired eyes and a bad shave) will still say “about 90% done”.
In short, I believe the only true design is the code (“The system is fully documented – granted you can read C++” – Z. Köntös), but of course not just any code: it has to be expressive. In the C days a basic design was made of documented header files; in Java you create interfaces, or, if someone pays you to do so, you sketch endless variations of UML diagrams. UML is believed to be a superset of what you can code – given a smart generator it transforms your design into code where “you just have to fill the gaps” and changes to the design in the code “can be reverse-engineered”. NOT. There are some cases where certain diagrams are really useful to give you an overview of a project, but all examples I know fall into two categories:
Reverse-engineered and handcrafted for better readability
Metacode that is fed into interpreters and MDD tools
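To make the “design is the code” point concrete, here is a minimal sketch (my own illustration, not from the book or any real project; all names are made up) of how an expressive interface can carry the design the way a documented C header once did:

```scala
// The "design document" is just a documented, expressive trait.
// Graph, MapGraph and the method names are illustrative only.
trait Graph[V] {
  /** All vertices of the graph. */
  def vertices: Set[V]
  /** The vertices directly reachable from v (empty if v is unknown). */
  def neighbors(v: V): Set[V]
}

// One possible implementation; refactoring happens in the same medium
// as the "documentation" (the trait), with compiler support.
case class MapGraph[V](adj: Map[V, Set[V]]) extends Graph[V] {
  def vertices: Set[V] = adj.keySet
  def neighbors(v: V): Set[V] = adj.getOrElse(v, Set.empty)
}
```

The trait states the contract once, readably, and the compiler keeps every implementation honest – no separate document to drift out of date.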
If we have to accept that the design and the code have nothing in common, why not simply skip this stage? Twenty years ago this would not have been a serious proposition, but it is not true anymore. Modern languages and IDEs provide browsing, refactoring, or mix-in mechanisms that keep the design malleable even once it is (already) in code. With C and COBOL it might be possible as well, but it wasn’t in their time. This means that your design process is less of a constant than perhaps the design itself! If we accept this, the consequences for the work are huge: if you use a language that is capable of expressing your design, it is idiotic to use a design tool outside of it to describe it. You’ll break your neck describing in prose or diagrams what is a common idiom in your language (…API, DSL) – only to end up with something weaker than the code you have written. And this is the crucial point: design tools and generators assume that their representation is complete, yet any generator can only lose information – if your code is more expressive, generation and reverse engineering of code cannot work at all. If you are inclined to follow, beware of the caveat: if your language is expressive enough but you don’t use it appropriately, the representation of your design will stay incomplete and you might need additional documentation – or better (but fewer) programmers.
So if you use an expressive language to get around formal design, what do you get?
Instead of an intermediate layer, you have something you can work with
Your design is as alive as the code is – any refactoring updates your documentation
But back to the claim of DeMarco’s book. It is indeed still true that verifying the design saves debugging, but that doesn’t mean the design has to be made in a medium other than the code. I came to this conclusion during my recent experiences learning Scala. I wrote some pretty complicated stuff and naturally went a bit wild with some features of the language, but I noticed one thing: I spent very little time on debugging (one thing I wrote was a specialized implementation of nauty). The language forced me to (or perhaps I forced myself, as I wanted to learn as much as possible from my examples) express concisely what I wanted to do. And since the language allows you to be extremely strict with types in your code, it actually enabled me to do so. I also discovered that I wrote code that tended to be more declarative, to avoid having to debug into deeply nested list comprehensions. This topped even the positive experience I had years ago when I started using Java, which I had found pretty crippled as a language coming from C++.
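A tiny sketch of what I mean by “more declarative” (an invented example, not the nauty code): naming the intermediate steps instead of burying everything in one deep comprehension leaves each piece inspectable without a debugger.

```scala
// Hypothetical example: find all pairs in a list that sum to a target.
// Each stage has a name and a type, so it can be checked on its own.
def pairsSummingTo(xs: List[Int], target: Int): List[(Int, Int)] = {
  val candidates = for {
    (a, i) <- xs.zipWithIndex   // every element with its position
    b      <- xs.drop(i + 1)    // paired with each later element
  } yield (a, b)
  candidates.filter { case (a, b) => a + b == target }
}
```

If the result looks wrong, you inspect `candidates` directly instead of stepping through a nested loop.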
One of my famous war stories is that I once wrote 400 lines of C++ in a single afternoon that compiled with no errors and also worked as I intended. I never managed to do that again. It was pretty convoluted code (technically a small interpreter) and I never managed the same in another language I know very well: PL/SQL. And I also have Java code with a complexity below those 400 lines of C++ where I think I’ll never be able to get it to work correctly.
So what is the key difference? Abstractions. Scala makes it easy to build abstractions, also because you get instantly rewarded for them; C++, as a very flexible language, allows them as well (though it is much harder); Java knows abstractions in the form of APIs and frameworks – but this doesn’t help you as soon as you get into regions far from the API and your framework. These abstractions can be called DSLs: you shape your environment to your needs, perhaps even in multiple layers. In C++ operator overloading has been the tool for that; in Java it is frameworks and APIs (quite clumsy in terms of syntax); while in Scala the difference between operators and methods dissolves completely (as does the difference between a block and a parameter; and there are extractors as the antonyms of constructors). My C++ example was, by the way, a good example of abstractions: many classes, few methods with an orthogonal interaction – such code either works or can’t be written at all. Making good abstractions in PL/SQL is, as in Java, only possible at the API level.
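The two Scala features mentioned above fit in a few lines (a sketch of my own, using an invented Complex class): infix operators are ordinary methods, and extractors undo what constructors build.

```scala
// Operators are methods: `+` below is a regular method named "+".
case class Complex(re: Double, im: Double) {
  def +(that: Complex): Complex = Complex(re + that.re, im + that.im)
}

val z = Complex(1, 2) + Complex(3, 4)  // same as Complex(1, 2).+(Complex(3, 4))

// Extractor as the antonym of the constructor: the pattern calls the
// generated unapply and takes the value apart again.
val Complex(r, i) = z
```

There is no special operator syntax to learn or generate; the DSL is just well-named methods and patterns.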
Where is the dynamic world?
As much as I share the thoughts of Paul Graham, I can’t relate to his unreserved preference for dynamic typing; there he is missing the point. Yes, malleable code is important, but if you have to keep track of each instance you might reinterpret, it will likely blow your mind. I think what he meant is that you are coding DRY (Don’t Repeat Yourself), something that can be done perfectly (not that hacking style that is considered expert Ruby) in Lisp (if you know macros!). Static languages can offer you the same comfort without resorting too often to macros (hard to implement in any language with syntax – Lisp doesn’t have any, which is why it is the only language with this feature) if they have type inference as well (Scala, C# 3.0). If you need to change a type, do it; the program will follow, but the compiler will shout at you if you cross the line. This even makes it possible for others to change your program, because they get quick feedback when they do something your initial design didn’t foresee.
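A minimal sketch of that “the compiler will shout” point (names like parsePort are invented for illustration): the types below are all inferred, yet any later change to them is still checked at every use site.

```scala
// Nothing is annotated on the right-hand sides; the compiler infers
// Option[Int] for parsePort's body and List[Int] for ports.
def parsePort(s: String): Option[Int] = s.toIntOption
val ports = List("80", "8080", "not-a-port").flatMap(parsePort)

// If parsePort is ever changed to return, say, Option[String],
// every place that uses `ports` as List[Int] stops compiling;
// whoever made the change gets immediate feedback.
val doubled = ports.map(_ * 2)
```

You get the terseness of a dynamic language while the type changes ripple through with compiler guidance, not runtime surprises.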
How is most software written today? Unfortunately with the methods of the 70’s and the tools of the 80’s (if at all). Even if a team happens to use a technology of the 90’s, the methods are the old ones and the code will not exploit what is possible, but only what is in line with what the environment gives you. In summary: you have a spotty design, verbose inexpressive code (“it is simple, everyone can write it” – “if you have some dozens of programmers in India”) and exploding budgets and/or disgruntled customers. Poorly understood design leads to missing or faulty features; correcting these in a mess of thousands of lines of indistinguishable rubbish slows you down; the actual development moves into the debugging stage (if you are lucky, with a customer as a beta(?) tester) – I call this “debugging into existence”. Even if development is always trial and error, debugging is different: when you discover a misconception during development, you go and update the spec – in debugging, the only trace will be a “resolved” in the bug database. Moreover, debugging should fix issues and mustn’t create new ones, so most people who correct bugs correct them as locally as they can, even at the price of copy&paste code and compromised architecture.