Programmers acknowledge no exhaustive set of principles by which software develops.
This entry aims to alter that observation not in the slightest.
It's that word, "Exhaustive," you see. Comprehensive. All-encompassing. The trouble is that software development is too large a field to admit a set of principles brash enough to wear such a badge.
We have, however, plenty of subsets to choose from, collections that claim necessity but not sufficiency. Take Martin's excellent and ever-popular and may-their-popularity-ever-increase-because-they-make-just-so-much-sense SOLID principles: Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation and Dependency Inversion.
Long may they reign.
So what are these Tulegatan principles of the title? Should these supersede SOLID in some sense?
No, they most definitely should not.
The SOLID principles are general principles of software development, crushing astronomical quantities of experience into mere molecules of text. The Tulegatan principles are not general, however. They focus on just one thing: source code structure.
But the SOLID principles also encompass structure, so aren't these Tulegatan principles superfluous?
Not quite. Besides this difference in breadth of application, the Tulegatan principles differ from the SOLID principles in that the Tulegatan principles are objective. They are objective by design. Objectivity shaped them from inception.
Though masterful, some of the SOLID principles are subjective. While this seldom causes concern - generally two programmers with a tour of duty or two under their belts will agree on the principles and their application - this subjectivity leaves room for interpretation. What is a single responsibility? Tom's single responsibility, logging, might be two of Samantha's responsibilities, slogan internationalization and database storage. Cindy's many specific client interfaces to the GUI might be Andrew's needless over-engineering. Abstraction may be a language issue for Jeremy but a domain issue for Lucy.
You get the point.
Despite all this - once again - such discord rarely detonates beneath supporting beams to send a project smashing into ruin. The SOLID principles provide invaluable guidance despite their imprecision. Their great range of applicability, indeed, may even stem from it.
Nor are they alone. Endless, unraveling reams of principles paper the programmer's path, many offering advice as sage as it is nebulous. Keep It Simple, Stupid; Don't Repeat Yourself; You Ain't Gonna Need It; etc. Subjectivity is not necessarily a disadvantage.
The Tulegatan principles, however, attempt stark objectivity. Yes, their jurisdiction is restricted - source code structure - but objectivity allows that we fallible humans may stand aside and watch as machines scuttle past to engage our designs. And not in a bad way.
So, yet another shortlist of principles, then. What are they, exactly?
The Tulegatan principles are hardly novel. Categorized not for novelty but for their relevance to structure and their objectivity, they are: minimise duplication, minimise potential coupling, minimise depth, minimise cyclomatic complexity and minimise contingency.
Earth decidedly un-shattered, let's dive in, though this will be a brief swim; later entries shall present each of the principles in vivid detail.
Source code structure improves as duplication is reduced. Not much to see here. Everyone's more or less agreed: duplication's bad. Don't do that.
This relates specifically to structure in the sense that duplicated sets of function dependencies (we use functions here, but classes or packages would suffice) generally suggest that such duplications be extracted into their own element.
This was all discussed here so we won't rehash.
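Still, a tiny sketch costs little. All names below are invented for illustration; the point is only the duplicated dependency set and its extraction into a new element:

// Before: report() and audit() each depend on the same pair of
// functions, format() then store() - a duplicated dependency set.
class Before {
    void report(Order o) { store(format(o)); }
    void audit(Order o)  { store(format(o)); }

    String format(Order o) { return o.toString(); }
    void store(String s)   { /* write somewhere */ }
}

// After: the duplicated set is extracted into its own element,
// record(), on which both callers now depend.
class After {
    void report(Order o) { record(o); }
    void audit(Order o)  { record(o); }

    void record(Order o) { store(format(o)); }

    String format(Order o) { return o.toString(); }
    void store(String s)   { /* write somewhere */ }
}

class Order { }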
Potential coupling is to water what coupling is to ice. Potential coupling explores the dynamic, coupling the static.
Source code structure, as we have defined it, relates to the set of functions and function dependencies. It does not take into account how a given set of accessors allows or forbids the evolution of those dependencies over time. This evolution is precisely what potential coupling addresses. In a sense, then, this principle does not affect the source code structure as it is but steers the structure through the twistings of its possible futures.
This was all discussed here and more specifically here and even in more digestible form here so we won't rehash.
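For a flavour of the distinction, here is a hypothetical Java fragment (not drawn from those earlier posts). Both functions below have the same actual coupling today, but very different potential coupling:

class Configuration {
    // validate(), being public, invites future dependencies from anywhere,
    // even though nothing outside this class calls it today.
    public boolean validate(String raw) {
        return parse(raw) != null;
    }

    // No outside class can ever come to depend on parse(): its private
    // accessor forbids the dependency before it can form.
    private Settings parse(String raw) {
        return raw.isEmpty() ? null : new Settings();
    }
}

class Settings { }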
Source code structure degrades with increasing depth. Avoid cyclic dependencies and keep dependency chains short.
This was also discussed here so we won't rehash.
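For flavour only, a hypothetical sketch of both sins:

class Depth {
    // A dependency chain four functions deep: a change to d() can ripple
    // all the way back up through c() and b() to a().
    void a() { b(); }
    void b() { c(); }
    void c() { d(); }
    void d() { }

    // Worse, a cycle: x() and y() each depend, transitively, on the other,
    // so neither can be understood or changed in isolation.
    void x(int n) { if (n > 0) y(n - 1); }
    void y(int n) { if (n > 0) x(n - 1); }
}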
EDIT: A later post suggests that method size is a better property to design against than cyclomatic complexity.
Another oldie.
Thomas McCabe's fabulous 1976 paper "A Complexity Measure" gives us all pause for thought. As with all great ideas, this one's simple: the more paths you have through a function, the more difficult it is to test (and hence the less confidence you have that it works, all else being equal).
How does this relate to structure, as opposed to source code in general?
Given that we want our functions to be comprehensible and easily testable, we should decompose functions of high cyclomatic complexity into several functions with low cyclomatic complexity.
Two approaches present themselves.
Firstly, we can simply move, if practicable, the if-statements and loops to their own functions. Secondly, we can use a framework that packages common if-statements and loops for re-use, such as Google's Guava library for Java. Either way, we will explicitly change the number of functions or their inter-relations and hence generate the structure that these principles attempt to sculpt.
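Here is a hypothetical before-and-after of the first approach; the names and discounts are invented, only the decomposition matters:

class Pricing {
    // Before: three decision points in one function, giving a
    // cyclomatic complexity of four.
    double priceBefore(Customer c, double base) {
        double price = base;
        if (c.isLoyal())   { price = price * 0.9; }
        if (c.isStudent()) { price = price * 0.8; }
        if (price < 0)     { price = 0; }
        return price;
    }

    // After: each decision moved to its own, trivially testable function;
    // the remaining price() is a straight composition.
    double price(Customer c, double base) {
        return floorAtZero(studentDiscount(c, loyaltyDiscount(c, base)));
    }

    double loyaltyDiscount(Customer c, double p) { return c.isLoyal()   ? p * 0.9 : p; }
    double studentDiscount(Customer c, double p) { return c.isStudent() ? p * 0.8 : p; }
    double floorAtZero(double p)                 { return p < 0 ? 0 : p; }
}

class Customer {
    boolean isLoyal()   { return true; }
    boolean isStudent() { return false; }
}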
"Contingency," our dictionary tell us, means, "Dependence on change." In terms of source code, contingency relates to the restrictions on the ordering of a sequence of function invocations. Consider the following function a():
function a() {
    b();
    c();
    d();
}
Although the author has arranged these invocations in the given sequence, there is no functional reason within a() why this sequence must obtain. It may be the case that the sequence is crucial, any rearranging thereof causing the program to fail. This restriction is, however, not explicit within a(). We therefore say that the sequence of invocations is contingent.
In the following, f() is a non-contingent function:
function int f() {
    int x = g();
    int y = h(x);
    return j(y);
}
f() is non-contingent because it is defined by the composition of functions: if we write the composition operator as ∘, then f() = (j ∘ h ∘ g)().
Function composition is non-contingent in that it expresses a sequence in which invocations should take place. The invocations are explicitly ordered. Changing this order may or may not cause a failure (even after renaming our variables) but the author has expressed the ordering intent and it is this intent which eradicates contingency.
Above, a() is purely contingent and f() is purely non-contingent, but mixed functions of course occur: functions that contain both contingent and composed invocation sequences. The contingency of a program is minimised by maximising the number of non-contingent functions. In other words, mixed functions should be decomposed into purely contingent and purely non-contingent functions.
A minor principle, this has the benefit not just of minimising sequence-rearrangement errors but also of reducing the test set-up phase for the purely non-contingent functions. (We shall return to this.)
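By way of illustration only, here is a hypothetical mixed function teased apart (all names invented):

class Report {
    // Mixed: the first two invocations are contingent - nothing here says
    // clearAudit() must precede openLog() - while the final expression is
    // a composed, non-contingent chain.
    String buildMixed() {
        clearAudit();
        openLog();
        return render(summarise(load()));
    }

    // Decomposed into one purely contingent function ...
    void prepare() {
        clearAudit();
        openLog();
    }

    // ... and one purely non-contingent function, whose ordering is
    // expressed by composition alone.
    String build() {
        return render(summarise(load()));
    }

    void clearAudit() { }
    void openLog()    { }
    java.util.List<String> load()              { return java.util.List.of("a", "b"); }
    int summarise(java.util.List<String> rows) { return rows.size(); }
    String render(int total)                   { return "total: " + total; }
}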
You can structure your code any way you like.
In this structuring enterprise, you can choose one of two alternatives: you can use principles to define and evaluate your structuring, or you can wing it.
If you choose the former then a little thought experiment, regularly performed, may help identify whether your code's structure is evolving as desired. This experiment applies no matter which principles you practice.
Imagine one dark night, while snow falls outside and all are asleep, a nasty imp appears, drives to your place of work, logs into your account, fires up Eclipse, changes every line of your entire, perfectly principle-structured code-base and checks the mess back in, making sure to permanently erase all other previous commits and back-ups from every disc in existence. Nothing remains but his altered code and there's no easy way back. He then vanishes. He then reappears and sends a weird email to your boss about your scrum master and then vanishes again, this time for good.
The thing is: he hasn't changed the code randomly.
In fact the product still runs perfectly and no user would notice any difference, the imp's just done the mother of all refactorings. He's rolled up your entire code-base into a single function. Actually, he needed a few functions - say, twenty - to take care of unavoidable call-backs from your framework and such like; nevertheless, you're left with twenty functions, each one hundreds of thousands of lines long. All of the contents of previous function implementations are still there, but they're mashed together into these monstrous blobs.
Your boss arrives at work and goes bananas.
He doesn't sack you, however. He's far meaner than that. He instead tasks you with re-structuring the whole code-base again, from scratch.
The thought experiment concludes with a question: given that the quickest way to re-instate sanity would be to extract the original methods from the blobs, would the final structure of your re-written code resemble that of the original?
The Tulegatan principles, then: minimise duplication, minimise potential coupling, minimise depth, minimise cyclomatic complexity and minimise contingency.
They solve no problem in its entirety and are merely an aid to structuring source code.
CC Image Foundation courtesy of Martin Lopatka on Flickr.
CC Image Climbing up the Walls courtesy of Jason OX4 on Flickr.