This follows my previous post, “Design, Done Right?”
When arriving at a design, one fundamental assumption should be made: any part of a design has the potential to be wrong. Wrong either because something was missed, or because it eventually becomes obsolete and is replaced by something better. Given that we’re human, it is 100% likely that any given person will make a mistake or two during their lifetime, and there’s no guarantee it won’t happen during the critical stages of design. Also, since it’s IT we’re talking about here, things can become obsolete quite quickly (except for COBOL from the 1970s, apparently).
Also, let’s not forget that old maxim: “You don’t know what you don’t know.” A thoroughness arising from experience is the best insurance against this. After many years, one gets a sense of what needs to be addressed, where the risks are, and so forth. Whilst exhaustive analysis with the help of relevant domain experts may go much of the way towards dealing with the unknowns, instinct plays a part too.
Designs should be done with the above in mind. To insure against the likelihood of being wrong, compartmentalization should be employed, making each component as independent as possible (decoupling, separation of concerns, etc.). That way, if an object, component or service turns out to be flawed, the remainder of the system is unaffected, thanks to the isolation that is the natural result of these design philosophies.
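To make the idea concrete, here is a minimal sketch of that kind of compartmentalization in Python. All the names (`PaymentGateway`, `CheckoutService`, and both gateway classes) are hypothetical examples, not anything from a real system: the point is simply that the rest of the code depends on an abstraction, so a flawed or obsolete component can be swapped out without touching its callers.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The boundary: the rest of the system depends only on this interface."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class LegacyGateway(PaymentGateway):
    """A concrete component that may later prove flawed or obsolete."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

class ModernGateway(PaymentGateway):
    """A drop-in replacement; no caller has to change."""
    def charge(self, amount_cents: int) -> bool:
        return 0 < amount_cents <= 1_000_000

class CheckoutService:
    """Depends on the abstraction, never on a concrete gateway."""
    def __init__(self, gateway: PaymentGateway) -> None:
        self._gateway = gateway

    def checkout(self, amount_cents: int) -> bool:
        return self._gateway.charge(amount_cents)
```

Because `CheckoutService` only sees the `PaymentGateway` interface, replacing `LegacyGateway` with `ModernGateway` is a one-line change at the point of construction; the flaw stays isolated to the component that has it.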
So, although a design is probably never going to be perfect, shortcomings can certainly be planned for.
To the seasoned developer / architect, the above is architecture 101. So why is it that, with virtually every new engagement, I find these principles have been totally or mostly ignored in the client’s code base?
To understand this, one has to look at group intelligence versus individual intelligence.