Reality Bites
12 Jul 2010

Architecture and design of software systems is quite an adventure. There are very few hard constraints in software, and even fewer in software architecture. Almost anything can be designed, and the vast majority of designs will look good and feasible, even when a fairly intensive review process is applied. It is extremely difficult to find mistakes in a software architecture just by talking about it. As a consequence, I dare to speculate that every non-trivial software architecture contains at least one error.
Software architecture needs to be put into conflict with reality as soon as possible, because only reality can uncover the problems. The architecture needs to be quickly applied to a design. Key concepts should be designed down to the details early in the project. The design needs to rapidly lead to the implementation of prototypes. Prototypes need to be tested immediately. Problems need to be addressed as soon as possible. Solutions to the prototypes' problems will feed back into the design. Changes in the design will influence the architecture. Changes in the architecture will need new prototypes ... and we have a loop here. This loop had better be convergent and finite. All architects need this kind of loop for the architecture to be of any practical use. The difference between a good and a bad architect is the speed of convergence. Bad architects will need many iterations, and most of them will happen during the project implementation phase. Changing the architecture during implementation is really expensive. Good architects will settle the architecture in a small number of iterations and will have pretty stable basic concepts before the full-scale implementation starts. A few adjustments to the architecture during implementation are always necessary, but these should not fundamentally change the basic idea. Such projects can usually be delivered at reasonable cost.
Architecture that is not validated by implementing parts of it is just a theoretical exercise. It may be a good first step, but it definitely cannot be presented as a final, practical result. Untested architecture may be good for experiments and research, but it is almost worthless from an engineering point of view.
This principle applies to standardization even more strongly than to software architecture. Standards influence a lot of engineers. Standards can make entire families of technologies either succeed or fail. Good standards are based on working software. Only working software can provide assurance that a standard does not have any major flaws. IETF standards are based on working software, and that approach contributed to the success of the Internet as a whole. But too many standardization bodies do not follow this practice. Some of us can still remember the infamous example of CORBA, but it looks like most people have already forgotten. The WS-* stack seems to be heading in the same direction. And there is one particular example that I would like to mention: Service Provisioning Markup Language (SPML). SPML defines a (web) service specified using XML Schema (XSD). However, the XML schema for the current version of the SPML standard does not even pass validation: it violates the Unique Particle Attribution (UPA) rule. Therefore the standard SPML schema is unusable for many implementations. For example, JAXB cannot process it, so it cannot be exposed as a JAX-WS service. I have seen people who use it modify the schema to make it usable - but then, what's the point of a "standard" there?
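You can check this kind of problem yourself by running the schema through the JAXP schema compiler (the Xerces implementation bundled with the JDK), which enforces the UPA constraint when it compiles the schema. A minimal sketch; the schema file name here is just a placeholder for whatever XSD you want to test:

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;
import org.xml.sax.SAXException;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder path - point it at the schema you want to compile.
        File xsd = new File("spml-core.xsd");
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        try {
            // Compiling the schema triggers the schema-validity checks,
            // including the Unique Particle Attribution constraint.
            factory.newSchema(new StreamSource(xsd));
            System.out.println("Schema compiled cleanly.");
        } catch (SAXException e) {
            // A UPA violation typically shows up here as a
            // "cos-nonambig" error message from Xerces.
            System.out.println("Schema is not valid: " + e.getMessage());
        }
    }
}
```

If the schema fails this check, JAXB's xjc will refuse it for the same reason, which is exactly what makes an invalid "standard" schema so impractical.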
There is very little space for innovation in the standardization process. Almost none. Innovation should happen in engineering and experimental projects, and only the working results of such projects should be standardized. However, design by committee is a well-known and widely used anti-pattern. Avoid using standards that are not based on working software. And especially avoid creating such standards.