Quality is a quantity: it is the probability that nothing will go amiss. As with any prediction about future events, quality, even within a strictly demarcated context, can never be fully accounted for. Moreover, while contingent faults should be weighted by their costs, their nature is by definition “exceptional”, and their repercussions are therefore difficult to apprehend. That is why sound quality management must clearly distinguish between, on one hand, outcomes whose contingency falls, to some degree, within the capabilities of team members, and, on the other hand, risks whose origin lies outside project responsibilities.
That distinction must be drawn according to the nature of requirements, business contexts, enterprise organization, applications, and developments.
Taking contingencies into account, one may say that the objective of QM is eventually to match commitments with expectations. In between, QM may have to deal with very different configurations, from standalone applications allowing constant and reliable cooperation between individual developers and stakeholders, to cross-department projects targeting distributed systems.
Contingencies and Risks
Quality management is necessarily bounded by risk management, and the first step is to set apart whatever can be achieved without risk or, more precisely, to sort objectives according to the source and nature of risks.
Clearly, the first risk to consider is misunderstanding; hence, whatever the tasks and parties concerned, the first rule must be “say what you mean, mean what you say.” Occam’s razor should follow as a companion rule: if the first rule is fulfilled, profligacy becomes a liability, parsimony an asset.
Next, QM should consider risks whose source can be identified and properly tagged:
- External: quality will always be contingent on the stability of business and technology contexts.
- Internal: given contexts set on managed time-scales, QM may focus on project management and use of resources.
Finally, quality hazards have to be identified along with possible courses of events:
- External: how to deal with unexpected changes in business and technology contexts.
- Internal: how to manage shortcomings and failures stemming from the project itself.
Whatever the target (requirements, specifications, models, source code, documents, etc.), QM should be supported by four pillars:
- Understanding: deliverables must be formulated in the languages and standards of the parties involved.
- Completeness: deliverables or tasks must include a predefined and finite set of managed elements to be checked at completion.
- Consistency: managed elements must be unambiguous, reliable, and not contradictory. External consistency should therefore entail traceability between specifications and solutions.
- Correctness: reliability, usability, maintainability, etc.
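As an illustration, the first three pillars lend themselves to mechanical checks on a deliverable. The sketch below is a minimal, hypothetical model (the function, its arguments, and the checks are assumptions made for illustration, not part of any standard):

```python
# Minimal sketch of pillar checks on a deliverable.
# All names here (check_pillars, etc.) are illustrative assumptions.

def check_pillars(deliverable, required_elements, vocabulary):
    """Return the list of pillar checks that fail for a deliverable."""
    failures = []
    # Understanding: every element is expressed in an agreed vocabulary.
    if not all(e in vocabulary for e in deliverable):
        failures.append("understanding")
    # Completeness: the predefined, finite set of managed elements is covered.
    if not required_elements <= set(deliverable):
        failures.append("completeness")
    # Consistency: no managed element is stated twice (a crude proxy
    # for "unambiguous and not contradictory").
    if len(deliverable) != len(set(deliverable)):
        failures.append("consistency")
    # Correctness (reliability, usability, ...) needs domain-specific
    # validation and is deliberately left out of this sketch.
    return failures

# A deliverable covering all required elements, with one duplicate:
report = ["scope", "actors", "use-cases", "actors"]
print(check_pillars(report, {"scope", "actors", "use-cases"},
                    {"scope", "actors", "use-cases", "risks"}))
# -> ['consistency']
```

Correctness is the odd pillar out: it cannot be reduced to a generic predicate, which is why it is only a placeholder comment above.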
Depending on the development approach, quality management will go for continuous (agile projects) or staged (phased projects) assessment of deliverables.
As phased approaches usually rely on differentiated models, quality should be assessed with regard to their purpose, as exemplified by the MDA framework with computation independent models (CIMs), platform independent models (PIMs), and platform specific models (PSMs):
- External correctness and completeness: How to verify that all the relevant individuals and features are taken into account. That can only be achieved empirically by building models open to falsification.
- Consistency: How to verify that the symbolic descriptions (categories and connectors) are complete, coherent, and non-redundant across models and abstraction levels. That can be formally verified.
- Internal correctness and completeness (alignment): How to verify that current and required business processes are to be seamlessly and effectively supported by systems architectures.
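The consistency item, being the formally verifiable one, can be sketched as a traceability check between adjacent model layers: every lower-level element must trace to an element of the level above, and every upper-level element must be refined below. Everything in the sketch (model names, trace maps, helpers) is a hypothetical illustration, not an actual MDA artifact:

```python
# Hypothetical trace maps between MDA-style layers: each PIM element
# points at the CIM requirement it refines, each PSM element at the
# PIM element it implements.
cim = {"R1", "R2"}
pim_to_cim = {"Order": "R1", "Invoice": "R2"}
psm_to_pim = {"OrderTable": "Order", "InvoiceBean": "Invoice"}

def dangling(lower_to_upper, upper):
    """Lower-level elements whose upstream reference is missing above."""
    return {e for e, up in lower_to_upper.items() if up not in upper}

def uncovered(upper, lower_to_upper):
    """Upper-level elements refined by no lower-level element."""
    return upper - set(lower_to_upper.values())

print(dangling(pim_to_cim, cim))              # set(): every PIM element traces up
print(uncovered(cim, pim_to_cim))             # set(): every requirement is refined
print(dangling(psm_to_pim, set(pim_to_cim)))  # set(): every PSM element traces up
```

External correctness, by contrast, cannot be checked this way: as noted above, it is only established empirically, by exposing models to falsification.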
By contrast, iterative approaches rely on continuous deliveries, which means that validation and verification are to be applied to deliverables as they come. But continuity doesn’t mean confusion, and differentiated tests are needed if regression is to be avoided.
QM aims at correctness, or, as noted above, the proper alignment of commitments and expectations regarding functionalities, performances, costs or schedules. And while effective management of completeness and traceability is a requisite, it remains a supporting factor and cannot be a substitute for validation procedures and quality metrics.
As far as models play a pivotal role (not always the case), QM policies must first deal with contents, whose consistency must be checked:
- Internally, depending on the level of completeness.
- Externally, i.e. against requirements, models, or development flows used as inputs.
- Against architectures: in that case consistency checks can go both ways, with technical or functional architecture modified in order to support new functionalities.
Policies can then be set according to contingencies:
- Risk bounded requirements: the later one decides, the more is known about contingencies and costs.
- Risk weighted contingencies: resources must be put where risks are identified, depending on cost/benefits.
- Refutation based requirements: since truth cannot be proven, requirements should include assertions subject to refutation.
- Test-driven development: the sooner one uses what is known, the sooner falsity can be established.
- Reuse of components or patterns, encapsulation, etc.
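The refutation-based and test-driven policies above can be illustrated together: a requirement is stated as an assertion that a test can falsify but never prove. The discount rule below, its threshold, and all names are invented for the purpose of the sketch (integer cents are used so the arithmetic is exact):

```python
# Hypothetical requirement, stated as a falsifiable assertion:
# "orders of 100 units or more get a 10% discount, smaller orders none."

def discounted_price_cents(unit_cents, quantity):
    total = unit_cents * quantity
    # Integer arithmetic: 90% of the total, rounded down.
    return total * 9 // 10 if quantity >= 100 else total

# Tests can refute the requirement's implementation but never prove it:
# each passing case merely fails to falsify it.
assert discounted_price_cents(200, 100) == 18000  # boundary: discount applies
assert discounted_price_cents(200, 99) == 19800   # just below: no discount
print("not refuted")
```

Writing such tests before the implementation is precisely the point of the "sooner" policy: falsity can be established as early as the first delivery.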
Those principles can be effectively supported by architecture driven system engineering:
- Completeness managed on a “need to know basis”.
- Traceability according to architectural levels and engineering responsibilities.
- Correctness established within and across models.
Completeness & Lean Models
Whereas the benefits of lean and just-in-time engineering processes have long been established, their significance for software engineering remains limited, except when models are not pivotal. And that is especially surprising considering that development flows, being symbolic, should be much more malleable than industrial ones.
If the “too much, too soon” syndrome is to be avoided, one needs to divide and hierarchize model contents, beginning with lean requirements. For that purpose some rationale must be introduced into model driven engineering, especially regarding model contents and development flows. Assuming that artifacts are duly tagged, selective completeness and consistency can be checked depending on their nature.
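Assuming duly tagged artifacts, selective completeness can be sketched as a per-stage check: only the elements relevant at a given stage need to be present ("too much, too soon" is avoided by ignoring the rest). The stage tags and artifact names below are invented for illustration:

```python
# Hypothetical tagged artifacts: each carries the stage(s) at which
# it must be complete ("need to know" basis).
artifacts = {
    "actors":     {"requirements"},
    "use-cases":  {"requirements", "analysis"},
    "entities":   {"analysis"},
    "components": {"design"},
}

def missing_for(stage, required, available):
    """Artifacts tagged for this stage but not yet delivered."""
    return {a for a, tags in required.items()
            if stage in tags and a not in available}

# At the requirements stage, "entities" and "components" are not needed yet:
print(missing_for("requirements", artifacts, {"actors"}))
# -> {'use-cases'}
```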
Architecture Based Traceability
Here again we are confronted with a well-understood and recognized objective on one hand, and inadequate solutions on the other. In this case traceability is stymied by the “spaghetti” syndrome, namely the exponential growth of dependencies whose complexity rapidly defies human understanding.
While automated tools are clearly helpful, their benefits would be significantly leveraged if dependencies were filtered using built-in information regarding architectural relevancy. Such targeted traceability could then be used to support selective QM depending on responsibilities.
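Such targeted traceability can be sketched as a dependency walk that only follows edges tagged with architecturally relevant layers, so each responsibility sees a pruned graph rather than the full spaghetti. The layers, node names, and helper below are assumptions made for illustration:

```python
# Hypothetical dependency edges tagged with the architecture layer
# they belong to (enterprise, functional, or technical).
dependencies = [
    ("OrderProcess", "OrderService", "functional"),
    ("OrderService", "OrderTable",   "technical"),
    ("OrderProcess", "SalesDomain",  "enterprise"),
]

def trace(edges, start, layers):
    """Follow only dependencies whose layer is architecturally relevant."""
    reached, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        for src, dst, layer in edges:
            if src == node and layer in layers and dst not in reached:
                reached.add(dst)
                frontier.add(dst)
    return reached

# A functional architect's view ignores technical dependencies:
print(trace(dependencies, "OrderProcess", {"functional"}))
# -> {'OrderService'}
```

Filtering at trace time, rather than maintaining one graph per responsibility, keeps a single source of truth while still supporting selective QM.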
The design of business processes must be checked against enterprise and functional architectures, respectively for domains and processes, and against system requirements.
Assuming requirements have been properly validated by business units, QM for system applications will have to deal with project planning, functional specifications, and module design and development.
Quality management doesn’t stop with acceptance: it should continue all along application life cycles, in particular for operational concerns (enterprise architecture), maintenance and evolution of services (functional architecture), and component migration (technical architecture).
Quality Management & Model Driven Engineering
Layered models and architectural tiers can be used to tailor tests and ensure their non-regression:
- Context requirements (actual objects, agents, events, and processes) provide the basis for system tests and acceptance. They will also be used to verify the conformity of symbolic representations.
- Test files should be defined with CIM requirements. They will be used to validate platform independent models on one hand, to test unit components on the other hand.
- Test files associated with context and CIM requirements may also be used for integration and maintenance tests.
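The layering above can be sketched as a mapping from test suites to the model layers they exercise, so that a change in one layer only reruns the relevant suites and non-regression stays tractable. The suite names and layer tags are illustrative assumptions:

```python
# Hypothetical mapping of test suites to MDA-style layers,
# loosely following the three bullets above.
suites = {
    "acceptance":  {"context"},
    "validation":  {"cim", "pim"},
    "unit":        {"cim", "psm"},
    "integration": {"context", "cim"},
}

def suites_for(changed_layers, mapping):
    """Suites to rerun when the given layers have changed."""
    return sorted(s for s, layers in mapping.items()
                  if layers & changed_layers)

# A change confined to CIM requirements leaves acceptance tests untouched:
print(suites_for({"cim"}, suites))
# -> ['integration', 'unit', 'validation']
```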
Whereas those principles should stand on their own, they will bring their full benefits when combined with contract driven development.
Testing: Hired Guns or Agile?
Given that finding defects is not very gratifying in itself, delegating the task will offer few guarantees unless associated with rewards commensurate with findings; moreover, quality as a detached concern may easily turn into collateral damage when set against mounting costs and scheduling constraints. Alternatively, quality checks may become a more positive endeavor when conducted as an intrinsic part of development. That is arguably a major argument in favor of the agile model of development, which integrates specifications, development, tests, and acceptance in the same cycle.