If quality is the probability that nothing will go amiss, quality assurance should focus on preventing risks. For that purpose, checking specifications is probably one of the most effective policies.
Whatever the target (requirements, specifications, models, source code, documents, etc.), quality management (QM) should be supported by four pillars:
- Understanding: deliverables must be formulated in the languages and standards of the parties involved.
- Completeness: deliverables or tasks must include a predefined and finite set of managed elements to be checked at completion.
- Consistency: managed elements must be unambiguous, reliable, and not contradictory. External consistency should therefore entail traceability between specifications and solutions.
- Correctness: deliverables must satisfy the targeted quality attributes: reliability, usability, maintainability, etc.
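The completeness and consistency checks above can be sketched in code. This is a minimal illustration, assuming a hypothetical representation of deliverables as name-to-definition mappings and an illustrative set of required elements; none of these names come from the text.

```python
# Hypothetical representation: each deliverable maps its managed
# elements (name -> definition); the required set is predefined and finite.
REQUIRED_ELEMENTS = {"purpose", "inputs", "outputs", "constraints"}

def check_completeness(deliverable: dict) -> set:
    """Completeness: return the managed elements still missing at completion."""
    return REQUIRED_ELEMENTS - deliverable.keys()

def check_consistency(deliverables: list) -> list:
    """Consistency: flag elements defined differently across deliverables."""
    conflicts = []
    seen = {}  # element name -> (first definition, index where first seen)
    for i, d in enumerate(deliverables):
        for name, definition in d.items():
            if name in seen and seen[name][0] != definition:
                conflicts.append((name, seen[name][1], i))
            else:
                seen.setdefault(name, (definition, i))
    return conflicts

spec = {"purpose": "billing", "inputs": "orders", "outputs": "invoices"}
print(check_completeness(spec))  # the 'constraints' element is missing
```

The point of the sketch is that both checks operate on a predefined, finite set of managed elements, which is what makes them mechanizable at all.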
Depending on the development approach, quality management will opt for continuous (agile projects) or staged (phased projects) assessment of deliverables.
As phased approaches usually rely on differentiated models, quality should be assessed with regard to each model's purpose, as exemplified by the MDA framework with its computation independent models (CIMs), platform independent models (PIMs), and platform specific models (PSMs):
- External correctness and completeness: How to verify that all the relevant individuals and features are taken into account. That can only be achieved empirically by building models open to falsification.
- Consistency: How to verify that the symbolic descriptions (categories and connectors) are complete, coherent, and non-redundant across models and abstraction levels. That can be formally verified.
- Internal correctness and completeness (alignment): How to verify that current and required business processes will be seamlessly and effectively supported by systems architectures.
By contrast, iterative approaches rely on continuous deliveries, which means that validation and verification are to be applied to deliverables as they are produced. But continuity doesn't mean confusion, and differentiated tests are needed if regression is to be avoided.
To begin with, some initial assessment of requirements capture is required before any resources are committed to development. Depending on the specificity and formal properties of the domain language, this stage may be routine (non-specific languages) or automated (formal languages).
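For languages with formal properties, that initial assessment can indeed be automated. As a toy sketch, assuming a hypothetical controlled language in which every requirement must take the form "The &lt;subject&gt; shall &lt;capability&gt;." (the pattern and sample statements are illustrative assumptions):

```python
import re

# Hypothetical controlled-language rule: "The <subject> shall <capability>."
PATTERN = re.compile(r"^The \w+(\s\w+)* shall \w+.*\.$")

def assess(requirements: list) -> list:
    """Return the captured statements that fail the formal check."""
    return [r for r in requirements if not PATTERN.match(r)]

reqs = [
    "The system shall issue an invoice for each completed order.",
    "Invoices should probably be sent quickly",  # vague: fails the check
]
print(assess(reqs))  # only the vague statement is flagged
```

Real formal requirements languages support far richer checks (types, units, temporal logic); the point here is only that formality makes the initial gate mechanical rather than judgment-based.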
Then, quality checks should be managed according to the requirements taxonomy:
- Unit tests are internal; they deal with technical aspects of development units, disregarding functional requirements.
- Integration tests are internal; they deal with the design of deployment units (aka modules), disregarding functional requirements.
- Function tests are internal; they deal with system design with regard to functional requirements.
- Performance tests are external; they deal with system design with regard to non-functional requirements.
- Acceptance tests are external; they deal with system design with regard to all requirements, set in a test environment.
- Installation tests are external; they deal with the specific resources and procedures associated with setting the product in its operational context.
- Operational tests are external; they deal with system functionalities in the actual environment.
- Finally, usability tests deal with ergonomics and maintenance.