What Is to Be Measured, and What For
Software engineering metrics can be set along three core rationales: predictive, preventive, and corrective.
- Predictive: estimators are used to define schedules and plan resources according to task scale and complexity.
- Preventive: statistics of past problems are used to anticipate risks, take precautionary measures, and allocate resources accordingly.
- Corrective: assessment of what works and what doesn’t, and what could be done to improve processes, methods, and the metrics themselves.
Hence, as far as software engineering is concerned, measurements should address three different topics:
- Size and complexity of the problem at hand. Function points (whatever the variant) are the metric of choice for system requirements. Metrics targeting products, like instructions or lines of code, are of limited interest due to their dependency on platforms and technologies.
- Assessment of project achievements and resources used.
- Assessment of process maturity: resources, schedule, reliability, etc.
On those accounts, the main benefit of architecture-driven system modelling is to provide:
- A sound and unbiased basis for function points computation, free of qualitative or expert-based inputs. More specifically, metrics can be directly associated with functional requirements like persistency, entry points, coupling, etc.
- A straightforward approach to project planning: instead of top-down, one-size-fits-all task definitions, work units can be set bottom-up depending on the nature of development flows.
- With tasks directly mapped to development outcomes, processes can be designed along development patterns, their capabilities precisely assessed, and potential improvements duly identified.
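As a minimal sketch of the first point, a count could be derived mechanically from the functional elements of a model. The element categories and weights below are hypothetical, not taken from any standard:

```python
# Hypothetical weights per functional element; a real scheme would be
# calibrated against a function-point standard, not invented ad hoc.
WEIGHTS = {"entry_point": 4, "persistent_object": 7, "coupling": 5}

def model_based_count(elements: dict) -> int:
    """Sum weighted counts of functional elements tallied from a model."""
    return sum(WEIGHTS[kind] * n for kind, n in elements.items())

# Counts that could be tallied mechanically from a system model.
sample = {"entry_point": 3, "persistent_object": 2, "coupling": 4}
print(model_based_count(sample))  # 4*3 + 7*2 + 5*4 = 46
```

The point is not the particular weights but that every input is an objective tally from the model, with no expert-based guess in the loop.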
There is no such thing as “statistical facts”. Statistics are designed artifacts, made on purpose, meant to support conjectural arguments or counter questionable ones. Hence, considering statistics per se is like counting fingers when the hand points at the moon.
That consideration is especially relevant where quality metrics are concerned. Applying statistical estimation can help reduce risks and increase confidence levels by optimizing the use of limited resources. For that purpose estimators will have to be unbiased, sufficient, efficient, and consistent (I’m taking heed of a contribution by Sriram Mahalingam):
- Unbiased: the sample used as a basis must correctly represent the targeted population regarding organizational and technical context, requirements patterns, application life-cycle, etc.
- Sufficient: the size of the sample and the scope of data must be enough to smooth out the margins of error associated with the collection of data.
- Efficient: data must be collected with accuracy (closeness of measurements of a quantity to its actual value) and precision (repeated measurements under unchanged conditions must show the same results).
- Consistent: outcomes must be fully predictable from the state of endogenous (i.e., selected) factors, whatever the status of exogenous (i.e., not taken into account) ones.
Those are technical provisions, necessary but not sufficient because estimators are useless without their scope and objectives being properly defined. Yet regression analysis can provide sound estimators when combined with patterns of development complexity.
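As an illustration of that last point, a simple least-squares regression can turn historical (function points, effort) pairs into an effort estimator; the figures below are made up for the sake of the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: effort = a + b * fp."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

# Fictitious project history: function points vs person-days.
fp     = [100, 200, 300, 400]
effort = [220, 410, 610, 800]
a, b = fit_line(fp, effort)
print(round(a + b * 250))  # estimated effort for a 250-FP project: 510
```

Such an estimator is only as good as the sample behind it, which is exactly where the unbiased, sufficient, efficient, and consistent provisions above come into play.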
Function Points Revisited
From requirements to ROI, software metrics are based upon function points, whether directly or indirectly. As comprehensively explained by Alain Abran (Software Metrics & Software Metrology), they are usually obtained from a confusing mix of inputs and measurements.
- Different kinds of inputs: business contents, system functionalities, and general system characteristics.
- Different kinds of measurements: objective tallies, subjective guesses, and statistical regressions.
Yet things could be different were those elements reorganized:
- The complexity of a business domain should be measured independently of the systems that may support its business processes.
- For a given business process, one should be able to assess and compare different levels of system functionality.
- Finally, it should be possible to adjust a given level of functionalities depending on regulatory or operational constraints (aka non-functional requirements, aka general system characteristics).
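The classic IFPUG-style adjustment illustrates that last point: an unadjusted count is scaled by a value adjustment factor derived from the 14 general system characteristics, each rated from 0 to 5 (the ratings below are illustrative):

```python
def adjusted_fp(ufp: float, gsc_degrees: list) -> float:
    """Scale an unadjusted count by the IFPUG value adjustment factor:
    VAF = 0.65 + 0.01 * sum of the 14 GSC degrees of influence."""
    assert len(gsc_degrees) == 14 and all(0 <= d <= 5 for d in gsc_degrees)
    vaf = 0.65 + 0.01 * sum(gsc_degrees)
    return ufp * vaf

# A uniform rating of 3 on all 14 characteristics gives VAF = 1.07.
print(round(adjusted_fp(100, [3] * 14), 2))  # 107.0
```

Separating the unadjusted count from the adjustment factor is what makes it possible to compare functionality levels first and apply operational constraints afterwards.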
- Computation Independent Models (CIMs) come with intrinsic complexity stemming from objects and transactions specific to business domains. Whereas that complexity may be compounded by the way systems have to support the processes, it should nonetheless be measured on its own account, if only to manage requirements portfolios accordingly.
- Platform Independent Models (PIMs) describe how systems support business processes and their complexity should be measured accordingly. That may be critical when different options are to be considered.
- The same reasoning should be applied to Platform Specific Models (PSMs), as different candidates are often to be considered for the same functional architecture.
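As a toy illustration (option names and scores are made up), keeping a separate measurement per model level lets alternative designs be compared against the same domain baseline:

```python
# Hypothetical complexity scores, measured separately at each model level.
# The CIM score is shared: both options support the same business domain.
candidates = {
    "option A": {"cim": 46, "pim": 62},
    "option B": {"cim": 46, "pim": 75},
}

# With the domain complexity held constant, options can be ranked on the
# functional (PIM) complexity they add on top of it.
best = min(candidates, key=lambda name: candidates[name]["pim"])
print(best)  # option A
```

The same comparison could then be repeated one level down, ranking PSM candidates against a fixed functional architecture.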