Model Validity

Preamble

Models are meant to reflect purposes in contexts, and their validity should be assessed accordingly.

What’s in a Model? (Jean Carries)

As it happens, models’ purposes can be mapped to their logical foundations.

Models’ Purposes

As far as engineering is concerned, models are introduced to solve three very different kinds of problems:

  • Business processes (solutions) are meant to deal with business objectives (problems), e.g. how to assess insurance premiums or compute missile trajectories.
  • System functionalities lend a hand in solving business problems. Use cases are widely used to describe how systems are to support business processes (problems), and system functionalities are combined to realize use cases (solutions).
  • System components provide technical solutions as they achieve targeted functionalities for different users, within distributed locations, under economic constraints on performance and resources.
What models are meant to do

That separation of concerns is epitomized by the Model Driven Architecture (MDA) distinction between computation independent (CIMs), platform independent (PIMs), and platform specific (PSMs) models:

  • Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
  • Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
  • Platform specific models (PSMs) describe system components depending on implementation platforms.

Those purposes can be mapped to models’ logical foundations.

Models & Logic

From a formal point of view, models can be set in two basic categories:

  • Descriptive (aka extensional) ones try to put actual objects, events, and processes into categories defined with regard to enterprise business objectives and organization.
  • Prescriptive (aka intensional) ones define what is expected of system components at the functional (PIMs) and technical (PSMs) levels.


When combined with completeness, that classification yields four categories of models:

  • Extensional and partial: the model describes selected features of actual occurrences (e.g. an organization chart or IT service management).
  • Intensional and partial: the model describes selected features of types (e.g. a functional architecture or business strategy).
  • Extensional and complete: the model describes all features of actual occurrences (e.g. a business process or system configuration).
  • Intensional and complete: the model describes all features of types (e.g. business rules or software design).
The Logic in Models

Since analysis models describe specific business domains, they are usually partial and extensional. Conversely, since design models are used to generate software components, they must be intensional and complete.
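The two-axis taxonomy can be sketched as a small classification. This is only an illustrative sketch: the names `Reference`, `Scope`, and `categorize` are hypothetical, and the examples are the ones listed above.

```python
from enum import Enum

class Reference(Enum):
    EXTENSIONAL = "actual occurrences"  # descriptive: instances in the business context
    INTENSIONAL = "types"               # prescriptive: what is expected of components

class Scope(Enum):
    PARTIAL = "selected features"
    COMPLETE = "all features"

# One example per category, taken from the list above
EXAMPLES = {
    (Reference.EXTENSIONAL, Scope.PARTIAL): "organization chart",
    (Reference.INTENSIONAL, Scope.PARTIAL): "functional architecture",
    (Reference.EXTENSIONAL, Scope.COMPLETE): "business process",
    (Reference.INTENSIONAL, Scope.COMPLETE): "software design",
}

def categorize(reference: Reference, scope: Scope) -> str:
    """Return an example of a model for the given category."""
    return EXAMPLES[(reference, scope)]
```

On this reading, an analysis model would typically land in the (extensional, partial) cell, and a design model in the (intensional, complete) one.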

Based on that taxonomy, models can be checked with regard to correctness, completeness, and consistency:

Models Validation
  • External correctness and completeness: how to verify that all the relevant individuals and features are taken into account. That can only be achieved empirically, by building models open to falsification.
  • Consistency: how to verify that the symbolic descriptions (categories and connectors) are complete, coherent, and non-redundant across models and abstraction levels. That can be formally verified.
  • Internal correctness and completeness (alignment): how to verify that current and required business processes are to be seamlessly and effectively supported by systems architectures.

It must be noted that while correctness is often considered for a wide range of models, the term takes on a particular meaning for extensional models, since in that case validity has to be assessed externally.
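The formally verifiable part, symbolic consistency, can be sketched as a check on categories and connectors. This is a minimal sketch under assumed names (`check_consistency` and the sample categories are hypothetical): connectors must link declared categories, and neither list may be redundant.

```python
def check_consistency(categories, connectors):
    """Minimal symbolic checks on a model: non-redundancy of categories
    and connectors, and coherence (no connector to an undeclared category)."""
    issues = []
    if len(set(categories)) != len(categories):
        issues.append("redundant categories")
    if len(set(connectors)) != len(connectors):
        issues.append("redundant connectors")
    declared = set(categories)
    for source, target in connectors:
        if source not in declared or target not in declared:
            issues.append(f"dangling connector: {source} -> {target}")
    return issues

# A model whose second connector points to an undeclared category
issues = check_consistency(
    categories=["Customer", "Policy"],
    connectors=[("Customer", "Policy"), ("Policy", "Claim")],
)
# issues -> ["dangling connector: Policy -> Claim"]
```

External correctness, by contrast, cannot be decided by such a procedure; it can only be probed empirically, as the next section argues.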

Truth in Models

Compared to design/prescriptive models, analysis/descriptive ones are set with regard to particular contexts and concerns; as a consequence there is no truth against which to check their validity, but they can be made falsifiable, i.e. open to refutation. Formally, that could be done at requirements level by combining unambiguous assertions about business objects and processes with counterexamples, and finally with shadow models.
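The falsification idea can be sketched as follows: state a requirement as an unambiguous predicate over business objects, then run candidate counterexamples against it. The rule and the figures here are entirely hypothetical, chosen only to show the mechanism.

```python
def premium_rule(age: int, premium: float) -> bool:
    """Hypothetical requirement: drivers under 25 pay a premium of at least 200."""
    return premium >= 200 if age < 25 else premium >= 0

def falsify(rule, cases):
    """Return the cases that refute the rule, if any; an empty list
    means the rule survived this round of attempted refutation."""
    return [case for case in cases if not rule(*case)]

# Shadow cases drawn from the business context
counterexamples = falsify(premium_rule, [(22, 250.0), (30, 100.0), (19, 150.0)])
# counterexamples -> [(19, 150.0)]
```

A surviving rule is not thereby proven true, only not yet refuted, which is precisely the empirical status claimed above for extensional models.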

Assuming the validity of analysis models, the next step is to guarantee the continuous and consistent matching between business objects and their system counterparts; that objective can be managed at different levels:

  • Traceability is arguably a prerequisite: without it, it is impossible to justify what has been done or to recall what has been checked.
  • Internal (intensional) consistency of models is meant to be managed through guidelines and a judicious use of modelling tools.
  • External (extensional) consistency is achieved through reviews (requirements) or tests (implementations).
  • Use of patterns, when available, is by all means the best policy regarding architectural (analysis patterns) and development (design patterns) features.
  • Business features and rules are not meant to affect architectures and therefore can be checked at design level.
  • Finally, it will be necessary to validate the deployment of system components and resources.
Layered model validity.
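The traceability prerequisite above can be sketched as a coverage check over a requirements-to-artifacts mapping. The requirement labels and artifact names are hypothetical, picked only to illustrate the check.

```python
def trace_gaps(requirements, trace):
    """Requirements with no recorded design artifact: for those,
    nothing can be justified and nothing can be checked."""
    return [req for req in requirements if not trace.get(req)]

# Hypothetical trace from business requirements to design artifacts
trace = {
    "R1: assess premium": ["PremiumService"],
    "R2: register claim": ["ClaimHandler", "ClaimStore"],
    "R3: renew policy":   [],
}
gaps = trace_gaps(trace.keys(), trace)
# gaps -> ["R3: renew policy"]
```

Kept current, such a mapping is what lets the intensional (guidelines, tools) and extensional (reviews, tests) checks be applied to the right places.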

But if analysis models are to be falsifiable, they must be based on consistent and correct requirements.

Requirements Validity

Requirements models can be checked for correctness and consistency. Yet business domains are like real numbers in that business concerns are infinite: even if frozen, there is no hope of fitting them into an all-encompassing conceptual model, and there will always be room for some innovative business.

Moreover, introducing conceptual models at the requirements stage is highly problematic, since it puts abstractions upfront whereas they should be the outcome of analysis. A shift in paradigm could be helpful here: abandoning the cascading approach and putting analysis models at the hub of model transformations. Along that perspective, systems are seen as abstract architectures managing symbolic representations of business objects and processes.

Hence, since requirements provide only a partial and changing perspective, the point is not to get a conceptual model but to draw a clear perimeter and to ensure consistency, in such a way that analysis models can be built on it.

Except for structures and connections, which can be expressed with syntactic operators, that cannot be achieved with basic modeling languages, whose semantics are limited to system artifacts. And since the mapping of design models to system extensions can be managed by development tools, the main challenge is the validity of analysis models. Fortunately, languages like UML provide extension mechanisms that could be used to describe concrete business contexts and symbolic system representations simultaneously.

Regarding internal consistency, a significant improvement would be to compare the structure and connections of persistency units with those of the execution units accessing them. A step further would be to introduce architecture-based stereotypes when possible and check cross-consistencies.
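That comparison can be sketched as a cross-check between what execution units access and what persistency units actually hold. The unit names, entities, and features below are hypothetical, standing in for whatever a modeling tool would extract.

```python
def cross_check(persistency_units, execution_units):
    """Flag features accessed by execution units that are absent
    from the persistency units they read or update."""
    issues = []
    for unit, accesses in execution_units.items():
        for entity, features in accesses.items():
            stored = persistency_units.get(entity, set())
            for feature in sorted(features - stored):
                issues.append(f"{unit} accesses {entity}.{feature}, not persisted")
    return issues

# Hypothetical extracts from an analysis model
persistency_units = {"Policy": {"id", "premium"}}
execution_units = {"RenewPolicy": {"Policy": {"id", "premium", "renewalDate"}}}
issues = cross_check(persistency_units, execution_units)
# issues -> ["RenewPolicy accesses Policy.renewalDate, not persisted"]
```

The same comparison could be refined with architecture-based stereotypes, checking each pair of units against the constraints its stereotypes imply.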

Regarding external consistency, requirements models are not much help, since they reflect specific business concerns, each subject to a different representation.

Rent a Cart? What for?

Building a comprehensive and consistent understanding of requirements can only be achieved with symbolic representations and abstractions. And that is where external consistency must be checked.
