Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one with the best fit to business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (Map of Posts) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert allow for a concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

How to Begin with Requirements

October 5, 2015

Despite being a recurring topic of discussion, requirements still lack a comprehensive and formalized approach; yet that should not hold back newcomers: given that requirements are the necessary genesis of any project, nothing can be assumed about their emergence; as a consequence, they are better dealt with as they come.


The Genesis of Meanings (Quiché Maya’s view of creation)

For that purpose, and whatever the preferred approach, requirements analysis can be neatly described as an iterative process built from three basic increments: identified individuals, associated features, and classifications.


When entering unknown territory, the first thing to do is to set apart recognizable items. Regarding requirements, those would be individuals whose identities have to be managed. Such individuals would then be further divided between:

  • Objects (persistent identities) and activities (transient identities).
  • Activities (managed duration) and events (no managed duration).
  • Agents (physical identities) and roles (social identities).
  • Actual (physical identities) and symbolic (social identities) objects.

It must be noted that, except for new standalone applications, most individuals may already have been defined by existing functional architectures.
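As a concrete illustration, here is a minimal sketch of those dichotomies in Python; the class names mirror the list above, but the rendering, and the idea of carrying identity as a plain uid field, are assumptions made for the sake of the example:

```python
# A minimal sketch of the individuals taxonomy; names mirror the post,
# the uid field is an illustrative stand-in for identity management.
from dataclasses import dataclass

@dataclass(frozen=True)
class Individual:
    uid: str                               # identity to be managed

class Object(Individual): pass             # persistent identity
class Activity(Individual): pass           # transient identity, managed duration
class Event(Individual): pass              # transient identity, no managed duration

class ActualObject(Object): pass           # physical identity
class SymbolicObject(Object): pass         # social identity
class Agent(ActualObject): pass            # physical identity (person or device)
class Role(SymbolicObject): pass           # social identity

print(Agent("employee#42"), Role("cashier"))
```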


Contrary to individuals, features have no identity of their own. As a corollary, they can only be considered as possible attachments to individuals. On that basis, analysts have to answer three basic questions:

  • Can a feature be shared or transferred (functional), or is it bound to the same individual (structural)?
  • Does it reflect a state of affairs (attribute) or represent a capability (operation)?
  • Is there some overlap with already known features?

The last question brings about the core of requirements analysis, i.e. how to deal with variants, consolidate applications, and manage changes.


Categories and rules are the dual facets of requirements purpose, namely how to classify objects and operations and deal with variants.

  • Categories take a declarative approach that relies on static classifications to describe variants. Starting with partitions, analysts will have to distinguish between structural variants (associated with specific features) and functional ones (associated with the state of the same set of features).
  • Rules take a procedural approach with the equivalent of dynamic classifications to deal with variants. As for categories, analysts will have to decide between the equivalent of structural variants (rules to be enforced for the whole of execution paths) and functional ones (rules to be evaluated along execution paths).

Whereas each option may have its benefits depending on the nature of variants, the primary factor when selecting a scheme should be its consistency, with regard to existing applications on one hand, and between the descriptions of new objects and activities on the other.

Further Readings

A personal view of SysML Block Diagrams

September 17, 2015


SysML is a UML reconfiguration (subset, extensions, and profile) dedicated to systems modelling. The introduction of blocks as the basic unit of systems description can be seen as its pivotal change, in particular when block diagrams are used as substitutes for class ones.


Blurred block (Christo)

But the generalized use of block diagrams may raise a few questions: should their scope be limited? What kind of systems are targeted? What kind of models are to be built? And what are the benefits for modelling processes?

Scope: Actual Objects & UML blind spot

The description of instances has always been a well-known weakness of UML. To be sure, object diagrams were part of UML’s initial specifications; yet they were jettisoned due to their lack of clear definition and purpose, and finally “overlooked” in UML’s later versions.

That neglect has left a blind spot in UML’s modeling apparatus: while component and deployment diagrams are available to deal with logical and physical instances at the end of the engineering process, UML 2.5 has nothing equivalent for the context “as-it-is” at development process inception.

So SysML’s new focus on physical systems offers a worthwhile opportunity to make up for the disappointments of object diagrams, introducing the block diagram as the tool of choice for requirements specifications relative to existing systems and environments, alongside class diagrams for the definition of their logical counterparts, and component and deployment diagrams for logical and physical implementations.

Such a restricted focus on physical instances would meet the parsimony principle (aka Occam’s razor) and would provide for continuity with UML and clarity of semantics and purposes.

Blocks can be used all along, for physical as well as logical components

Actual objects: a blind spot for UML, a blurred one for SysML

But that opportunity is to be missed if, instead of focusing on the blind spot of requirements with regard to actual configurations, block diagrams are used indiscriminately for instances and classes, blurring the critical distinction between physical and logical components, and introducing overlaps and semantic ambiguities across diagrams.

Target: Mechanical vs Logical Systems

Whereas SysML clearly suits the needs of mechanical and civil engineering, its preferred hierarchical approach may fall short when systems cannot be reduced to equipment built from hardware and software.

Nowadays, with software colonizing every nook and cranny of actual and virtual worlds (soon to be known as the Internet of Things), a broader systemic approach may be necessary, with systems modeled as collaborative frameworks between agents, devices, and computer systems.


Systemic designs are better driven by functional architectures

Being by nature logical, distributed, heterogeneous, and in continuous mutation, these systems are not easily described by hierarchies of physical components. And even when blocks can be used to represent hardware and software, separately or jointly, block diagrams tend to privilege trees of physical components intertwined by logical or functional associations.

That bias in favor of physical architectures may put functional ones in the back seat; as a consequence, critical software alternatives may be shrouded, bypassed, or emerge at lower levels of composition; too late and too local for sound architectural decision-making.

Models: Requirements vs Design

The bias introduced by block diagrams may be of no consequence as far as SysML is used for requirements specifications. But that may not be the case if it is also used for design, especially if block diagrams are used all along without a transition being clearly marked between requirements and design.

At issue here is the qualitative and quantitative edge of software capabilities over hardware ones: a prejudice for block hierarchies could compromise the plasticity (ability to change architecture without affecting performances) and versatility (ability to perform different activities without changing the architecture) of systems designs.

Moreover, apart from being inherently more rigid than networks, block hierarchies are built along a single dimension combining logical and physical features. The overall plasticity of such hierarchies is to be constrained by the plasticity of their physical elements, and their versatility will be similarly limited by hardware constraints on software interfaces.

Finally, as the semantics of the ports and flows used to define communication between blocks don’t force an explicitly typed distinction between hardware and software, mixed designs are likely to limit systems modularity. That points to the shortcomings of untyped modeling constructs.

Processes: Overheads of Untyped Constructs

Paradoxically for a supposedly specific profile, SysML’s blocks can be seen as untyped constructs to the extent that they apply to both physical and logical objects, as well as constraints.

Like their programming counterparts, untyped modeling constructs induce ambiguities which have to be clarified by comments or additional constructs. That is where SysML requirements diagrams come into view.
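To make the typing issue concrete, here is a minimal sketch in Python (illustrative names only; SysML defines no such API): distinct physical and logical block types let a model checker reject ambiguous connections that a single untyped construct would accept silently.

```python
# Typed vs untyped constructs: a toy model checker, illustrative names only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    name: str

class PhysicalBlock(Block): pass           # hardware, devices, sites
class LogicalBlock(Block): pass            # software, constraints

def connect(a: Block, b: Block) -> tuple:
    # with a single untyped Block this check would be impossible
    if isinstance(a, PhysicalBlock) != isinstance(b, PhysicalBlock):
        raise TypeError(f"mixed physical/logical connection: {a.name} - {b.name}")
    return (a, b)

connect(PhysicalBlock("sensor"), PhysicalBlock("bus"))       # accepted
# connect(PhysicalBlock("sensor"), LogicalBlock("rule"))     # raises TypeError
```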

To begin with, requirements are meant to be the source of engineering processes before anything can be assumed with regard to their format or modeling constructs.

Given that premise, usually accepted by leading tool providers, requirements are to be expressed through text; the very notion of a “requirements diagram”, which suggests some formalism, may sound like an oxymoron, and “requirements editor” would be less confusing.

Alternatively, e.g. for scientific applications, if that assumption can be dropped, requirements can be directly captured using formal expressions and constraints.

Whereas SysML supports both options, the use of untyped blocks may beget redundancy and confusion. Since block diagrams can be used for (physical) targets as well as for (logical) constraints, three policies can be considered:

  • Text: requirements are documented with their own diagram and linked to physical blocks (a).
  • Constraint: requirements are represented by logical blocks linked to physical ones (b).
  • Text and constraint: requirements are documented with their own diagram and linked to constraints represented by logical blocks, themselves linked to physical blocks (c).

Three ways to document requirements: (a) text, (b) constraint, (c) text and constraint.

An embarrassment of riches may be especially challenging when the problem is to bring some order to unstructured and ambiguous inputs. In the case of requirements traceability, adding layers of dependencies may rapidly result in an acute “spaghetti syndrome”.

Moreover, the apparently benign choice of diagrams may compound the risks on design mentioned above by masking the critical transition from requirements (documented as such or by constraints) to architectures (as described by physical blocks).


The proper use of block diagrams has to make up for their being simultaneously too restrictive and too permissive.

With regard to structures (and consequently architectures) block diagrams strongly lean towards physical hierarchies; to quote: “blocks provide a general-purpose capability to model systems as trees of modular components.”

With regard to semantics blocks put no limit on what can be represented; to quote: “The specific kinds of components, the kinds of connections between them, and the way these elements combine to define the total system can all be selected according to the goals of a particular system model.”

Since, to quote: “SysML blocks can be used throughout all phases of system specification and design, and can be applied to many different kinds of systems”, no wonder block diagrams tend to push class diagrams and their sound and consistent semantics to the margins.

These pitfalls can be avoided by using SysML as a typical UML profile, in particular by limiting the use of block diagrams to the modelling of systems as-they-are, and using standard UML2 diagrams with SysML profile and stereotypes otherwise.

PS. SysML 1.4 appears to make amends by deprecating or clarifying some redundant or confusing elements with regard to UML.

Further Readings

External Links

Agile Delivery & Dynamic Programming

September 7, 2015


Business driven development is arguably the cornerstone of the agile development model. On one side it means business value set by users and stakeholders, on the other side it entails continuous and just-in-time delivery; what happens in-between is set by backlogs (for development) and product increments (for delivery).


Continuous releases and discrete delivery  (Alex Prager)

A sanguine understanding of continuous releases may suggest that planning is no longer relevant, and that deployment can be carried out “on-the-fly”. But that would assume stakeholders and product owners ready to put aside roadmaps, overlook milestones, and more generally forget that time is money; it would also go against a basic objective of agile, namely that developments must be driven by business needs, and products delivered just-in-time for best value. Given the well-established track record of dynamic programming for manufacturing processes, could the technique be usefully applied to agile engineering processes?

Delivery, Deployment, & Continuity

Continuity doesn’t mean synchronization: business, engineering, and operations are governed by different concerns set along different time-frames. Some buffering is needed, materialized by the distinction between releases (engineering concerns) and deployment (business and operational concerns).

A time for every purpose: Epics & business roadmap (a), associated backlogs of users’ stories (b), released features (c), architectural capabilities (d), deployed components (e).


Such distinctions introduce both overlapping (with business time-frame) and discontinuity (between development and deployment):

  • Product roadmaps are set in business time-frames and determine development and deployment time-frames.
  • Development time is set by product roadmaps and runs clockwise from project inception to software releases (a>b>c).
  • Deployment time is also set relative to product roadmaps but it runs counterclockwise, from product deployment as planned by business back to released software components (c<e).

Development and deployment runs can be compared to crews tunneling through a mountain from both sides; where and when they meet leaves room for adjustments. Yet more is at stake in the meeting between development completion and deployment inception. Apart from time, adjustments may also bear on formats and contents; and given the specificity of development and deployment purposes, their adjustment may also be seen as the morphing of continuous software releases (project perspective) into discrete increments (product perspective).

Dynamic Programming

Dynamic programming (aka multistage programming) is a problem solving method that combines two principles:

  • Divide & conquer is a general purpose strategy that deals with the intrinsic complexity of problems by breaking them down into collections of simpler sub-problems to be solved separately depending on sequencing constraints.
  • Recursion deals with the lack of complete or accurate information upfront, solving the problem in stages rather than as one entity. Each stage is optimized separately on the basis of the current state reflecting decisions taken at previous stages, the optimality principle guaranteeing the optimality of the final outcome.

That incremental and iterative approach clearly befits the tenets of the agile development model.

Twofold Planning

As noted above, and whatever the technique, agile processes entwine two phases, one for development, the other for deployment, each with its own planning:

  • The development planning, epitomized by backlog management, deals with the definition of work units and their sequencing.
  • The deployment planning deals with the merging of software releases into products and their incremental deployment.

Project work units are sequenced (backlog), Product increments are merged, and both are dynamically adjusted around their nexus.

That makes for multiple dynamics, first for updated backlogs, then for updated deployment targets, and finally for possible feedback through their nexus. That can only be achieved with dynamic programming.

Backlog: Multistage sequencing

Backlogs are used to manage work units targeting self-contained requirements items; they can be represented by graphs with nodes standing for work units and arcs weighted by constraints.

Basically, the problem is to optimize the development of a given set of items given users’ priorities, technical constraints, and resources availability. When all the information is available upfront, optimum solutions can be obtained with simple Shortest Path algorithms. Yet, given the iterative and exploratory nature of agile processes, backlogs are meant to be updated as the project advances, taking advantage of improved knowledge:

  • Users may introduce new items, remove existing ones, or change their priorities due to a better understanding of the requirements space.
  • Engineers may also introduce new items (e.g for technical debt) or reconsider technical difficulties and dependencies due to a better understanding of the solutions space.

Dynamic reordering of backlogs: looking forward for the optimum path to completion.

Dynamic programming is introduced in order to support step-wise decisions optimizing the whole process:

  • Backlog states (t1 & t2) are defined by the remaining work units, rankings, and feasibility constraints.
  • Each stage redefines the optimum path to completion taking into account the current state and updated information. Recursive computation being based on the summary information etched in states, it ensues that all future decisions can be selected optimally without recourse to information regarding previously made decisions.

Given a set of feasible paths (as defined by technical dependencies and time), the aim at each stage is to select the optimum path for the remaining units based on current state. Optimization functions will typically consider users’ value, learning curve and associated risks, and resources availability and costs.
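As a sketch of what such stage-wise optimization could look like, the toy Python program below reorders a backlog by memoized recursion; the items, values, efforts, precedence constraint, and the discounting of later deliveries are all assumptions made for the example, not figures from the post.

```python
# Multistage backlog sequencing, toy model: each stage picks the next work
# unit so as to maximize total discounted value under precedence constraints.
from functools import lru_cache

ITEMS = {"a": (8, 2), "b": (5, 1), "c": (9, 3), "f": (4, 1)}   # id: (value, effort)
PRECEDES = {("a", "c")}    # 'a' must be completed before 'c'
DISCOUNT = 0.9             # earlier delivery is worth more

def feasible(done: frozenset, nxt: str) -> bool:
    return all(p in done for (p, s) in PRECEDES if s == nxt)

@lru_cache(maxsize=None)
def best(done: frozenset, t: int) -> tuple:
    """Optimal (value, path) for the remaining units, given the current state."""
    candidates = [i for i in ITEMS if i not in done and feasible(done, i)]
    if not candidates:
        return (0.0, ())
    options = []
    for i in candidates:
        value, effort = ITEMS[i]
        v, path = best(done | {i}, t + effort)
        options.append((v + value * DISCOUNT ** t, (i,) + path))
    return max(options)

value, path = best(frozenset(), 0)
print(path, round(value, 2))
```

When users or engineers update the backlog, the cache is dropped and best is re-run from the state reached so far: the summary state (completed units, elapsed time) is all the recursion needs, which is the point made above about the optimality principle.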

As illustrated below, nodes can represent grouped items, e.g when several projects have to share resources or releases are to be regrouped.

Deployment: Dynamic merging

Given a set of released software components, the aim of deployment planning is to decide which increment to add to the deployed product. Assuming that technical concerns have already been dealt with by releases, the objective at each stage is to select the items maximizing the ROI of the deployed product. It must be noted that, contrary to the development algorithm looking forward for the optimization, the deployment algorithm selects the optimum path by looking backward at the ROI of deployed products.

Dynamic reordering of deployments: looking backward for the optimum path to completion.

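A matching sketch for the deployment side, under the same toy assumptions (the roi valuation, the synergies, and the increment size are invented for the example): at each stage the increment is chosen by looking backward at the ROI of the resulting deployed product.

```python
# Stage-wise deployment: pick the increment maximizing the ROI of the
# deployed product as a whole (all names and figures are illustrative).
from itertools import combinations

VALUES = {"a": 8, "b": 5, "c": 9, "f": 4}      # released components
SYNERGIES = {frozenset("ab"): 3.0}             # features worth more together

def roi(deployed: frozenset) -> float:
    base = sum(VALUES[i] for i in deployed)
    bonus = sum(v for s, v in SYNERGIES.items() if s <= deployed)
    return base + bonus - 0.5 * len(deployed)  # flat operating cost per item

def next_increment(deployed: frozenset, released: frozenset, k: int = 2) -> frozenset:
    """Backward-looking choice: the best increment of at most k components."""
    pool = released - deployed
    candidates = [frozenset(c) for n in range(1, k + 1)
                  for c in combinations(sorted(pool), n)]
    return max(candidates, key=lambda inc: roi(deployed | inc), default=frozenset())

print(sorted(next_increment(frozenset(), frozenset(VALUES))))
```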

But the backward impact of deployment optimization can go further and affect backlogs.


Shared ownership and continuous delivery are two main pillars of the agile development model, the former giving the development team full authority and responsibility, the latter ensuring that users keep a firm hand on the helm. Yet, as already noted, development, deployment, and business are governed by different time-frames, which could induce some friction, e.g. if business units were forced to synchronize product deployments with software releases. While severe disruptions can be avoided if releases and deployments are managed separately, development teams cannot be completely sheltered from changes in business or operational priorities. That is where the dynamic reassessment of optimum paths helps: assuming a change in deployment planning (nq instead of op), the new priorities are fed back into development (aka backlog) rankings, and the optimum path is updated.


Change in deployment priorities (nq instead of op) can be fed back to backlog planning (f before a).

It must be noted that such feedback only affects ranks and leaves contents unchanged.
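Continuing the toy model, that feedback can be sketched as a reweighting of backlog rankings, the items themselves being left untouched (illustrative names and factor):

```python
# Feedback from deployment to backlog: boost the ranking weights of the
# items promoted by deployment planning; contents are left unchanged.
def reprioritize(values: dict, boosted: set, factor: float = 3.0) -> dict:
    return {i: v * factor if i in boosted else v for i, v in values.items()}

print(reprioritize({"a": 8, "f": 4}, boosted={"f"}))   # 'f' now outranks 'a' (f before a)
```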

Conclusion: Business driven, Just-in-time delivery, & Lean Inventories

Dynamic programming appears as a primary factor with regard to three core tenets of the agile development model:

  • Business driven development doesn’t mean that developments are pushed by requirements but that they are pulled by deployment.
  • Just-in-time delivery can only be achieved with the help of a buffer between development and deployment. This buffer should not be confused with an inventory as it has nothing to do with product quantities.
  • On the contrary, this buffer, combined with dynamic programming, plays a critical role in the cutback of intermediate documents and models (aka development inventories).

Further Readings

External Links

Enterprise Systems & the OS Kernel Paradigm

September 1, 2015


Given the ubiquity of information and communication technologies on one hand, the falling apart of technical fences between systems, enterprises, and business environments on the other hand, applying the operating system (OS) paradigm to enterprise architectures seems a logical move.

Users and access to services (Queuing at a Post Office in French West Indies)

Borrowing the blueprint of computer operating systems, enterprise operating systems (EOS) would be organized around a kernel managing shared resources (people, hardware, and software) and providing services to business, engineering, or operational processes.

Gerrymandering & Layers

When IT was neatly fenced behind computer screens, managers could keep a clear view of organization, roles, and responsibilities. But with physical hedges replaced by clouded walls, the risk is that IT may appear as the primary constituent of enterprise architecture. Given the lack of formal arguments against what may be a misguided understanding, enterprise architects have to rely on pragmatic answers. Yet they could prop up their arguments by upending the very principles of IT operating systems and restore the right governance footprint.

To begin with, turfs must be reclaimed, and that can be done if the whole of assets and services are layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

Problems and solutions must be set along architecture layers

EA must separate and federate concerns along architecture layers

Then, reclaiming must also include governance, and for that purpose EOS are to rely on a comprehensive and consistent understanding of assets, people and mechanisms across layers:

  • Physical assets, including hardware.
  • Non physical assets, including software.
  • Agents (identified people with organizational responsibilities) and roles.
  • Events (changes in the state of objects, processes, or expectations) and activities.

Mimicking traditional OS, that could be achieved with a small and compact conceptual kernel of formal concepts bearing out the definitions of primitives and services for the whole of enterprise processes.

EOS’s Kernel: 12 concepts

A wealth of definitions may be the main barrier to enterprise architecture as a discipline because such profusion necessarily comes with overlaps, ambiguities, and inconsistencies. Hence the benefit of relying on a small set of concepts covering the whole of enterprise systems:

  • Six for individuals: actual elements (objects, events, processes) and symbolic ones (surrogate objects, activities, roles).
  • One for containers: actual (locations) or symbolic (packages).
  • One for partitions: of behaviors (branches) or of surrogates (power types).
  • Four for connectors: actual (channels and synchronization) and symbolic (references and flows).

Governance calls for comprehensive and consistent semantics

Considering that nowadays business entities (enterprise), services (systems), and software components (technology) share the same distributed world, these concepts have to keep some semantic consistency across layers whatever their lexical avatars. To mention two critical examples, actors (aka roles) and events must be consistently understood by business and system analysts.

Those concepts are used to describe enterprise systems building blocks which can be combined with a small set of well known syntactic operators:

  • Two types of connectors depending on target: instances (associations) or types (inheritance).
  • Three types of connection: nondescript, aggregation, and composition.

Syntactic operators are meant to be applied independently of targets semantics

Again, Occam’s razor should be the rule: just as semantics are consistently defined across architecture layers, the same syntactic operators are to be uniformly applied to artifacts independently of their semantics.
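A minimal sketch of that kernel vocabulary, assuming a Python rendering (the enums transcribe the twelve concepts and the operators listed above; the tagging discipline is an illustration, not a specification):

```python
# The 12 kernel concepts and the syntactic operators as closed vocabularies;
# every artifact, whatever its layer, is tagged with exactly one archetype.
from dataclasses import dataclass
from enum import Enum, auto

class Archetype(Enum):
    OBJECT = auto(); EVENT = auto(); PROCESS = auto()        # actual individuals
    SURROGATE = auto(); ACTIVITY = auto(); ROLE = auto()     # symbolic individuals
    CONTAINER = auto()        # actual (location) or symbolic (package)
    PARTITION = auto()        # of behaviors (branch) or surrogates (power type)
    CHANNEL = auto(); SYNCHRONIZATION = auto()               # actual connectors
    REFERENCE = auto(); FLOW = auto()                        # symbolic connectors

class Layer(Enum):
    ENTERPRISE = auto(); SYSTEM = auto(); PLATFORM = auto()

class Operator(Enum):         # applied uniformly, independently of semantics
    ASSOCIATION = auto()      # connector targeting instances
    INHERITANCE = auto()      # connector targeting types
    NONDESCRIPT = auto(); AGGREGATION = auto(); COMPOSITION = auto()

@dataclass(frozen=True)
class Artifact:
    name: str
    archetype: Archetype
    layer: Layer

actor = Artifact("cashier", Archetype.ROLE, Layer.ENTERPRISE)
```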

Kernel’s Functions

Continuing with the kernel analogy, based on a comprehensive and consistent description of resources, the traditional OS functions can be reinterpreted with regard to architecture capabilities implemented across layers:

  • What: memory of business objects and operations (enterprise), database logical entities (systems), database physical records (platforms).
  • Who: roles (enterprise), interfaces (systems), and devices (platforms).
  • When: business events (enterprise), logical events (systems), and transaction managers (platforms).
  • Where: sites (enterprise), logical processing units (systems), network architecture (platforms).
  • How: business processes (enterprise), applications (systems), and CPU (platforms).

Traceability of Capabilities across architecture layers

That fits with the raison d’être of a kernel which is to combine core functions in order to support the services called by processes.


Still milking the OS analogy, a primary goal of an enterprise kernel is to support a seamless integration of services:

  1. Business driven: the definition of services must be directly and unambiguously associated to business ends and means across enterprise layers.
  2. Traceability: they must ensure the transparency of the tie-ups between organization and processes on one hand, and information systems on the other.
  3. Plasticity: they must facilitate the alignment of changes in business objectives, organization and supporting systems.

A reasoned way to achieve these objectives is to classify services with regard to the purpose of calling processes:

  • Business processes deal with the transactions between the enterprise and its environment.
  • Engineering processes deal with the development of enterprise resources independently of their use.
  • Operational processes deal with the management of enterprise resources when directly or indirectly used by business processes.

Enterprise Operating System: Layers & Services

That classification can then be crossed with architecture levels:

  • At enterprise level services are bound to assets to be used by business, engineering, or operational processes.
  • At systems level services are bound to functions supporting business, engineering, or operational processes.
  • At platform level services are bound to resources used by business, engineering, or operational processes.

As services will usually rely on different functions across layers, the complexity will be dealt with by kernel primitives and masked behind interfaces.

Services called by processes can combine different functions directly (basic lines) or across layers (dashed lines).


Finally, that organization of services along architecture layers may be aligned with governance levels: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources.

Further Reading

Agile Architectures: Versatility meets Plasticity

June 22, 2015


At enterprise level agility can be understood as a mix of versatility and plasticity, the former an attribute of function, the latter of form:

  • Versatility: enterprise ability to perform different activities in changing environments without having to change its architectures.
  • Plasticity: enterprise ability to change its architectures without affecting its performances.

Agility: Forms & Performances (P. Pénicaud)

Combining versatility and plasticity requires a comprehensive and consistent view of assets (architectures) and modus operandi (processes) organized with regard to change. And that can be achieved with model based systems engineering (MBSE).

MBSE & Change

Agility is all about change, and if enterprise governance is not to be thrown aside, decision-making has to be supported by knowledgeable descriptions of enterprise objectives, assets, and organization.

If change management is to be the primary objective, targets must be classified along two main distinctions:

  • Actual (business context and organization) or symbolic (information systems).
  • Objects (business entities or system surrogates) or activities (business processes or logic).

Comprehensive and consistent descriptions of actual and symbolic assets (architectures) and modus operandi (processes) with regard to change management.

The two axes determine four settings supporting transparency and traceability:

  • Dependencies between operational and structural elements.
  • Dependencies between actual assets and processes and their symbolic representation as systems surrogates.

Versatility and plasticity will be obtained by managing changes and alignments between settings.

Changes & Alignments

Looking for versatility, changes in users’ requirements must be rapidly taken into account by applications (changes from actual to symbolic).

Looking for plasticity, changes in business objectives are meant to be supported by enterprise capabilities (changes from operational to structural).

The challenge is to ensure that both threads can be weaved together into business functions and realized by services (assuming a service oriented architecture).

With the benefits of MBSE, that could be carried out through a threefold alignment:

  • At users level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
  • At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
  • At enterprise level the objective is to ensure that the enterprise capabilities are congruent with its business objectives, i.e that they support its business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.

Versatility comes from users’ requirements, plasticity from architectures capabilities.

That would make agility a concrete endeavor across enterprise, from business users and applications to business processes and architectures capabilities.

Further Reading

Use Cases & Action Semantics

June 8, 2015


Use cases are meant to describe dialogues between users and systems, yet they usually don’t stand in isolation. On one hand they stem from business processes, on the other hand they must be realized by system functionalities, some already supported, others to be newly developed. As bringing the three facets within a shared modeling paradigm may seem a long shot, some practical options may be at hand.


Use case & action semantics (Robert Doisneau)

To begin with, a typical option would be to use the OMG’s Meta-Object Facility (MOF) to translate the relevant subset of BPM models into UML ones. But then if, as previously suggested, use cases can be matched with such a subset, a more limited and pragmatic approach would be to introduce some modeling primitives targeting UML artifacts.

From BPM to UML: Meta-model vs Use cases.


For that purpose it will be necessary to clarify the action semantics associated with the communication between processes and systems.

Meta-models drawbacks

Contrary to models, which deal with instances of business objects and activities, meta-models are supposedly blind to business contents as they are meant to describe modeling artifacts independently of what they stand for in business contexts. To take an example, Customer is a modeling type, UML Actor is a meta-modeling one.

To stress the point, meta-languages have nothing to say about the semantics of targeted domains: they only know the language constructs used to describe domains contents and the mapping rules between those constructs.

As a consequence, whereas meta-languages are at their best when applied to clear and compact modeling languages, they scale poorly because of the exponential complexity of rules and the need to deal with each and every language idiosyncrasy. Moreover, performance degrades critically with ambiguities, because even limited semantic uncertainties often pollute the meaning of trustworthy neighbors, generating cascades of doubtful translations (see Knowledge-based model transformation).


Factoring a core with trustworthy semantics (b) will prevent questionable ones from tainting the whole translation (a).

Yet, scalability and ambiguity problems can be overcome by applying a core of unambiguous modeling constructs to a specific subset, and that can be achieved with use cases.

Use Cases & UML diagrams

As it happens, the use case can be seen as the native UML construct, the original backbone around which preexisting modeling artifacts have been consolidated, becoming a central and versatile hub of UML-based development processes:

  • Sequence diagrams describe collaborations between system components but can also describe interactions between actors and systems (i.e use cases).
  • Activity diagrams describe the business logic to be applied by use cases.
  • Class diagrams can be used for the design of software components but also for the analysis of business objects referenced by use cases.
  • State diagrams can be used for the behavior of software components but also for the states of business objects or processes along execution paths of use cases.
Use cases at the hub of UML diagrams

Use case as Root Sequence

Apart from being a cornerstone of UML modeling, use cases are also its doorway, as they can be represented by a simple sequence diagram describing interactions between users and systems, i.e. between business processes and applications. So the next step should be to boil down the semantics of those interactions.

Use Cases & Action Semantics

When use cases are understood as gateways between business users and supporting systems, it should be possible to characterize the basic action semantics of messages in terms of expectations with regard to changes and activity:

  • Change: what process expects from system with regard to the representation of business context.
  • Activity: what process expects from system with regard to its supporting role.

Basic action semantics for interactions between users (BPM) and systems (UML)

Crossing the two criteria gives four basic situations for users-to-system action semantics:

  • Reading: no relevant change in the environment has to be registered, and no activity has to be supported.
  • Monitoring: no relevant change in the environment has to be registered, but the system is supposed to keep track of some activity.
  • Achievement: the system is supposed to keep track of some specific change in the environment without carrying out any specific activity.
  • Accomplishment: the system is supposed to keep track of some specific change in the environment and support the associated activity.

When use cases are positioned at the center of UML diagrams, those situations can be associated with modeling patterns.
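Those four situations amount to crossing two booleans, which a few lines of Python make explicit (the flag names are illustrative):

```python
# The two criteria (registered change, supported activity) crossed into the
# four basic action semantics of users-to-system messages.
from enum import Enum

class ActionSemantics(Enum):
    READING = (False, False)          # no change registered, no activity
    MONITORING = (False, True)        # no change registered, activity tracked
    ACHIEVEMENT = (True, False)       # change registered, no activity
    ACCOMPLISHMENT = (True, True)     # change registered, activity supported

def classify(registers_change: bool, supports_activity: bool) -> ActionSemantics:
    return ActionSemantics((registers_change, supports_activity))

assert classify(False, True) is ActionSemantics.MONITORING
```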

Use Cases & Modeling Patterns

If use cases describe what business processes expect from supporting systems, they can be used to map action semantics to UML diagrams:

  • Reading: class diagrams with relevant queries.
  • Monitoring: activity diagrams with reference to class diagrams.
  • Achievement:  class diagrams with associated state diagrams.
  • Accomplishment:  activity diagrams with associated state diagrams and reference to class diagrams.

From use cases action semantics to UML diagrams
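Continuing the ActionSemantics sketch above, that mapping boils down to a lookup table (the diagram labels transcribe the list):

```python
# From action semantics to UML diagram patterns (transcribed from the list).
PATTERNS = {
    ActionSemantics.READING:        ("class diagram with relevant queries",),
    ActionSemantics.MONITORING:     ("activity diagram",
                                     "reference to class diagrams"),
    ActionSemantics.ACHIEVEMENT:    ("class diagram",
                                     "associated state diagrams"),
    ActionSemantics.ACCOMPLISHMENT: ("activity diagram",
                                     "associated state diagrams",
                                     "reference to class diagrams"),
}
```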

While that catalog cannot pretend to fully support requirements, the part it does support comes with two critical benefits:

  1. It fully and consistently describes the interactions at architecture level.
  2. It can be unambiguously translated from BPM to UML.

On that basis, use cases provide a compact and unambiguous kernel of modeling constructs bridging the gap between BPM and UML.

Further Reading

Inducing Analysis Patterns from Design ones: a look in rear view mirror

May 20, 2015


Assuming patterns are meant to chart the path from problems to solutions, and given the foundational contribution of the Gang of Four (GoF), it may help to look at functional problems (aka analysis patterns) backward, from the perspective of the now well-established solutions framework (aka design patterns).


Matching Patterns (M.Kippenberger)

Patterns & Models

Patterns are handrails from stereotyped problems to well known and tested solutions, bringing the benefits of reuse for costs, quality, and traceability. Set within the Model Driven Architecture, patterns can be regrouped depending on their source and target:

  • Functional patterns deal with the representation of business objects and activities by system symbolic surrogates. They stand between computation and platform independent models. Whereas functional patterns are not to be confused with business patterns (used with CIMs), they are also known as analysis patterns when targeting PIMs.
  • Design patterns deal with the implementation of symbolic surrogates as software components.  They stand between platform independent and platform specific models.
MDA layers clearly coincide with reusable assets

Patterns and MDA layers

As it happens, there is a striking contrast between the respective achievements of design and functional (or analysis) patterns: the former, firmly established by the pivotal work of the GoF, are nowadays widely and consistently applied to software design; the latter, frustrated by the medley of requirements capture, remain fragmented and confined to proprietary applications. Given that discrepancy, it may be worth looking in the rear-view mirror and considering functional patterns from the perspective of design ones.

Objects, Classes, & Types

Design patterns are part and parcel of the object oriented (OO) approach and built on some of its core tenets, in particular:

  • Favor object composition over class inheritance.
  • Program to an interface, not an implementation.

Whereas the meaning of classes, implementation, and inheritance is specific to OO design, the principles can nonetheless bear upon analysis, provided they are made to apply to types and composition, whose semantics have a broader scope.

To begin with, if classes describe the software implementation of symbolic objects, and inheritance the relationships between them, types only pertain to their functional capabilities. As a corollary, the semantics of inheritance and composition make no difference with regard to types, as opposed to classes. The lesson for functional patterns would therefore be to favor types whenever possible, i.e. when functional capabilities don’t depend on identities. Then, “programming to interfaces” would translate (for functional patterns) into describing communications between types, an approach best represented by aspect oriented analysis.

Not by chance, the lessons drawn above appear to be directly supported by the two criteria used by the GoF to classify design patterns: scope and purpose. With regard to scope, class patterns deal with inheritance relationships between static descriptions fixed at compile-time, while object patterns also deal with relationships between instances, which can be changed at run-time. With regard to purpose, creational patterns deal with object instantiation, structural ones carry on with composition, and behavioral ones with algorithms and responsibilities.
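The two tenets translate directly into code; here is a minimal Python sketch (the invoicing example anticipates the catalog below, and all names are illustrative):

```python
# "Program to an interface, not an implementation" and "favor composition
# over inheritance", applied to a toy invoicing capability.
from typing import Protocol

class Invoicing(Protocol):            # the interface: a type, not a class
    def invoice(self, amount: float) -> str: ...

class DomesticInvoicing:
    def invoice(self, amount: float) -> str:
        return f"{amount:.2f} EUR"

class ForeignInvoicing:
    def __init__(self, currency: str, rate: float):
        self.currency, self.rate = currency, rate
    def invoice(self, amount: float) -> str:
        return f"{amount * self.rate:.2f} {self.currency}"

class Sale:
    # composition: the capability is delegated, hence swappable at run-time
    def __init__(self, amount: float, invoicing: Invoicing):
        self.amount, self.invoicing = amount, invoicing
    def bill(self) -> str:
        return self.invoicing.invoice(self.amount)

print(Sale(100.0, ForeignInvoicing("USD", 1.1)).bill())   # 110.00 USD
```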

Design Patterns Catalog


The corresponding entries can be summarily described as:

  • Class/Creational: for new identified instances.
  • Object/Creational: for new parts of identified instances.
  • Class/Structural: for mixing interfaces using inheritance.
  • Object/Structural: for mixing interfaces using composition.
  • Class/Behavioral: for the definition of algorithms.
  • Object/Behavioral: for the collaboration between instances.

The next step would be to induce a catalog of functional patterns from those entries.

Inducing a Catalog of Functional Patterns

Leveraging the GoF’s design patterns, the objective is to spread the benefits upstream to the whole of the modeling process through a transparent mapping of functional and design patterns. And that could be achieved by inducing a functional catalog from its design equivalent.

With regard to scope, the objective would be to factor out the architectural impact of requirements. Hence the distinction between architecture patterns, dealing with business entities and processes consistently identified at enterprise and system levels, and aspect patterns, dealing with features that can be managed separately.

With regard to purpose, the same semantics can be applied: creational patterns deal with object instantiation, structural ones carry on with their composition and features, and behavioral ones with algorithms and responsibilities.

Functional Patterns Catalog: examples


  • Architecture/Creational: for the mapping of business entities to their software surrogates. For instance, customers must be identified as parties (architecture/creational) before being fleshed out with specific features.
  • Aspect/Creational: for the intrinsic components of identified business entities. For instance, once created and identified as a customer, a foreign one is completed with the specific mandatory features of foreigners.
  • Architecture/Structural:  for inherited (i.e intrinsic) features of identified business entities.  For instance, foreign sales inherit specific invoice features.
  • Aspect/Structural:  for associated (intrinsic or otherwise) features of identified business entities. For instance, foreign sales delegate invoicing in foreign currencies.
  • Architecture/Behavioral: for the definition of business rules attached to entities and managed by business domains. For instance, invoicing rules are specialized for foreign currencies.
  • Aspect/Behavioral: for the definition of collaboration between entities independently of business domains. For instance allocating responsibilities at run-time depending on the invoice.

As can be seen with the MDA diagram above, this catalog is neutral, as its entries are congruent with the categories used by the main modeling methodologies: use cases, domains, business rules, services, objects, aspects, etc.

Recommended Reading

Gamma E., Helm R., Johnson R., Vlissides J.M., “Design Patterns: Elements of Reusable Object-Oriented Software”, Addison-Wesley (1994).

Further Reading

Feasibility & Capabilities

May 2, 2015


As far as systems engineering is concerned, the aim of a feasibility study is to verify that a business solution can be supported by a system architecture (requirements feasibility) subject to some agreed technical and budgetary constraints (engineering feasibility).


Project Rain Check  (A. Magnaldo)

Where to Begin

A feasibility study is based on the implicit assumption of slack architecture capabilities. But since capabilities are set with regard to several dimensions, architectures boundaries cannot be taken for granted and decisions may even entail some arbitrage between business requirements and engineering constraints.

Using the well-known distinction between roles (who), activities (how), locations (where), control (when), and contents (what), feasibility should be considered for supporting functionalities (between business processes and systems) and implementation (between functionalities and platforms):


Feasibility with regard to Systems and Platforms

Depending on priorities, feasibility could be assessed from three perspectives:

  • Focusing on system functionalities (e.g with use cases) implies that system boundaries are already identified and that the business logic will be defined along with users’ interfaces.
  • Starting with business requirements puts business domains and logic in the driving seat, making room for variations in system functionalities and boundaries.
  • Operational requirements (physical environments, events, and processes execution) put the emphasis on a mix of business processes and quality of service, thus making software functionalities a dependent variable.

In any case a distinction should be made between requirements and engineering feasibility, the former set with regard to architecture capabilities, the latter with regard to development resources and budget constraints.

Requirements Feasibility & Architecture Capabilities

Functional capabilities are defined at system boundaries, and if all feasibility options are to be properly explored, architecture capabilities must be understood as a trade-off between five intrinsic factors, e.g.:

  • Security (entry points) and confidentiality (domains).
  • Compliance with regulatory constraints (domains) and flexibility (activities).
  • Reliability (processes) and interoperability (locations).

Feasible options must be set within the Capabilities Pentagon

Feasible options could then be figured out by points set within the capabilities pentagon. Given metrics on functional requirements, their feasibility under the non functional constraints could be assessed with regard to cross capabilities. And since the same five core capabilities can be consistently defined across enterprise, systems, and platforms layers, requirements feasibility could be assessed without prejudging architecture decisions.
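As a sketch of how such an assessment could be mechanized (the normalized scores and the simple axis-by-axis test are assumptions, not a method from the post):

```python
# Feasibility as a point within the capabilities pentagon: a requirement is
# feasible if no axis exceeds the corresponding architecture capability.
CAPABILITIES = ("who", "what", "how", "where", "when")

def feasible(required: dict, available: dict) -> bool:
    return all(required[c] <= available[c] for c in CAPABILITIES)

required  = {"who": .6, "what": .4, "how": .5, "where": .3, "when": .7}
available = {"who": .8, "what": .5, "how": .6, "where": .4, "when": .9}
print(feasible(required, available))    # True: the point fits the pentagon
```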

Business Requirements & Architecture Capabilities

One step further, the feasibility of business and operational objectives (the “Why” of the Zachman framework) can be more easily assessed if set on the outer range and mapped to architecture capabilities.


Business Requirements and Architecture Capabilities

Engineering Feasibility & ROI

Finally, the feasibility of business and functional requirements under the constraints set by non functional requirements has to be translated in terms of ROI, and for that purpose the business value has to be compared to the cost of engineering the solution given the resources (people and tools), technical requirements, and budgetary constraints.
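One plausible formalization (an assumption for illustration, not a formula from the post) would net business value against operational costs over a planning horizon and divide by the engineering outlay:

```latex
\mathrm{ROI} = \frac{\sum_{t=1}^{T} \delta^{t}\,(V_t - O_t)}{E}
```

where V_t is the business value delivered in period t, O_t the operational costs, E the engineering outlay, and δ a discount factor reflecting that time is money.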


ROI assessment mapping business value against functionalities, engineering outlays, and operational costs.

That’s where the transparency and traceability of capabilities across layers may be especially useful, when alternatives and priorities are to be considered mixing functionalities, engineering outlays, and operational costs.

Further Reading

Business Processes & Use Cases

March 19, 2015


As can be understood from their theoretical basis (Pi-Calculus, Petri Nets, or State Machines), processes are meant to describe the concurrent execution of activities. Assuming that large enterprises have to keep a level of indirection between operations and business logic, it ensues that activities and business logic should be defined independently of the way they are executed by business processes.

Communication Semantics vs Contexts & Contents (G. Winograd)


For that purpose two basic modeling approaches are available: BPM (Business Process Modeling) takes the business perspective, UML (Unified Modeling Language) takes the engineering one. Yet each falls short with regard to a pivotal conceptual distinction: BPM lumps together process execution and business logic, and UML makes no difference between business and software process execution. One way out of the difficulty would be to single out communications between agents (humans or systems), and specify interactions independently of their contents and channels.


Business Process: Communication + Business Logic

That could be achieved if communication semantics were defined independently of domain-specific languages (for information contents) and technical architecture (for communication channels). As it happens, and not by chance, the outcome would neatly coincide with use cases.

Linguistics & Computers Parlance

Business and functional requirements (see Requirements taxonomy) can be expressed with formal or natural languages. Requirements expressed with formal languages, domain-specific or generic, can be directly translated into some executable specifications. But when natural languages are used to describe what business expects from systems, requirements often require some elicitation.

When that challenge is put into a linguistic perspective, two schools of thought can be considered, computational or functional.

The former approach is epitomized by Chomsky’s Generative Grammar and its claim that all languages, natural or otherwise, share an innate universal grammar (UG) supporting syntactic processing independently of meanings. Nowadays, and notwithstanding its initial favor, that “computer friendly” paradigm hasn’t gained much traction in general linguistics.

Alternatively, the functionalist school of thought considers language as a general cognitive capability deprived of any autonomy. Along that reasoning there is no way to separate domain-specific semantics from linguistic constructs, which means that requirements complexities, linguistic or business specific, have to be dealt with as a lump, with or without the help of knowledgeable machines.

In between, a third approach has emerged that considers language as a functional system uniquely dedicated to communication and the mapping of meanings to symbolic representations. On that basis it should be possible to separate the communication apparatus (functional semantics) from the complexities of business (knowledge representation).

Processes Execution & Action Languages

Assuming the primary objective of business processes is to manage the concurrent execution of activities, their modeling should be driven by events and their consequences for interactions between business and systems. Unfortunately, that approach is not explicitly supported by BPM or UML.

Contrary to the “simplex” mapping of business information into corresponding data models (e.g using Relational Theory), models of business and systems processes (e.g Petri Nets or State Machines) have to be set in a “duplex” configuration as they are meant to operate simultaneously. Neither BPM nor UML are well equipped to deal with the task:

  • The BPM perspective is governed by business logic independently of interactions with systems.
  • Executable UML approaches are centered on software process execution, and their extensions to action semantics deal essentially with class instances, feature values, and object states.

Such shortcomings are of no serious consequence for stand-alone applications, i.e. when what happens at architecture level can be ignored; but overlooking the distinction between the respective semantics of business and software process execution may critically hamper the usefulness, and even the validity, of models purporting to represent distributed interactions between users and systems. Communication semantics may help to deal with the difficulty by providing relevant stereotypes and patterns.

Business Process Models

While business process models can (and should) also be used to feed software engineering processes, their primary purpose is to define how concurrent business operations are to be executed. As far as systems engineering is concerned, that will tally with three basic steps:

  1. Dock process interactions (aka sessions) to their business context: references to agents, business objects and time-frames.
  2. Specify interactions: references to context, roles, events, messages, and time related constraints.
  3. Specify information: structures, features, and rules.

Communication with systems: dock to context (1), interactions (2), information (3).

Although modeling languages and tools usually support those tasks, the distinctions remain implicit, leaving users with the choice of semantics. In the absence of explicit guidelines confusion may ensue, e.g between business rules and their use by applications (BPM), or between business and system events (UML). Hence the benefits of introducing functional primitives dedicated to the description of interactions.

Such functional action semantics can be illustrated by the well known CRUD primitives for the creation, reading, updating and deletion of objects, a similar approach being also applied to the design of domain specific patterns or primitives supported by functional frameworks. While thick on the ground, most of the corresponding communication frameworks deal with specific domains or technical protocols without factoring out what pertains to communication semantics independently of information contents or technical channels.

Communication semantics should be independent of business specific contents and systems architectures.


But that distinction could be especially productive when applied to business processes as it would be possible to fully separate the semantics of communications between agents and supporting systems on one hand, and the business logic used to process business flows on the other hand.

Communication Semantics

Languages have developed to serve two different purposes: first as a means to communicate (a capability shared between humans and many primates), and then as a means to represent and process information (a capability specific to humans). Taking a leaf from functional approaches to linguistics, it may be possible to characterize messages with regard to action semantics, more precisely associated activity and attendant changes:

  • No change: messages relating to passive (objects) or active  (performed activity) states.
  • Change: messages relating to achievement (no activity) or accomplishment (attendant on performed activity).

Dynamics of communication context

It must be noted that such semantics can also be extended to the state of expectations.

Additionally, messages would have to be characterized by:

  • Time-frame: single or different.
  • Address space: single or different.
  • Mode: information, request for information, or request for action.

Organizing business processes along those principles would enable the alignment of BPMs with their UML counterparts.
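A sketch of such content-independent messages, with the above criteria rendered as fields of an envelope (the dataclass layout is an assumption; the business payload stays opaque):

```python
# Communication semantics as a typed envelope: action semantics plus the
# three message characteristics; business content is carried but not read.
from dataclasses import dataclass
from enum import Enum, auto

class Semantics(Enum):
    PASSIVE_STATE = auto()      # no change: passive (object) state
    ACTIVE_STATE = auto()       # no change: performed activity
    ACHIEVEMENT = auto()        # change, no attendant activity
    ACCOMPLISHMENT = auto()     # change attendant on performed activity

class Mode(Enum):
    INFORMATION = auto(); REQUEST_INFORMATION = auto(); REQUEST_ACTION = auto()

@dataclass(frozen=True)
class Envelope:
    sender: str
    receiver: str
    semantics: Semantics
    mode: Mode
    same_time_frame: bool
    same_address_space: bool
    payload: object = None      # domain-specific content, opaque at this level

msg = Envelope("teller", "accounts", Semantics.ACCOMPLISHMENT,
               Mode.REQUEST_ACTION, True, False, {"amount": 100})
```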

Use Cases as Bridges between BPM & UML

Use cases can be seen as the default entry point for UML modeling, and as such they should provide a bridge from business process models. That can be achieved if use cases are understood as a combination of interactions and activities, the former obtained from communications as defined above, the latter from business logic.

Use Cases are best understood as a combination of interactions and business logic


One step further, the distinction between communication semantics and business contents could be used to align models with systems and software architectures respectively.

Last but not least, the consolidation of models contents at architecture level would be congruent with modeling languages, BPM and UML.

Further Reading

External Links

Detour from Turing Game

February 20, 2015


Considering Alan Turing’s question, “Can machines think?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine?


Alan Turing at 4

What happens when people interact?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.


Conveying intentions

Assuming a difference in the way this is carried out by people and machines, could that difference be observed at message level?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g. alarm calls directly and immediately bound to an imminent threat) can be carried out independently of knowledge representation (e.g. information related to a danger not directly observable); in other words, linguistic capabilities for communication and symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who) : can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): contents exchanged (object, information, request, … ) are not contingent on participants type.
  • Life-span of message (When): life-cycle (instant, limited, unlimited, …) is not contingent on participants type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational,…) is not contingent on participants type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants type.

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what the things can do or can be done with.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR’s prerequisites is to improve the communication between specific domain experts on one hand, and generic knowledge managers on the other.

On that basis knowledge representation capabilities cannot be used to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.

Reference Readings

A.M. Turing, “Computing Machinery and Intelligence”, Mind 59 (1950).

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”, AI Magazine 14(1) (1993).

Further Reading



