Archive for the ‘Modeling Languages & Methods’ Category

Healthcare: Tracks & Stakes

February 8, 2018


Healthcare represents at least a tenth of developed countries’ GDP, with demography pushing it higher year after year. In principle technology could drive costs in both directions; in practice it has worked like a ratchet: on the upside, innovations keep extending the scope of expensive treatments; on the downside, institutional and regulatory constraints have hamstrung the necessary transformation of organizations and processes.

Health Care Personal Assistant (Kerry James Marshall)

As a result, attempts to spread technology benefits across healthcare activities have dwindled or melted away, even when buttressed by major players like Google or Microsoft.

But built-up pressure on budgets combined with social transformations has undermined bureaucratic barriers and incumbents’ estates, spurring initiatives from all corners: pharmaceutical giants, technology startups, healthcare providers, insurers, and of course major IT companies.

Yet the wide range of players’ fields and starting lines may be misleading: incumbents and newcomers alike are well aware of what the race is about: whatever the number of initial track lanes, they are to fade away after a few laps, spurring the front-runners to cover the whole track, alone or through partnerships. As a consequence, winning strategies will have to be supported by a comprehensive and coherent understanding of all healthcare aspects and issues, which is best achieved with ontologies.

Ontologies vs Models

Ontologies are symbolic constructs (epitomized by conceptual graphs made of nodes and connectors) whose purpose is to make sense of a domain of discourse:

  1. Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

That makes ontologies a special case of uncommitted models: like models they are set on contexts and concerns, but contrary to models their concerns are detached from actual purposes. That is precisely what is expected from a healthcare conceptual framework.
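For illustration only, the sketch below lays out such a conceptual graph for a tiny healthcare fragment using Python’s rdflib; the namespace, categories, and connectors are assumptions introduced for the example, not part of any actual healthcare ontology.

```python
# For illustration: an ontology fragment as a conceptual graph (nodes and connectors).
# Namespace, classes, and properties are assumptions, not an actual healthcare ontology.
from rdflib import Graph, Namespace, RDF, RDFS

HC = Namespace("http://example.org/healthcare#")  # hypothetical namespace

g = Graph()
g.bind("hc", HC)

# Nodes: categories of things, beings, or phenomena
g.add((HC.Patient, RDF.type, RDFS.Class))
g.add((HC.Treatment, RDF.type, RDFS.Class))
g.add((HC.Provider, RDF.type, RDFS.Class))
g.add((HC.Hospital, RDFS.subClassOf, HC.Provider))    # neutral (syntactic) connector

# Connectors: domain-specific association between categories
g.add((HC.prescribes, RDF.type, RDF.Property))
g.add((HC.prescribes, RDFS.domain, HC.Provider))
g.add((HC.prescribes, RDFS.range, HC.Treatment))

for s, p, o in g:                                     # walk the graph's nodes and connectors
    print(s, p, o)
```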

Contexts & Business Domains

Healthcare issues are set across too many domains to be effectively fathomed, not to mention followed as they change. Notwithstanding, global players must anchor their strategies to different institutional contexts, and frame their policies so as to make them transparent and attractive to other players. Such all-inclusive frameworks could be built from ontologies profiled with regard to the governance and stability of contexts:

  • Social: No authority, volatile, continuous and informal changes.
  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, periodic, changes subject to internal decision-making.
  • Personal: Customary, defined by named individuals (e.g research paper).

Ontologies set along that taxonomy of contexts could then be refined so as to target enterprise architecture layers: enterprise, systems, platforms, e.g:

A sample of Healthcare profiled ontologies

Depending on the scope and nature of partnerships, ontologies could be further detailed so as to encompass architecture capabilities: Who, What, How, Where, When.

Concerns & Architecture Capabilities

As pointed out above, a key success factor for major players would be their ability to federate initiatives and undertakings of both incumbents and newcomers, with or without partnerships. That can be best achieved with enterprise architectures aligned with an all-inclusive yet open framework, and for that purpose the Zachman taxonomy would be the option of choice. The corresponding enterprise architecture capabilities (Who, What, How, Where, When) could then be uniformly applied to contexts and concerns:

  • Internally across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).
  • Externally across context-based ontologies as proposed above.

The nexus between environment (contexts) and enterprise (concerns) ontologies could then be organized according to the epistemic nature of items: terms, documents, symbolic representations (aka surrogates), or business objects and phenomena.

Mapping knowledge to architectures capabilities

That would outline four basic ontological archetypes that may or may not be combined:

  • Thesaurus: ontologies covering terms, concepts.
  • Document Management: thesaurus and documents.
  • Organization and Business: ontologies pertaining to enterprise organization and business processes.
  • Engineering: ontologies pertaining to the symbolic representation (aka surrogates) of organizations, businesses, and systems.

Global healthcare players could then build federating frameworks by combining domain and architecture driven ontologies, e.g:

Building federating frameworks with modular ontologies designed on purpose.
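The sketch below, again purely illustrative, combines two modular ontologies (organization and business on one side, engineering surrogates on the other) into a single federating graph; all names and the bridging property are assumptions.

```python
# Hedged sketch: combining modular ontologies into a federating frame.
# Namespaces, classes, and the bridging property are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS

ORG = Namespace("http://example.org/organization#")  # organization & business module
ENG = Namespace("http://example.org/engineering#")   # engineering (surrogates) module

org, eng = Graph(), Graph()
org.add((ORG.CarePathway, RDF.type, RDFS.Class))              # business process category
eng.add((ENG.PatientRecord, RDF.type, RDFS.Class))            # symbolic surrogate category
eng.add((ENG.PatientRecord, ORG.describes, ORG.CarePathway))  # bridge between modules

federated = Graph()
for module in (org, eng):        # modules keep their own semantics, the frame federates them
    for triple in module:
        federated.add(triple)

print(len(federated))  # 3 triples brought under one roof
```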

As a concluding remark, it should be borne in mind that the objective is to federate the activities and systems of healthcare players without interfering with the design of their business processes or supporting systems. Hence the importance of the distinction between ontologies and models introduced above, which acts as a guarantee that concerns are not mixed up insofar as ontologies remain uncommitted models.



Ontologies as Productive Assets

January 22, 2018


An often overlooked benefit of artificial intelligence has been a renewed interest in seminal philosophical and cognitive topics, with ontologies at the top of the list.

Ontological Questioning (The Thinker Monkey, Breviary of Mary of Savoy)

Yet that interest has often been led astray by misguided perspectives, in particular:

  • Universality: one-fits-all approaches are pointless if not self-defeating considering that ontologies are meant to target specific domains of concerns.
  • Implementation: the focus is usually put on representation schemes (e.g the Resource Description Framework, RDF), instead of the nature of targeted knowledge and the associated cognitive capabilities.

Those misconceptions, often combined, may explain the limited practical inroads of ontologies. Conversely, they also point to ontologies’ wherewithal for enterprises immersed into boundless and fluctuating knowledge-driven business environments.

Ontologies as Assets

Whatever the name of the matter (data, information, or knowledge), there isn’t much argument about its primacy for business competitiveness; insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable as financial ones if not more so, and should be managed accordingly. Pushing the comparison still further, data would be likened to liquidity, information to fixed income investment, and knowledge to capital ventures. To summarize, assets, whatever their nature, lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:

  • Digitized business flows accelerate data obsolescence and make it continuous.
  • Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.

But assessing the business value of knowledge has always been a matter of intuition rather than accounting, even when it can be patented; and most knowledge takes shape well beyond regulatory reach. Nonetheless, knowledge is not manna from heaven but the outcome of information processing, so assessing the capabilities of such processes could help.

Admittedly, traditional modeling methods are too stringent for that purpose, and looser schemes are needed to accommodate the open range of business contexts and concerns; as already expounded, that’s precisely what ontologies are meant to do, e.g:

  • Systems modeling,  with a focus on integration, e.g Zachman Framework.
  • Classifications, with a focus on range, e.g Dewey Decimal System.
  • Conceptual models, with a focus on understanding, e.g legislation.
  • Knowledge management, with a focus on reasoning, e.g semantic web.

And ontologies can do more than bringing under a single roof the whole of enterprise knowledge representations: they can also be used to nurture and crossbreed symbolic assets and develop innovative ones.

Ontologies Benefits

Knowledge is best understood as information put to use; accounting rules may be disputed but there is no argument about the benefits of a canny combination of information, circumstances, and purpose. Nonetheless, assessing knowledge returns is hampered by the lack of traceability: if a part of knowledge is explicit and subject to symbolic representation, another is implicit and manifests itself only through actual behaviors. At the philosophical level it’s the line drawn by Wittgenstein: “The limits of my language mean the limits of my world”; at the technical level it’s AI’s two-lane approach: symbolic rule-based engines vs non-symbolic neural networks; at the corporate level implicit knowledge is seen as some unaccounted-for aspect of intangible assets when not simply blended into corporate culture. With knowledge becoming a primary success factor, a more reasoned approach to its processing is clearly needed.

To begin with, symbolic knowledge can be plied by logic, which, quoting Wittgenstein again, “takes care of itself; all we have to do is to look and see how it does it.” That would be true on two conditions:

  • Domains are to be well circumscribed. 
  • A water-tight partition must be secured between the logic of representations and the semantics of domains.

That could be achieved with modular and specific ontologies built on a clear distinction between common representation syntax and specific domain semantics.

As for non-symbolic knowledge, its processing has long been overshadowed by the preeminence of symbolic rule-based schemes, that is until neural networks got the edge and deep learning overturned the playground. In a few years’ time, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, enterprises would need to understand, assess, and improve the refining machinery. That could be done with ontological frames.


Focus: Enterprise Architect Booklet

October 16, 2017


Given the diversity of business and organizational contexts, and with EA still a fledgling discipline, spelling out a job description for enterprise architects can be challenging.


Aligning business, organization, and systems perspectives (Hans Vredeman de Vries)

So, rather than looking for comprehensive definitions of roles and responsibilities, one should begin by circumscribing the key topics of the trade, namely:

  1. Concepts: eight exclusive and unambiguous definitions provide the conceptual building blocks.
  2. Models: how the concepts are used to analyze business requirements and design systems architectures and software artifacts.
  3. Processes: how to organize business and engineering processes.
  4. Architectures: how to align systems capabilities with business objectives.
  5. Governance: assessment and decision-making.

The objective is to define the core issues that need to be addressed by enterprise architects.


To begin with, the primary concern of enterprise architects should be to align organization, processes, and systems with enterprise business objectives and environment. For that purpose architects are to consider two categories of models:

  • Analysis models describe business environments and objectives.
  • Design models prescribe how systems architectures and components are to be developed.

Enterprise architects must focus on individuals (objects and processes) consistently identified (#) across business and system realms.

That distinction is not arbitrary but based on formal logic: analysis models are extensional as they classify actual instances of business objects and activities; in contrast, design models are intensional as they define the features and behaviors of required system artifacts.

The distinction is also organizational: as far as enterprise architecture is concerned, the focus is to remain on objects and activities whose identity (#) and semantics are to be continuously and consistently maintained across business (actual instances) and system (symbolic representations) realms:

Relevant categories at architecture level can be neatly and unambiguously defined.

  • Actual containers represent address spaces or time frames; symbolic ones represent authorities governing symbolic representations. Systems are actual realizations of symbolic containers managing symbolic artifacts.
  • Actual objects (passive or active) have physical identities; symbolic objects have social identities; messages are symbolic objects identified within communications. Power-types (²) are used to partition objects.
  • Roles (aka actors) are parts played by active entities (people, devices, or other systems) in activities (BPM), or, if it’s the case, when interacting with systems (UML’s actors). Not to be confused with agents, which are meant to be identified independently of their behavior.
  • Events are changes in the state of business objects, processes, or expectations.
  • Activities are symbolic descriptions of operations and flows (data and control) independently of supporting systems; execution states (aka modes) are operational descriptions of activities with regard to processes’ control and execution. Power-types (²) are used to partition execution paths.

Since the objective is to identify objects and behaviors at architecture level, variants, abstractions, or implementations are to be overlooked. It also ensues that the blueprints obtained remain general enough to be uniformly, consistently, and unambiguously translated into most modeling languages.
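For illustration, the categories above could be sketched along the following lines; the class names, fields, and the claim example are assumptions, the point being the single identity (#) maintained across business and system realms.

```python
# Hedged sketch (assumed names) of the concept categories above, with the shared
# identity (#) that must be maintained across business and system realms.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    CONTAINER = "container"   # address space / time frame, or symbolic authority
    OBJECT = "object"         # physical or social identity
    ROLE = "role"             # part played by an active entity
    EVENT = "event"           # change in the state of objects, processes, or expectations
    ACTIVITY = "activity"     # symbolic description of operations and flows

@dataclass(frozen=True)
class ArchitectureItem:
    identity: str             # the '#' consistently identifying items across realms
    category: Category
    actual: bool              # actual instance (business realm) vs symbolic representation (system realm)

claim_event = ArchitectureItem("#claim-42", Category.EVENT, actual=True)
claim_record = ArchitectureItem("#claim-42", Category.EVENT, actual=False)
assert claim_event.identity == claim_record.identity   # same identity maintained across realms
```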

Languages & Models

Enterprise architects may have to deal with a range of models depending on scope (business vs system) or level (enterprise and system vs domains and applications):

  • Business process modeling languages are used to associate business domains and enterprise organization.
  • Domain specific languages do the same between business domains and software components, bypassing enterprise organization and systems architecture.
  • Generic modeling languages like UML are supposed to cover the whole range of targets.
  • Languages like Archimate focus on the association between enterprise organization and systems functionalities.
  • Contrary to modeling languages, programming ones are meant to translate functionalities into software end-products. Some, like WSDL (Web Services Description Language), can be used to map EA into service oriented architectures (SOA).

Scope of Modeling Languages

While architects clearly don’t have to know the language specifics, they must understand their scope and purposes.


Whatever the languages, methods, or models, the primary objective is that architectures support business processes whenever and wherever needed. Except for standalone applications (for which architects are marginally involved), the way systems architectures support business processes is best understood with regard to layers:

  • Processes are solutions to business problems.
  • Processes (aka business solutions) induce problems for systems, to be solved by functional architecture.
  • Implementations of functional architectures induce problems for platforms, to be solved by technical architectures.

Enterprise architects should focus on the alignment of business problems and supporting systems functionalities

As already noted, enterprise architects are to focus on enterprise and system layers: how business processes are supported by systems functionalities and, more generally, how architecture capabilities are to be aligned with enterprise objectives.

Nonetheless, business processes don’t operate in a vacuum and may depend on engineering and operational processes, with regard to development for the former, deployment for the latter.


Enterprise architects should take a holistic view of business, engineering, and operational processes.

Given the crumbling of traditional fences between environments and IT systems under combined market and technological waves, the integration of business, engineering, and operational processes is to become a necessary condition for market analysis and reactivity to changes in business environments.


Blueprints being architects’ tool of choice, enterprise architects use them to chart how enterprise objectives are to be supported by systems capabilities; for that purpose:

  • On one hand they have to define the concepts used for the organization, business domains, and business processes.
  • On the other hand they have to specify, monitor, assess, and improve the capabilities of supporting systems.

In between they have to define the functionalities that will consolidate specific and possibly ephemeral business needs into shared and stable functions best aligned with systems capabilities.


The role of functional architectures is to map conceptual models to systems capabilities

As already noted, enterprise architects don’t have to look under the hood at the implementation of functions; what they must do is ensure continuous and comprehensive transparency between existing as well as planned business objectives and systems capabilities.


One way or the other, governance implies assessment, and for enterprise architects that means setting apart architectural assets and business applications:

  • Whatever their nature (enterprise organization or systems capabilities), the life-cycle of assets encompasses multiple production cycles, with returns to be assessed across business units. On that account enterprise architects are to focus on the assessment of the functional architecture supporting business objectives.
  • By contrast, the assessment of business applications can be directly tied to a business value within a specific domain, value which may change with cycles. Depending on induced changes for assets, adjustments are to be carried out through users’ stories (standalone, local impact) or use cases (shared business functions, architecture impact).

Enterprise architects deal with assets, business analysts with processes.

The difficulty of assessing returns for architectural assets is compounded by cross dependencies between business, engineering, and operational processes; and these dependencies may have a decisive impact for operational decision-making.

Business Intelligence & Decision-making

Embedding IT systems in business processes is to be decisive if business intelligence (BI) is to accommodate the ubiquity of digitized business processes and the integration of enterprises with their environments. That is to require a seamless integration of data analytics and decision-making:

Data analytics (sometimes known as data mining) is best understood as a refining activity whose purpose is to process raw data into meaningful information:

  • Data understanding gives form and semantics to raw material.
  • Business understanding charts business contexts and concerns in terms of objects and processes descriptions.
  • Modeling consolidates data and business understanding into descriptive, predictive, or operational models.
  • Evaluation assesses and improves accuracy and effectiveness with regard to objectives and decision-making.

Decision-making processes in open and digitized environment are best described with the well established OODA (Observation, Orientation, Decision, Action) loop:

  1. Observation: understanding of changes in business environments (aka territories).
  2. Orientation: assessment of the reliability and shelf-life of pertinent information (aka maps) with regard to current positions and operations.
  3. Decision: weighting of options with regard to enterprise capabilities and broader objectives.
  4. Action: carrying out of decisions within the relevant time-frame.

Seamless integration of data analytics and decision-making.

Along that perspective data analytics and decision-making can be seen as the front-offices of business intelligence, and  knowledge management as its back-office.
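As a toy illustration of that integration, the sketch below wires an analytics-fed observation into the OODA loop; every function body is a placeholder assumption, only the loop structure follows the description above.

```python
# Toy sketch of the OODA loop fed by a data-analytics front office; all function
# bodies are placeholder assumptions, only the loop structure follows the text.
def observe(environment):
    return {"raw": environment.get("signals", [])}            # changes in business territories

def orient(observation, maps):
    freshness = maps.get("shelf_life", 0)                     # reliability / shelf-life of maps
    return {"information": observation["raw"], "freshness": freshness}

def decide(orientation, capabilities):
    options = [o for o in orientation["information"] if o in capabilities]
    return options[0] if options else None                    # weight options vs capabilities

def act(decision):
    if decision is not None:
        print(f"carrying out: {decision}")                    # within the relevant time-frame

def ooda_cycle(environment, maps, capabilities):
    act(decide(orient(observe(environment), maps), capabilities))

ooda_cycle({"signals": ["price_drop", "new_regulation"]},
           {"shelf_life": 30},
           {"price_drop"})
```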

More generally that scheme epitomizes the main challenge of enterprise architects, namely the continuous and dynamic alignment of enterprise organization and systems to market environment, business processes, and decision-making.


On The Holistic Nature of MBSE

October 1, 2017


Interestingly, variants of MBSE/MDSE acronyms put the focus on the duality of the concept, software on one side, systems on the other.

MBSE is by nature a two-faced endeavor (Sand Painting Navajo Rug)

As that duality operates for models, systems, and organizations, MBSE offers a holistic view on enterprise architecture.

Models and Software

Models are symbolic representations of actual contexts in line with specific purposes: requirements analysis, simulation, software design, etc. Software is a subset of models characterized by target (computer code) and language (executable instructions). Based on that understanding, MBSE should not be limited to DSLs silos and code generation but employed to bring together and manage the whole range of concerns and artifacts.

Systems and Applications

The hapless track record of Waterfall and the parallel ascent of Agile have clouded the grounds for phased development processes. But whereas agile schemes are the default option when applications can be developed independently, external dependencies prevent their scaling up to system level. That’s when system engineering takes precedence over application development, with MBSE introduced to manage shared models and support collaboration between teams.

Organization and Projects

As epitomized by agile development models, projects can be driven by specific business needs or shared architecture capabilities. Whereas the former are best carried out iteratively by autonomous teams sharing skills and responsibility, the latter entail collaboration between organizational units over time. MBSE provides the link between standalone projects, phased processes, and enterprise organization.

MBSE provides a holistic view of organisations and systems.

By providing a holistic view of changes in organizations, systems, and software, MBSE should be a key component of enterprise architecture.



Transcription & Deep Learning

September 17, 2017

Humans looking for reassurance against the encroachment of artificial brains should try YouTube subtitles: whatever Google’s track record in natural language processing, the way its automated scribe writes down what is said in the movies is essentially useless.

A blank sheet of paper was copied on a Xerox machine.
This copy was used to make a second copy.
The second to make a third one, and so on…
Each copy as it came out of the machine was re-used to make the next.
This was continued for one hundred times, producing a book of one hundred pages. (Ian Burn)

Experience directly points to the probable cause of failure: the usefulness of real-time transcriptions is not a linear function of accuracy, because every slip can be fatal, with no backup or second chance. It’s like walking a line: for all practical purposes a single misunderstanding can break the thread of understanding, without a chance of retrieval or reprieve.

Contrary to Turing machines, listeners have no finite set of states; and contrary to the sequence of symbols on tapes, tales are told by weaving together semantic threads. It ensues that stories are works in progress: readers can pause to review and consolidate meanings, but listeners have no other choice than to punt on what comes to their mind, hoping that the fabric of the story will carry them through.

So, whereas automated scribes can deep learn from written texts and recorded conversations, there is no way to do the same from what listeners understand. That’s the beauty of story telling: words may be written but meanings are renewed each time the words are heard.


The Agility of Words

July 9, 2017


Oral cultures come with implicit codes for the repetition of words and sentences, making room for some literary hide-and-seek between the storyteller and his audience.


Stories Waiting to Happen

Could such narrative schemes be employed for users’ stories, so as to open out the dialog between users (the storytellers) and business analysts (the listeners)?

Open Storytelling

To begin with fiction, authors are meant to tell stories for readers ready to believe them at least while they are reading.

For young readers yet unable to suspend their disbelief, laser-disc games of the last century already gave post-toddlers a free hand to play with narratives.

But when the same scheme has been tried with grown-ups it has fizzled out: what would be the point of buying a story if you have to make it yourself? The answer of agile business analysts is that users’ stories may be more pliable than budgets.

Tell Once, Tell Twice, Think Again

That’s what has just happened to “Hamlet on the Holodeck: The Future of Narrative in Cyberspace”, first published by Janet H. Murray twenty years ago with qualified ado, and now making a new debut, unedited yet clever as ever. That suggests both an observation and an interrogation.

For one, and notwithstanding readers’ consideration, a good story, fiction or otherwise, remains a good story, which may be better appreciated in different circumstances. Then, considering the weighty mutation of circumstances since the book’s first appearance, the interrogation is about probable cause: should the origin of the rebirth be looked for in technological advances, in the minds of readers of that specific (non-fiction) book, or in the readiness of (fiction) book readers to collaborate in story building?

Alternates in Narrative

As probable cause for new narrative ways, technology obviously comes first due to its means to change the relationship between readers and stories: breakthroughs in artificial intelligence, deep-learning, and computational linguistics have opened paths barely conceivable twenty years ago.

As a collateral effect of the technological revolution, opportunity may explain the renewed interest of Janet Murray’s likely readers: issues that were hardly broached before the initial publishing are now routinely mooted in the literati cognosphere.

Finally, on a broader social perspective, changes may have altered the motivation of fiction aficionados, bringing new relevancy to Janet Murray’s intuitions: as farcically illustrated by the uncritical audiences for alternative facts, the perception of reality may have been transformed by the utter sway of social networks.

Back to a literary perspective, the evidence seems to point to the status of stories with regard to reality:

  • When embedded in games, stories don’t pretend to anything. On that ground changes are driven by players’ decisions regarding events or characters’ options that only affect the narratives of a plot defined upfront.
  • When set as fictions, stories, however preposterous, are meant to stand on their own ground. The meanings given to events and options are constitutive of the plot, and readers’ decisions are driven by their understanding of facts and behaviors.

So, Google’s AlphaGo may have overturned the grounds for the first category, but stories are not games and the only variants that count are the ones affecting understanding. More so for stories that use fictional realities to tell what should be.

Heed & Lead in Users’ Stories

Users’ stories are the agile answer to the challenge of elusive requirements. Definitively a cornerstone of the agile approach to software engineering, users’ stories are meant to deal with the instability of requirements, in contours as well as detours.

With regard to contours, users’ stories explore the space of requirements through successive iterations rooted into clearly identified users’ needs. Whereas the backbone (the plot) is set by stakeholders (the authors), the scope doesn’t have to be revealed upfront but can be progressively discovered through interactions between users (the storytellers) and analysts (the listeners).

But detours are where alternates in narratives may really prove themselves by helping to adjust users’ needs (the narratives) to business objectives (the plot). As a consequence, changes suggested by analysts should not be limited to users’ options and ergonomics but may also concern the meaning of facts and behaviors. Along that reasoning users’ stories would use the agility of words to align the meanings of new business applications with the ones set by business functionalities already supported by systems.


Unified Architecture Framework Profile (UAFP): Lost in Translation ?

July 2, 2017


The intent of Unified Architecture Framework Profile (UAFP) is to “provide a Domain Meta-model usable by non UML/SysML tool vendors who may wish to implement the UAF within their own tool and metalanguage.”

Detached Architecture (Víctor Enrich)

But a meta-model trying to federate (instead of bypassing) the languages of tools providers has to climb up the abstraction scale above any domain of concerns, in that case systems architectures. Without direct consideration of the domain, the missing semantic content has to be reintroduced through stereotypes.

Problems with that scheme appear at two critical junctures:

  • Between languages and meta-models, and the way semantics are introduced.
  • Between environments and systems, and the way abstractions are defined.

Caminao’s modeling paradigm is used to illustrate the alternative strategy, namely the direct stereotyping of systems architectures semantics.

Languages vs Stereotypes

Meta-Models are models of models: just like artifacts of the latter represent sets of instances from targeted domains, artifacts of the former represent sets of symbolic artifacts from the latter. So while set higher on the abstraction scale, meta-models still reflect the domain of concerns.

Meta-models take a higher view of domains, meta-languages don’t.

Things are more complex for languages because linguistic constructs (syntax and semantics) and pragmatics are meant to be defined independently of domains of discourse. Taking a simple example from the model above, it contains two kinds of relationships:

  • Linguistic constructs:  represents, between actual items and their symbolic counterparts; and inherits, between symbolic descriptions.
  • Domain specific: played by, operates, and supervises.

While meta-models can take into account both categories, that’s not the case for languages, which only consider linguistic constructs and mechanisms. Stereotypes often appear as a painless way to span the semantic fault between what meta-models have to do and what languages are meant to do; but that is misguided because mixing domain-specific semantics with language constructs can only breed confusion.
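A minimal sketch of that distinction, with assumed namespaces and names: language-level connectors (represents, inherits) and domain-specific associations (played by, operates, supervises) are kept apart, so that a meta-model can account for both while a language only sees the former.

```python
# Hedged sketch: linguistic constructs vs domain-specific relationships,
# kept in separate namespaces (all URIs and names are assumptions).
from rdflib import Graph, Namespace

LANG = Namespace("http://example.org/language#")   # linguistic constructs
DOM = Namespace("http://example.org/domain#")      # domain-specific semantics

g = Graph()
g.add((DOM.Employee, LANG.represents, DOM.ActualPerson))  # symbolic counterpart of an actual item
g.add((DOM.Manager, LANG.inherits, DOM.Employee))         # between symbolic descriptions
g.add((DOM.Manager, DOM.supervises, DOM.Employee))        # domain-specific relationships
g.add((DOM.Employee, DOM.operates, DOM.Machine))
g.add((DOM.Manager, DOM.playedBy, DOM.ActualPerson))

# A meta-model can account for both categories; a language should only see LANG.*
language_level = [t for t in g if str(t[1]).startswith(str(LANG))]
print(len(language_level))  # 2
```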

Stereotypes & Semantics

If profiles and stereotypes are meant to refine semantics along domain specifics, trying to conciliate UML/SysML languages and non UML/SysML models puts UAFP in a lopsided position by looking the other way, i.e towards a one-fits-all meta-language instead of systems architecture semantics. Its way out of this conundrum is to combine stereotypes with UML constraints, as can be illustrated with PropertySet:

UAFP for PropertySet (italics are for abstract)

Behind the mixing of meta-modeling levels (class, classifier, meta-class, stereotype, meta-constraint) and the jumble of joint modeling concerns (property, measurement, condition), the PropertySet description suggests the overlapping of two different kinds of semantics, one looking at objects and behaviors identified in environments (e.g asset, capability, resource), the other focused on systems components (property, condition, measurement). But using stereotypes indifferently for both kinds of semantics has consequences.

Stereotypes, while being the basic UML extension mechanism, come without much formalism and can be applied extensively. As a corollary, their semantics must be clearly defined in line with the context of their use, in particular for meta-languages topping different contexts.

PropertySet, for example, is defined as an abstract element equivalent to a data type, simple or structured: a straightforward semantics that can be applied consistently across contexts, domains, or languages.

That’s not the case for ActualPropertySet, which is defined as an InstanceSpecification for a “set or collection of actual properties”. But properties defined for domains (as opposed to languages) have no instances of their own and can only occur as concrete states of objects, behaviors, or expectations, or as abstract ranges in conditions or constraints. And semantic ambiguities are compounded when inheritance is indifferently applied between a motley of stereotypes.

Properties epitomize the problems brought about by confusing language and domain stereotypes and point to a solution.

To begin with syntax, stereotypes are redundant because properties can be described with well-known language constructs.

As for semantics, stereotyped properties should meet clearly defined purposes; as far as systems architectures are concerned, that would be the mapping to architecture capabilities:

Property must be stereotyped with regard to induced architecture capabilities.

  • Properties that can be directly and immediately processed, symbolic (literal) or not (binary objects).
  • Properties whose processing depends on external resource, symbolic (reference) or not (numeric values).

Such stereotypes could be safely used at language level due to the homogeneity of property semantics. That’s not the case for objects and behaviors.

Languages Abstractions & Symbolic Representations

The confusion between language and domain semantics mirrors the one between enterprise and systems, as can be illustrated by UAFP’s understanding of abstraction.

In the context of programming languages, isAbstract applies to descriptions that are not meant to be instantiated: for UAFP, “PhysicalResource” isAbstract because it cannot occur except as “NaturalResource” or “ResourceArtifact”, neither of which isAbstract.

“isAbstract” has no bearing on horses and carts, only on the meaning of the class PhysicalResource.

Despite the appearances, it must be borne in mind that such semantics have nothing to do with the nature of resources, only with what can be said about them. In any case the distinction is irrelevant as long as the only semantics considered are confined to specification languages, which is the purpose of the UAFP.
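In plain Python terms (a hedged sketch, not UAFP’s own notation), the same point reads as follows: the abstract class constrains its own description, which cannot be instantiated, not the horses and carts it may refer to.

```python
# Sketch of the point above: 'isAbstract' bears on the class description,
# not on the resources it refers to. Names follow the UAFP example in the text.
from abc import ABC, abstractmethod

class PhysicalResource(ABC):          # isAbstract: only occurs as one of its subtypes
    @abstractmethod
    def kind(self) -> str: ...

class NaturalResource(PhysicalResource):
    def kind(self) -> str:
        return "natural"

class ResourceArtifact(PhysicalResource):
    def kind(self) -> str:
        return "artifact"

horse, cart = NaturalResource(), ResourceArtifact()   # concrete descriptions can be instantiated
# PhysicalResource()  # would raise TypeError: can't instantiate abstract class
```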

As that’s not true for enterprise architects, confusion is bound to arise when the modeling paradigm is extended so as to include environments and their association with systems. Then, not only are two kinds of instances (and therefore abstractions) to be described, but the relationship between external and internal instances is to determine systems architecture capabilities. Extending the simple example above:

  • Overlooking the distinction between active and passive physical resources prevents a clear and reliable mapping to architecture technical capabilities.
  • Organizational resource lumps together collective (organization), individual and physical (person), individual and organizational (role), and symbolic (responsibility) resources. But these distinctions have direct consequences for architecture functional capabilities.

Abstraction & Symbolic representation

Hence the importance of the distinction between domain and language semantics, the former for the capabilities of the systems under consideration, the latter for the capabilities of the specification languages.

Systems Never Walk Alone

Profiles are supposed to be handy, reliable, and effective guides for the management of specific domains, in that case the modeling of enterprise architectures. As it happens, the UAF profile seems to set out the other way, forsaking architects’ concerns for tools providers’ ones; that can be seen as a lose-lose venture because:

  • There isn’t much for enterprise architects along that path.
  • Tools interoperability would be better served by a parser focused on languages semantics independently of domain specifics.

Hopefully, new thinking about architecture frameworks (e.g DoDAF) tends to restyle them as EA profiles, which may help to reinstate basic requirements:

  • Explicit modeling of environment, enterprise, and systems.
  • Clear distinction between domain (enterprise and systems architecture) and languages.
  • Unambiguous stereotypes with clear purposes.

A simple profile for enterprise architecture

On a broader perspective understanding meta-models and profiles as ontologies would help with the alignment of purposes (enterprise architects vs tools providers), scope (enterprise vs systems), and languages (modeling vs programming).

Back to Classics: Ontologies

As introduced long ago by philosophers, ontologies are meant to make sense of universes of discourse. To be used as meta-models and profiles, ontologies must remain neutral and support representation and content semantics independently of domains of concern or perspective.

With regard to neutrality, the nature of semantics should tally with the type of nodes (top):

  • Nodes would represent elements specific to domains (bottom right).
  • Connection nodes would be used for semantically neutral (aka syntactic) associations to be applied uniformly across domains (bottom left).

That can be illustrated with the simple example of cars:


RDF graphs (top) support formal (bottom left) and domain specific (bottom right) semantics.

With regard to contexts, ontologies should be defined according to the nature of governance and stability:

  • Social: No authority, volatile, continuous and informal changes.
  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, periodic, changes subject to internal decision-making.
  • Personal: Customary, defined by named individuals (e.g research paper).

Ontologies set along that taxonomy could also be refined so as to be aligned with enterprise architecture layers: enterprise, systems, platforms, e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

With regard to concerns, ontologies should focus on the epistemic nature of targeted items: terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:

  • Thesaurus: ontologies covering terms and concepts.
  • Document Management: ontologies covering documents with regard to topics.
  • Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
  • Engineering: ontologies pertaining to the symbolic representation of products and services.

Ontologies: Purposes & Targets

More generally, understanding meta-models and profiles as functional ontologies is to bring all EA business and engineering concerns within a comprehensive and consistent conceptual framework.


Views, Models, & Architectures

May 27, 2017


Views can take different meanings, from windows opening on specific data contexts (e.g DB relational theory), to assortments of diagrams dedicated to particular concerns (e.g UML).

Deconstructing the Universe along Contexts and Concerns (Depero Fortunato)

Models for their part have also been understood as views, on DB contents as well as on systems’ architecture and components, the difference being the focus put on engineering. Due to their association with phased processes, models have been relegated to the back-burner by agile approaches; yet they may resurface in terms of granularity with model-based engineering frameworks.

Views & Architectures

As far as systems engineering is concerned, understandings of views usually refer to Philippe Kruchten’s “4+1” View Model of Software Architecture:

  • Logical view: design of software artifacts.
  • Process view: captures the concurrency and synchronization aspects.
  • Physical view: describes the mapping(s) of software artifacts onto hardware.
  • Development view: describes the static organization of software artifacts in development environments.

A fifth is added for use cases describing the interactions between systems and business environments.

Whereas these views have been originally defined with regard to UML diagrams, they may stand on their own meanings and merits, and be assessed or amended as such.

Apart from labeling differences, there isn’t much to argue about use cases (for requirements), process (for operations), and physical (for deployment) views; each can be directly associated to well identified parts of systems engineering that are to be carried out independently of organizations, architectures or methods.

Logical and development views raise more questions because they imply a distinction between design and implementation. That implicit assumption induces two kinds of limitations:

  • They introduce a strong bias toward phased approaches, in contrast to agile development models that combine requirements, development and acceptance into iterations.
  • They classify development processes with regard to predefined activities, overlooking a more critical taxonomy based on objectives, architectures and life-cycles: user driven and short-term (applications ) vs data-based and long-term (business functions).

These flaws can be corrected if logical and development views are redefined respectively as functional and application views, the former targeting business objects and functions, the latter business logic and users’ interfaces.

Architecture based views

That makes views congruent with architecture levels and consequently with engineering workshops. More importantly, since workshops make possible the alignment of products with work units, they are a much better fit for model-based engineering and a shift from a procedural to a declarative paradigm.

Model-based Systems Engineering & Granularity

At least in theory, model-based systems engineering (MBSE) should free developers from one-fits-all procedural schemes and support iterative as well as declarative approaches. In practice that would require matching tasks with outcomes, which could be done if responsibilities on the former can be aligned with models granularity of the latter.

With coarse-grained phased schemes like MDA’s CIM/PIM/PSM (a), dependencies between tasks would have to be managed with regard to a significantly finer artifacts’ granularity.

Managing changes at architecture (a) or application (b) level.

For agile schemes, assuming conditions on shared ownership and continuous deliveries are met, projects would put locks on “models” at both ends (users’ stories and deliveries) of development cycles (b), with backlogs items defining engineering granularity.

Backlogs mechanism can be used to manage customized granularity and hierarchical dependencies across model layers

From the enterprise perspective it would be possible to unify the management of changes in architectures across layers and responsibilities: business concepts and organization, functional architecture, and systems capabilities:


Functional architecture as symbolic bridge between business needs and system capabilities.

From the engineering perspective it would be possible to unify the management of changes in artifacts at the appropriate level of granularity: static and explicit using milestones (phased), dynamic and implicit using backlogs (agile).

Fine grained model based frameworks could support phased as well as agile development solutions

Such a declarative repository would greatly enhance exchanges and integration across projects  and help to align heterogeneous processes independently of the methodologies used.


Focus: Business Cases for Use Cases

February 27, 2017


As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn stays with UCs as defined by Jacobson but refines them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow a seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks, because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users’ concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant because mapping business rules with applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels”, BUCs are to remain nondescript practices. As for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.

Broader enterprise and business considerations can then be documented depending on scope.


NIEM & Information Exchanges

January 24, 2017


The objective of the National Information Exchange Model (NIEM) is to provide a “dictionary of agreed-upon terms, definitions, relationships, and formats that are independent of how information is stored in individual systems.”

NIEM’s model makes no difference between data and information (Alfred Jensen)

For that purpose NIEM’s model combines commonly agreed core elements with community-specific ones. Weighted against the benefits of simplicity, this architecture overlooks critical distinctions:

  • Inputs: Data vs Information
  • Dictionary: Lexicon and Thesaurus
  • Meanings: Lexical Items and Semantics
  • Usage: Roots and Aspects

That shallow understanding of information significantly hinders the exchange of information between business or institutional entities across overlapping domains.

Inputs: Data vs Information

Data is made of unprocessed observations, information makes sense of data, and knowledge makes use of information. Given that NIEM is meant to be an exchange between business or institutional users, it should have no concern with data mining or knowledge management.


The problem is that, as conveyed by “core of data elements that are commonly understood and defined across domains, such as person, activity, document, location”, NIEM’s model makes no explicit distinction between data and information.

As a corollary, it implies that data may not only be meaningful, but universally so, which leads to a critical trap: as substantiated by data analytics, data is not supposed to mean anything before being processed into information; to keep with the examples, even if the definition of persons and locations may not be specific, the semantics of the associated information is nonetheless set by domains, institutional, regulatory, contractual, or otherwise.

Data is meaningless, information meaning is set by semantic domains.

Not surprisingly, that medley of data and information is mirrored by NIEM’s dictionary.

Dictionary: Lexicon & Thesaurus

As far as languages are concerned, words (e.g “word”, “ξ∏¥”, “01100”) remain data items until associated with some meaning. For that reason dictionaries are built on different levels, first among them lexical and semantic ones:

  • Lexicons take items at their word and give each of them a self-contained meaning.
  • Thesauruses position meanings within overlapping galaxies of understandings held together by the semantic equivalent of gravitational forces; the meaning of words can then be weighted by the combined semantic gravity of their neighbors.

In line with its shallow understanding of information, NIEM’s dictionary only caters for a lexicon of core standalone items associated with type descriptions to be directly implemented by information systems. But due to the absence of a thesaurus, the dictionary cannot tackle the semantics of overlapping domains: if lexicons alone can deal with one-to-one mappings of items to meanings (a), thesauruses are necessary for shared (b) or alternative (c) mappings.


Shared or alternative meanings cannot be managed with lexicons

With regard to shared mappings (b), distinct lexical items (e.g qualification) have to be mapped to the same entity (e.g person). Whereas some shared features (e.g person’s birth date) can be unequivocally understood across domains, most are set through shared (professional qualification), institutional (university diploma), or specific (enterprise course) domains .

Conversely, alternative mappings (c) arise when the same lexical items (e.g “mole”) can be interpreted differently depending on context (e.g plastic surgeon, farmer, or secret service).

Whereas lexicons may be sufficient for the use of lexical items across domains (namespaces in NIEM parlance), thesauruses are necessary if meanings (as opposed to uses) are to be set across domains. But thesauruses, being just tools, are not sufficient by themselves to deal with overlapping semantics. That can only be achieved through a conceptual distinction between lexical and semantic envelopes.
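A hedged sketch of the difference, with made-up entries: the lexicon holds one self-contained meaning per item, while the thesaurus supports the shared (b) and alternative (c) mappings described above.

```python
# Illustrative sketch only (entries are made up): lexicon vs thesaurus.

# (a) lexicon: one-to-one mapping of items to self-contained meanings
lexicon = {"person": "individual human being"}

# (b)/(c) thesaurus: meanings positioned within overlapping domains
thesaurus = {
    "qualification": {                      # shared mapping: distinct domains, same entity (person)
        "professional": "certified skill",
        "institutional": "university diploma",
        "corporate": "enterprise course",
    },
    "mole": {                               # alternative mappings: same item, different domains
        "plastic surgeon": "skin blemish",
        "farmer": "burrowing animal",
        "secret service": "undercover agent",
    },
}

def meaning(item: str, domain: str) -> str:
    # weight the item by its semantic neighborhood (here, a simple lookup by domain)
    return thesaurus.get(item, {}).get(domain, lexicon.get(item, "unknown"))

print(meaning("mole", "farmer"))            # burrowing animal
```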

Meanings: Lexical Items & Semantics

NIEM’s dictionary organizes names depending on namespaces and relationships:

  • Namespaces: core (e.g Person) or specific (e.g Subject/Justice).
  • Relationships: types (Counselor/Person) or properties (e.g PersonBirthDate).

NIEM’s Lexicon: Core (a) and specific (b) and associated core (c) and specific (d) properties

But since lexicons know only names, the organization is not orthogonal, with lexical items mapped indifferently to types and properties. The result is that, deprived of reasoned guidelines, lexical items are chartered arbitrarily, e.g:

Based on core PersonType, the Justice namespace uses three different schemes to define similar lexical items:

  • “Counselor” is described with core PersonType.
  • “Subject” and “Suspect” are both described with specific SubjectType, itself a sub-type of PersonType.
  • “Arrestee” is described with specific ArresteeType, itself a sub-type of SubjectType.

Based on core EntityType:

  • The Human Services namespace bypasses core’s namesake and introduces instead its own specific EmployerType.
  • The Biometrics namespace bypasses possibly overlapping core Measurer and BinaryCaptured and directly uses core EntityType.

Lexical items are chartered arbitrarily

Lest expanding lexical items clutter up dictionary semantics, some rules have to be introduced; yet, as noted above, these rules should be limited to information exchange and stop short of knowledge management.

Usage: Roots and Aspects

As far as information exchange is concerned, dictionaries have to deal with lexical and semantic meanings without encroaching on ontologies or knowledge representation. In practice that can be best achieved with dictionaries organized around roots and aspects:

  • Roots and structures (regular, black triangles) are used to anchor information units to business environments, source or destination.
  • Aspects (italics, white triangles) are used to describe how information units are understood and used within business environments.

Information exchanges are best supported by dictionaries organized around roots and aspects

As it happens that distinction can be neatly mapped to core concepts of software engineering.
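As a minimal sketch (structure and names assumed, borrowing the Person root and Justice usage from the examples above), a dictionary entry organized that way would pair a root anchored to its source environment with domain-specific aspects:

```python
# Hedged sketch (assumed structure) of a dictionary entry built around a root
# anchored to its source environment, plus domain-specific aspects.
from dataclasses import dataclass, field

@dataclass
class InformationUnit:
    root: str                                    # anchors the unit to a business environment
    source: str                                  # environment of origin (source or destination)
    aspects: dict = field(default_factory=dict)  # how the unit is understood within each domain

person = InformationUnit(root="Person", source="core")
person.aspects["Justice"] = "Subject"            # domain-specific usage of the core root
person.aspects["HumanServices"] = "Employer"
print(person)
```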

P.S. Thesauruses & Ontologies

Ontologies are systematic accounts of existence for whatever is considered, in other words some explicit specification of the concepts meant to make sense of a universe of discourse. From that starting point three basic observations can be made:

  1. Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

With regard to models, only the second one puts ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes.

On that basis, ontologies can be understood as thesauruses describing galaxies of concepts (stars) and features (planets) held together by semantic gravitation weighted by similarity or proximity. As such ontologies should be NIEM’s tool of choice.


