Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight for abstraction, when business analysts should instead look for the proper level of representation, i.e. the one with the best fit to business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

Abstractions & Emerging Architectures

March 1, 2014

Objective

Modeling is all too often a flight for abstraction when analysts should instead get their bearings and look for the proper level of representation, i.e. the one best fitting their concerns. As a consequence, many debates that seem baffling when revolving around abstraction levels may suddenly clear up when reset in terms of artifacts and symbolic representations.

Models, artifacts, and the emergence of design (R. Magritte)

That is especially the case for enterprise architectures which, contrary to system ones, cannot be reduced to planned design but seem to emerge from a mix of cultural sediments, economic factors, technology constraints, and planned designs.

Hence the need to understand the relationships between enterprise contexts, organization, and processes on one hand, and their symbolic counterpart as systems objects on the other hand.

Artifacts & Models

When architectures are considered, a distinction should first be made between artifacts (e.g. buildings) and models (blueprints), the former being manufactured objects designed and built on purpose, the latter symbolic artifacts reflecting those purposes and how to meet them.

Blueprints are used to design and build physical objects according to purposes.

That distinction between artifacts and symbolic descriptions is easy to make for physical objects built on plans, less so for symbolic objects, which are artifacts in their own right and as such are begotten from symbolic descriptions. In other words, symbolic artifacts crop up as designs as well as final products.

Symbolic artifacts have to be designed before being implemented as objects of their own.

Moreover, artifacts being used in contexts, their description must also include modus operandi. For enterprises that would mean business objectives, organization, and processes.

Business process: how to use artifacts and manage associated information.

Enterprises, whose architectures combine actual contexts and activities with their symbolic counterpart in systems, will therefore have to be described by two kinds of models:

  • Models of business contexts and processes describe current or planned objects and activities.
  • Models of symbolic representations describe associated symbolic objects used to store, process, or exchange information.

Models are used to describe actual or symbolic objects and behaviors

That distinction can be mapped to the one between enterprise and systems architectures: one set of models deals with enterprise objectives, assets, and organization, the other one deals with system components used as symbolic surrogates.

Architecture & Design

Architecture and design may have a number of overlapping features yet they clearly differ with regard to software: contrary to architecture, software design is meant to fully describe how to implement system components. That difference is especially meaningful for enterprise architecture:

  • At enterprise level models are used to describe objects and activities from a business perspective, independently of their representation by system components. Whatever the nature of targeted objects and activities (physical or symbolic, current or planned), models are meant to describe business units (actual or required) identified and managed at enterprise level.
  • At system level models are used to describe software components. Given that systems are meant to represent business contexts and support business processes, their architecture has to be aligned on the units managed at enterprise level.

Assuming that functional, persistency, and execution units must be uniquely and consistently identified at both enterprise and systems level, their respective models have to share some common infrastructure.

Architecture models overlap for enterprise and systems, design models are only used for systems.
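
As an illustration, that common infrastructure could be reduced to a registry of unit identifiers shared by both kinds of models. The Python fragment below is only a schematic sketch of the principle; all names and structures are invented.

```python
# A minimal sketch, with invented names: a registry of uniquely identified
# architecture units shared by enterprise and system models.
BACKBONE = {}  # unit identifier -> kind of architecture unit

def declare_unit(uid: str, kind: str) -> str:
    """Register a unit once; every model must reference the same identifier."""
    assert kind in ("functional", "persistency", "execution")
    BACKBONE.setdefault(uid, kind)
    return uid

# Enterprise model: a business object identified at enterprise level ...
CUSTOMER_UNIT = declare_unit("Customer", "persistency")

# ... and its system counterpart, anchored to the same identifier.
class CustomerRecord:
    unit = CUSTOMER_UNIT  # the shared identifier secures alignment
```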

The overlapping of models with regard to enterprise and systems architectures, and their yoking into systems design, determine the background of architecture transformations.

Abstractions and Changes

If some continuity is to be maintained across architecture mutations, modeling abstractions are needed to frame and consolidate changes at both enterprise and system levels.

From the enterprise standpoint the primary factor is the continuity and consistency of corporate identity and activities. For that purpose abstractions will have to target functional, persistency, and execution units. Definitions of those abstract units will provide the backbone of enterprise architecture (a). That backbone can then be independently fleshed out with features, provided identified structures of objects and activities are not affected (b).

From the systems standpoint the objective is the alignment of system and enterprise units on one hand, the effectiveness of technical architecture on the other hand. For that purpose abstract architecture units (reflecting enterprise units) are mapped to system units (c), whose design will be carried on independently (d).

Identified enterprise units are consolidated into abstract units (a), detailed (b), consolidated into systems units (c) to be further designed (d).

That should determine the right level of abstraction, namely when corresponding abstract units can be used to align enterprise and systems ones.

Once securely locked to a common architecture backbone, enterprise and system models can be expanded according to their respective concerns, business and organization for the former, technology and platforms implementation for the latter. On that basis primary changes can be analyzed in terms of specialization and extension.

Specialization will change the local features of enterprise or systems units without affecting their identification or semantics at architecture level:

  • With regard to enterprise, entry points (a1), features (a2), business rules (a3), and control rules (a4) will be added, modified or removed.
  • With regard to systems, designs will be modified or new ones introduced in response to changes in enterprise or technological environments.

Basic architectural changes (enterprise level)

Contrary to specialization, architecture extension changes enterprise or systems units in ways affecting their identification, semantics or implementation at architecture level:

  • With regard to enterprise, entry points locations (b1), semantic domains (b2), business applications (b3), and processes (b4) will be added, modified, or removed.
  • With regard to systems, changes in platforms implementations following new technologies or operational requirements.

Hence, while specialization will not affect the architecture backbone, that’s not the case for extension. More critically, the impact of extensions may not be limited to basic changes to backbones as inheritance may also affect the identification mechanisms and semantics of existing units. That happens when abstract descriptions are introduced for aspects that cannot be identified on their own but only when associated to some identified object or behavior.

That can be illustrated by a banking example of a transition from account-based to customer-based management (a code sketch follows the figure):

  1. To begin with, let’s assume a single process for accounts, with customers represented as aspects of accounts.
  2. Then, in order to support customers relationship management, customers become entities of their own, identified independently of accounts.
  3. Finally, roles and types are introduced as abstract descriptions (not identified on their own) in order to characterize actual parties (customer, supplier, etc) and accounts (current, savings, insurance, etc).

When architectures grow extension can change identification mechanisms and semantics (italics are for abstract descriptions)
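
The three stages of the banking example can be rendered as a Python sketch; class names are hypothetical, and abstract base classes stand in for the abstract descriptions of the last stage.

```python
# A minimal sketch of the three stages, with hypothetical class names.
from abc import ABC

# Stage 1: accounts are the only identification units; customers are mere aspects.
class AccountV1:
    def __init__(self, number: str, customer_name: str):
        self.number = number                # identity set by account number
        self.customer_name = customer_name  # customer as an aspect, not an entity

# Stage 2: customers become entities identified independently of accounts.
class CustomerV2:
    def __init__(self, customer_id: str, name: str):
        self.customer_id = customer_id      # identity of its own
        self.name = name

class AccountV2:
    def __init__(self, number: str, holder: CustomerV2):
        self.number = number
        self.holder = holder                # association between identified units

# Stage 3: roles and types as abstract descriptions, never identified on their own.
class PartyRole(ABC): pass
class CustomerRole(PartyRole): pass
class SupplierRole(PartyRole): pass

class AccountType(ABC): pass
class CurrentAccount(AccountType): pass
class SavingsAccount(AccountType): pass

class PartyV3:
    def __init__(self, party_id: str, roles: list):
        self.party_id = party_id
        self.roles = roles                  # abstract descriptions attached to an identified party
```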

That modeling shift from concrete to abstract descriptions can be seen as the hinge connecting changes in systems and enterprise architectures.

Eppur si muove

As enterprises grow and extend, architectures become more complex and have to be supported by symbolic representations of whatever is needed for their management: assets, roles, activities, mechanisms, etc. As a consequence, models of enterprise architectures have to deal with two kinds of targets, actual assets and processes on one hand, their symbolic representation as system objects on the other hand.

This apparent symmetry can be misleading as the former models are meant to reflect a reality while the latter are used to produce one. In other words there is no guarantee that their alignment can be comprehensively and continuously maintained. Yet, as Galileo purportedly said of the Earth circling the Sun despite models to the contrary, it moves. So, what are the primary factors behind moves in enterprise architectures ?

What moves first: actual contexts and processes or enterprise abstractions ?

Assuming that enterprise architecture entails some kind of documentation, changes in actual contexts will induce new representations of objects and processes. At this point the corresponding changes in models directly reflect actual changes, but the reverse isn’t true. For that to happen, i.e. for business objects and processes to be drawn from models, the bonds between actual and symbolic descriptions have to be loosened, giving some latitude for the latter to be modified independently of their actual counterpart. As noted above, specialization will do that for local features; but for changes to architecture units to be carried out from models, abstractions are a prerequisite.

Emerging Architectures and Grey Matter

As already noted, actual-oriented models describe instances of business objects and processes, while symbolic-oriented ones describe representations, both at instances level (aka concrete descriptions) and types level (aka abstract descriptions). As a corollary, changes in actual-oriented models directly reflect changes in contexts and processes (a); that’s not necessarily the case for symbolic-oriented models, which can also take into account intended changes (b) to be translated into concrete target descriptions at a later stage (c).

Emergence of architectural features is best observed when abstractions (italics) are introduced (b).

Obviously the room left for conjured-up architectural changes is bounded by deterministic factors; nonetheless, newly thought-up functional features are bound to appear first, if at all, as abstract descriptions, and that’s where emerging architectures are best observed.

At that tipping point, and assuming a comprehensive understanding of objective factors (business logic, data structures, operational constraints, etc), the influence of non deterministic factors upon emerging architectures can be probed from two directions: pushing from the past or pulling from the future.

The past will make its mark through existing organizational structures and roles. Knowledge, power bases, and habits are much less pliable than processes and systems. When forced to change they are bound to bend the options, and not necessarily through informed decision making.

Conversely, the assessment of future events, non deterministic by nature, is the result of decision making processes mixing explicit rationale with more obscure collective biases. Those collective leanings will often leave their mark on the way changes in contexts are anticipated, risks weighted, and business objectives defined.

Those non deterministic influences are rooted in some enterprise psyche that steers individual behaviors and collective decisions. Like the hypothetical dark matter conjectured by astronomers in order to explain the mass of the universe, that grey matter of corporate entities is the shadow counterpart of actual systems, necessary to explain their position with regard to enterprise contexts, objectives, and organization.

What’s the scope of Agile Principles

February 25, 2014

Objective

The Agile development model as pioneered by the eponymous Manifesto is based both on universal principles meant to be applied in any circumstances, and on more specific ones subject to some prerequisites. Sorting out the provisos may help to extend and improve the agile footprint.

The flexibility of Agile principles (E. de Souza)

Checklist

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

  • Scope: Specific to agile development model.
  • Requisite: Iterative and incremental development.

2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

  • Scope: Universal.
  • Requisite: Requirements traceability and modular design.

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

  • Scope: Universal.
  • Requisite: Modular design and incremental development.

4. Business people and developers must work together daily throughout the project.

  • Scope: Specific to agile development model.
  • Requisite: Shared ownership, no external dependencies.

5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

  • Scope: Universal.
  • Requisite: Dedicated organization and human resources policy.

6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

  • Scope: Universal.
  • Requisite: Corporate culture.

7. Working software is the primary measure of progress.

  • Scope: Universal.
  • Requisite: Quality management and effective assessment of returns and costs.

8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

  • Scope: Universal.
  • Requisite: Dedicated project management and human resources policy.

9. Continuous attention to technical excellence and good design enhances agility.

  • Scope: Universal.
  • Requisite: Corporate culture and human resources policy.

10. Simplicity–the art of maximizing the amount of work not done–is essential.

  • Scope: Universal.
  • Requisite: Quality management and corporate culture.

11. The best architectures, requirements, and designs emerge from self-organizing teams.

  • Scope: Universal.
  • Requisite: Shared ownership and no external dependencies.

12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

  • Scope: Universal.
  • Requisite: Dedicated organization and corporate culture.

Assessment

Perhaps surprisingly, only two (#1 and #4) out of twelve principles cannot be applied universally, as they clearly conflict with phased processes and tasks differentiation. In other words, ten out of twelve agile principles could bring benefits to any type of project.

Fingertips Errors & Automated Testing

February 10, 2014

Objective

When interacting with systems, users do things they aren’t supposed to do and walk along irrelevant, even unthinkable, paths that can put tests designers at a loss. This apparent chink between users’ conscious self and their fingertips can be explained by the way humans assess situations and make decisions.

Anatomy of Errors: from brain to fingers (Rembrandt)

Taking a leaf from A. Tversky and D. Kahneman (the latter received the 2002 Nobel Prize in Economics), decision-making relies on two cognitive mechanisms:

  1. The first one “operates automatically and quickly, with little or no effort and no sense of voluntary control”. It’s put in use when actual situations must be assessed and decisions taken rapidly if not instantly.
  2. The second one “allocates attention to the effortful mental activities that demand it, including complex computations”. It’s put in use when situations can be assessed with regard to past experience in order to support informed decision-making.

That distinction can be directly applied to users’ behaviors interacting with systems:

  1. Intuitive behavior: decisions are taken on the basis of the visual context and options as presented by user interfaces, before taking into account underlying business contents and logic.
  2. Rational behavior: decisions are taken on the basis of business contents and logic disregarding supporting systems interfaces.

Set in context, that distinction can be put in parallel (but not confused) with the one between domain and functional requirements, the former dealing rationally with business objects and logic, the latter putting the former to use through interactions with supporting systems.

Functional requirements describe the part played by supporting systems

Assuming that business logic should not be contingent on supporting systems interfaces, the best option would be to test its implementation independently of users’ interactions; moreover, tests targeting intuitive behaviors (i.e. not directly based on domain specific contents) could then be generated automatically.

Looking for Errors

Given that testing is meant to find flaws in deliverables, tests are certainly more effective when designers know what they are looking for.

For that purpose phased approaches rely on sequences of differentiated tests dealing successively with programming (unit tests), functional requirements (integration tests), and business requirements (acceptance tests).  The unfortunate downside of those policies is that the most wide-ranging flaws are the last to be looked for, with the risk of being found after cascading and costly consequences for functionalities and programs.

Phased and Iterative approaches to tests

Conversely, agile approaches follow iterative policies, with each development cycle combining the definition, programming, and tests of software products. When properly implemented those policies significantly improve the early detection and correction of errors whatever their origin. Yet, since there is no explicit management of intermediate outcomes, it’s difficult to differentiate the tests according to the kind of errors to look for, e.g. faulty business rules implementation or flawed user interface.

Architecture driven approaches may provide an answer, with requirements unambiguously sorted out depending on their architectural footprint: business contents or system functionalities. As a corollary, tests could also be designed along the same lines, targeting business rationale or human behavior.

Errors in Mirrors

Acceptance tests being performed with regard to requirements, they should be designed along requirements taxonomy, respectively for business logic, users’ interactions, quality of services, and components implementation. Being aligned on requirements, those tests can be neatly defined with regard to closed sets of specifications, functional or otherwise.

Functional tests have to expect the unexpected

But that’s not the case for users’ interactions, because people’s behaviors are not fully predictable; hence, while tests can be systematically designed with regard to the set of users’ actions framed by business and functional requirements, there is no way to comprehensively and unambiguously check for each and every possible behavioral contingency. That makes for three levels of functional tests:

  1. Implementation of business logic: tests should be designed directly from business requirements, independently of interactions with users.
  2. Implementation of scenarii: while interactions are defined in reference to business logic, their validation should focus on the presentation of contents and dialog control.
  3. Users exceptions: in addition to inputs validity, already checked with business logic, and users’ actions, supposedly secured by interaction scenarii, it is necessary to check that unexpected behaviors have been properly considered.

How to check that unexpected behaviors have been properly considered ?

In other words, functional tests will have to look simultaneously for errors in software (defined with regard to a finite set of requirements) and for users’ mistakes (set in an open range of behaviors). As if tests designers were to mirror users’ errors in order to look for software ones. So, assuming that errors in business logic and interactions have been considered, what should still be checked, and how ?

Fingertips Errors

When faced with choices, users bank on mental maps combining graphical and business layers, with the implicit assumption that maps’ contexts and concerns are kept up to date. Those maps combine three communication mechanisms:

  • Languages, natural or specific, use syntax and semantics to define business contents, logic, and operations.
  • Icons use similarity for the visual representation of business operations or functional primitives (e.g. create, delete, etc).
  • Signals use proximity to draw users’ attention to predefined events (e.g. sounds for operations completion or incoming emails).

While language-based interactions are supposedly fully covered by business and functional tests, icons and signals make room for “fingertips” reactions which cannot be directly framed within business logic or functional scenarii, and therefore cannot be comprehensively checked for erroneous behaviors.

Icons and signal based communication can trigger unexpected behaviors.

Yet, if instinctive reactions preclude rational considerations, decisions may be swayed by analogies and associations before being informed by the relevant business contents. To prevent that risk, test scenarii built on business logic and functional interactions should be extended in order to take into account the intuitive aspects of users’ behaviors.

Mental Maps & Automated Tests

As noted above, mental maps are built on three layers, one deep (language semantics) and two shallow (icons and signals). While the shallow layers are supposed to reference the deep one, icons and signals may induce instinctive behaviors independently of the referenced business logic. Those behaviors can be triggered by two kinds of mechanisms:

  • Analogy: users will look for similarities and familiar configurations.
  • Proximity: users will look for continuity with regard to scope and operations.

Clearly, lapses in such behaviors will normally escape tests designed for business and functional requirements; yet, by being driven by self-contained mechanisms, intuitive behaviors can be checked independently of references to business contents. And that may open the door to automated tests generation.

With regard to similarities, tests should look for possible confusion between:

  • Objects with common representation but specific features (inheritance).
  • Operations with shared semantics but different scope (polymorphism).
  • Sequences with shared operations but different timing.

With regard to proximity, tests should look for possible confusion between:

  • Objects and their parts, or between their parts (structural proximity).
  • Operations usually associated into the same activity (functional proximity).
  • Operations usually executed successively (chronological proximity).

Scripts for such tests could be generated through pattern-matching and run by wizard applications.
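
As a speculative sketch of that idea, the fragment below generates scripts by pattern-matching a UI model against the similarity and proximity mechanisms listed above; the model, its fields, and the matching rules are all invented for illustration.

```python
# A speculative sketch of pattern-driven test script generation; the UI model
# and matching rules are hypothetical.
from itertools import combinations

UI_MODEL = [
    {"widget": "btn_save",   "icon": "disk",  "operation": "save_order",    "panel": "order"},
    {"widget": "btn_export", "icon": "disk",  "operation": "export_order",  "panel": "order"},
    {"widget": "btn_delete", "icon": "trash", "operation": "delete_order",  "panel": "order"},
    {"widget": "btn_purge",  "icon": "trash", "operation": "purge_archive", "panel": "archive"},
]

def similarity_scripts(model):
    """Operations sharing a representation (icon) but not semantics."""
    for a, b in combinations(model, 2):
        if a["icon"] == b["icon"] and a["operation"] != b["operation"]:
            yield (f"check that {a['widget']} cannot be mistaken for "
                   f"{b['widget']} (shared icon '{a['icon']}')")

def proximity_scripts(model):
    """Operations co-located on the same panel (functional proximity)."""
    for a, b in combinations(model, 2):
        if a["panel"] == b["panel"]:
            yield (f"check sequence {a['operation']} -> {b['operation']} "
                   f"on panel '{a['panel']}'")

for script in list(similarity_scripts(UI_MODEL)) + list(proximity_scripts(UI_MODEL)):
    print(script)
```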

AAA+ for a New Year

January 16, 2014

Continuity

As illustrated by the millennium bug, systems’ perseverance often defies human expectations, as if continuity were a self-acquired characteristic of systems once set free from their unknowing designers. Hence the benefit of keeping the focus on primary issues regarding software artifacts, development processes, and enterprise architecture.

Where to look for AAA issues (M. Cattelan)

Software Artifacts

As for any engineering activity, software engineering must be clear about its end products as well as intermediate ones. If methodologies and technologies are set aside, three types of artifacts should be considered:

  • Requirements provide the necessary starting point. How they are expressed will essentially depend on the languages familiar to users.
  • Models are introduced as intermediate products when the continuity of processes cannot be achieved and some communication is necessary.
  • Software components are the end products.

Model based engineering approaches provide a comprehensive and consistent framework for the analysis of engineering problems and the choice of development solutions.

Enterprise Architecture

Software artifacts are the building blocks of information systems whose aim is to support enterprise objectives. Hence the need of a shared framework for enterprise (business), systems (applications), and platforms (technology) concerns. As attempts to force all problems and solutions into a single formalism have been inconclusive (to say the least), one should instead maintain the separation of concerns and build a bridge between those that must be kept aligned.

Agile Processes

While enterprise architectures are meant to deal with stable and shared assets and mechanisms, business processes are supposed to deal with changes and to take advantage of opportunities. Hence the conundrum when the latter are to be supported by the former; that’s the challenge taken on by agile approaches. While that may be an easy task for low-hanging fruits of well circumscribed projects, innovative solutions are needed if agile principles are to be “scaled” up to architecture level.

MDA & EA: Is The Tail Wagging The Dog ?

December 17, 2013

Making Heads or Tails

OMG’s Model Driven Architecture (MDA) is a systems engineering framework set along three model layers:

  • Computation Independent Models (CIMs) describe business objects and activities independently of supporting systems.
  • Platform Independent Models (PIMs) describe systems functionalities independently of platforms technologies.
  • Platform Specific Models (PSMs) describe systems components as implemented by specific technologies.

Since those layers can be mapped respectively to enterprise, functional, and technical architectures, the question is how to make heads or tails of the driving: should architectures be set along model layers, or should models be organized according to architecture levels ?

A Dog Making Head or Tail (Judy Kensley McKie)

In other words, has some typo reversed the original “architecture driven modeling” (ADM) into “model driven architecture” (MDA) ?

Wrong Spelling, Right Concepts

A confusing spelling should not mask the soundness and relevance of the approach: MDA model layers effectively correspond to a clear hierarchy of problems and solutions:

  • Computation Independent Models describe how business processes support enterprise objectives.
  • Platform Independent Models describe how systems functionalities support business processes.
  • Platform Specific Models describe how platforms implement systems functionalities.

MDA layers correspond to a clear hierarchy of problems and solutions
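
That hierarchy can be sketched as a simple traceability structure; the artifact names below are made up, and the fragment only shows how each layer answers to the one above.

```python
# A schematic sketch of layered traceability; artifact names are invented.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    layer: str                                    # "CIM", "PIM", or "PSM"
    supports: list = field(default_factory=list)  # artifacts of the layer above

cim = Artifact("order fulfillment process", "CIM")
pim = Artifact("order management functionalities", "PIM", supports=[cim])
psm = Artifact("order components on a Java EE platform", "PSM", supports=[pim])

def trace(artifact):
    """Walk back up the hierarchy of problems and solutions."""
    chain = [f"{artifact.layer}: {artifact.name}"]
    for parent in artifact.supports:
        chain += trace(parent)
    return chain

print(" -> ".join(trace(psm)))
```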

That should leave no room for ambiguity: regardless of the misleading “MDA” moniker, the modeling of systems is meant to be driven by enterprise concerns and therefore to follow architecture divides.

Architectures & Assets Reuse

As it happens, the “MDA” term is doubly confusing as it also blurs the distinction between architectures and processes. And that’s unfortunate because the reuse of architectural assets by development processes is at the core of the MDA framework:

  • Business objects and logic (CIM) are defined independently of the functional architectures (PIM) supporting them.
  • Functional architectures (PIM) are defined independently of implementation platforms (PSM).
  • Technical architectures (PSM) are defined independently of deployment configurations.

MDA layers coincide with categories of reusable assets

Under that perspective the benefits of the “architecture driven” understanding (as opposed to the “model driven” one) appear clearly for both aspects of enterprise governance:

  • Systems governance can be explicitly and transparently aligned on enterprise organization and business objectives.
  • Business and development processes can be defined, assessed, and optimized with regard to the reuse of architectural assets.

With the relationship between architectures and processes straightened out and architecture reinstated as the primary factor, it’s possible to reexamine the contents of models used as hinges between them.

Languages & Model Purposes

While engineering is not driven by models but by architectures, models do describe architectures. And since models are built with languages, one should expect different options depending on the nature of artifacts being described. Broadly speaking, three basic options can be considered:

  • Versatile and general modeling languages like UML can be tailored to different contexts and purposes, along the development cycle (requirements, analysis, design) as well as across perspectives (objects, activities, etc) and domains (banking, avionics, etc).
  • Non specific business modeling languages like BPM and rules-based languages are meant to be introduced upfront, even if their outcome can be used further down the development cycle.
  • Domain specific languages, possibly built with UML, are also meant to be introduced early as to capture domains complexity. Yet, and contrary to languages like BPM, their purpose is to provide an integrated solution covering the whole development cycle.

Languages: general purpose (blue), process or domain specific (green), or design (brown).

As seen above for reuse and enterprise architecture, a revised MDA perspective clarifies the purpose of models and consequently the language options. With developments “driven by models”, code generation is the default option and nothing much is said about what should be shared and reused, and why. But with model contents aligned on architecture levels, purposes become explicit and modeling languages have to be selected accordingly, e.g.:

  • Domain specific languages for integrated developments (PSM-centered).
  • BPM for business specifications to be implemented by software packages (CIM-centered).
  • UML for projects set across system functional architecture (PIM-centered).

The revised perspective and reasoned association between languages and architectures can then be used to choose development processes: projects that can be neatly fitted into single boxes can be carried out along a continuous course of action, others will require phased development models.

Enterprise Architecture & Engineering Processes

Systems engineering has to meet different kinds of requirements: business goals, system functionalities, quality of service, and platform implementations. In a perfect (model driven engineering) world there would be one stakeholder, one architecture, and one time-frame. Unfortunately, requirements are usually set by different stakeholders, governed by different rationales, and subject to changes along different time-frames. Hence the importance of setting forth the primary factors governing engineering processes:

  • Planning: architecture levels (business, systems, platforms) are governed by different time-frames and engineering projects must be orchestrated accordingly.
  • Communication: collaboration across organizational units requires traceability and transparency.
  • Governance: decisions across architecture levels and business units cannot be made upfront and options and policies must be assessed continuously.

Those objectives are best supported when engineering processes are set along architecture levels:

Enterprise Architecture & Processes

  1. At enterprise level requirements deal with organization and business processes. The relevant part of those requirements will be translated into system functionalities which in turn will be translated into platforms technical requirements.
  2. At enterprise level analysis deals with symbolic representations of enterprise environment, objectives, and activities. Contrary to requirements, which are meant to convey changes and bear adaptation (dashed lines), the aim of analysis at enterprise level is to consolidate symbolic representations and guarantee their consistency and continuity. As a corollary, analysis at system level must be aligned with its enterprise counterpart before functional (continuous lines) requirements are taken into account.
  3. Design at enterprise level deals with operational concerns and resources deployment. Part of it is to be supported by systems as designed and implemented as platforms. Yet, as figured by dashed arrows, operational solutions designed at enterprise level bear upon the design of systems architectures and the configuration of their implementation as platforms.

When engineering is driven by architectures, processes can be devised depending on enterprise concerns and engineering contexts. While that could come with various terminologies, the partitioning principles will remain unchanged, e.g.:

  • Agile processes will combine requirements with development and bypass analysis phases (a).
  • Projects meant to be implemented by Commercial-Off-The-Shelf Software (COTS) will start with business requirements, possibly using BPM, then carry on directly to platform implementation, bypassing system analysis and design phases (b).
  • Changes in enterprise architecture capabilities will be rooted in analysis of enterprise objectives, possibly but not necessarily with inputs from business and operational requirements, continue with analysis and design of systems functionalities, and implement the corresponding resources at platform level (c).
  • Projects dealing with operational concerns will be conducted directly through systems design and platform implementation (d).

Examples of process templates depending on objectives and contexts.

To conclude, when architecture is reinstated as the primary factor, the MDA paradigm becomes a pivotal component of enterprise architecture as it provides a clear understanding of architecture divides and dependencies on one hand, and of their relationship with engineering processes on the other hand.

From Processes to Services

November 24, 2013

Objective

Even in the thick of perplexing debates, enterprise architects often agree on the meaning of processes and services, the former set from a business perspective, the latter from a system one. Considering the rarity of such a consensus, it could be used to rally the different approaches around a common understanding of some of EA’s objectives.

Service meets Process

A Governing Dilemma

Systems have long been of three different species that communicated but didn’t interbreed: information ones were calmly processing business records, industrial ones were tensely controlling physical devices, and embedded ones lived their whole life stowed away within devices. Lately, and contrary to the natural law of evolution, those three species have started to merge into a versatile and powerful new breed keen to colonize the whole of the enterprise ecosystem.

When faced with those pervading systems, enterprises usually waver between two policies, containment or integration, the former struggling to weld and confine all systems within technology boundaries, the latter trying to fragment them and share out the pieces among whatever business units are ready to take charge.

While each approach may provide acceptable compromises in some contexts, both suffer critical flaws:

  • Centralized solutions constrict business opportunities and innovation by putting all concerns under a single unwieldy lid of technical constraints.
  • Federated solutions rely on integration mechanisms whose increasing size and complexity put the whole of systems integrity and adaptability on the line.

Service oriented architectures may provide a way out of this dilemma by introducing a functional bridge between enterprise governance and systems architectures.

Separation of Concerns

Since governance is meant to be driven by concerns, one should first consider the respective rationale behind business processes and system functionalities, the former driven by contexts and opportunities, and the latter by functional requirements and platforms implementation.

While business processes usually involve various degrees of collaboration between enterprises, their primary objective is to fulfill each one’s very specific agenda, namely to beat the others and be the first to take advantage of market opportunities. That puts systems at the crossroads of a dual perspective: from a business point of view they are designed to provide a competitive edge, but from an engineering standpoint they aim at standards and open infrastructures. Clearly, there is no reason to assume that those perspectives should coincide, one being driven by changes in competitive environments, the other by continuity and interoperability of systems platforms. That’s where Service Oriented Architectures should help: by introducing a level of indirection between business processes and systems functionalities, services naturally allow for the mapping of requirements with architecture capabilities.

Services provide a level of indirection between business and system concerns.

Along that reasoning (and the corresponding requirements taxonomy), the design of services would be assessed in terms of optimization under constraints: given enterprise organization and objectives (business requirements), the problem is to maximize the business value of supporting systems (functional requirements) within the limits set by implementation platforms (non functional requirements).

Services & Capabilities

Architectures and processes are orthogonal descriptions, respectively of enterprise assets and activities. Looking for the footprint of supporting systems, the first step is to consider how business processes should refer to architecture capabilities:

  • From a business perspective, i.e. disregarding supporting systems and platforms, processes can be defined in terms of symbolic objects, business logic, and the roles of agents, devices, and systems.
  • The functional perspective looks at the role of supporting systems; as such, it is governed by business objectives and subject to technical constraints.
  • From a technical perspective, i.e. disregarding the symbolic contents of interactions between systems and contexts, operational processes are characterized by the nature of interfaces (human, devices, or other systems), locations (centralized or distributed), and synchronization constraints.

Service oriented architectures typify the functional perspective by factoring out the symbolic contents of system functionalities, introducing services as symbolic hinges between enterprise and system architectures. And when defined in terms of customers, messages, contract, policy, and endpoints, services can be directly mapped to architecture capabilities.

Services are a perfect match for capabilities
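
As a rough sketch of that mapping, the five facets can be rendered as a plain data structure; the names and values below are illustrative and do not pretend to reproduce the SoaML metamodel.

```python
# A bare-bones sketch of the five service facets; names and values are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Service:
    customers: List[str]  # who is entitled to consume the service
    messages: List[str]   # symbolic objects exchanged
    contract: str         # processing of symbolic flows (business logic)
    policy: str           # rules governing execution (operational concerns)
    endpoints: List[str]  # locations, set at architecture level

purchase_order = Service(
    customers=["order entry process"],
    messages=["PurchaseOrder", "Invoice"],
    contract="validate, price, and schedule orders",
    policy="confirm or reject within one business day",
    endpoints=["orders.example.org"],  # hypothetical location
)
```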

Moreover, with services defined in terms of architecture capabilities, the divide between business and operational requirements can be drawn explicitly:

  • Actual (external) entities and their symbolic counterpart: services only deal with symbolic objects (messages).
  • Actual entities and their roles: services know nothing about physical agents, only about symbolic customers.
  • Business logic and processes execution: contracts deal with the processing of symbolic flows, policies deal with operational concerns.
  • External events and system time: service transactions are ACID, i.e. from the customer’s standpoint they appear to be timeless.

Those distinctions are used to factor out the common backbone of enterprise and system architectures, and as such they play a pivotal role in their alignment.

Anchoring Business Requirements to Supporting Systems

Business processes are meant to meet enterprise objectives given contexts and resources. But if the alignment of enterprise and system architectures is to be shielded from changes in business opportunities and platforms implementation, system functionalities will have to support a wide range of shifting business goals while securing the continuity and consistency of shared resources and communication mechanisms. In order to conciliate business changes with system continuity, business processes must be anchored to objects and activities whose identity and semantics are set at enterprise level, independently of the part played by supporting systems:

  • Persistent units (aka business objects): structured information uniquely associated to identified individuals in business context. Life cycle and integrity of symbolic representations must be managed independently of business processes execution.
  • Functional and execution units: structured activity triggered by an event identified in business context, and whose execution is bound to a set of business objects. State of symbolic representations must be managed in isolation for the duration of process execution.

Services can be defined according to persistency and functional units (#)

The coupling between business units (persistent or transient) identified at business level and their system counterpart can be secured through services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).

It must be noted that while services specifications for customers, messages, contracts, and policy are identified at business level and completed at functional level, that’s not the case for endpoints since services locations are set at architecture level independently of business requirements.

Filling out Functional Requirements

Functional requirements are set in two dimensions, symbolic and operational; the former deals with the contents exchanged between business processes and supporting systems with regard to objects, activities and events, or actors; the latter deals with the actual circumstances of the exchanges: locations, interfaces, execution constraints, etc.

Given that services are by nature shared and symbolic, they can only be defined between systems. As a corollary, when functionalities are slated as services, a clear distinction should be maintained between the symbolic contents exchanged between business processes and supporting systems, and the operational circumstances of actual interactions with actors.

Interactions: symbolic and local (a), non symbolic and local (b), symbolic and shared (c).

Depending on the preferred approach for requirements capture, symbolic contents can be specified at system boundaries (e.g. use cases) or at business level (e.g. users’ stories). Regardless, both approaches can be used to flesh out the symbolic descriptions of functional and persistency units.

From a business process standpoint, users (actors in UML parlance) should not be seen as agents but as the roles agents play in enterprise organization, possibly with constraints regarding the quality of service at entry points. That distinction between agents and roles is critical if functional architectures are to dissociate changes in business processes on one hand from changes in platform implementation on the other hand.

Along that understanding actors triggering use cases (aka primary actors) can represent the performance of human agents as well as devices or systems. Yet, as far as symbolic flows are concerned, only human agents and systems are relevant (devices have no symbolic capabilities of their own). On the receiving end of use cases (aka secondary actors), only systems are to be considered for supporting services.

Mapping Processes to Services (through Use Cases)

Hence, when requirements are expressed through use cases, and assuming they are to be realized (fully or partially) through services, the mapping would proceed as follows (a code sketch follows the list):

  • Persistency and functional units identified by business process would be mapped to messages and contracts.
  • Business processes would fit service policy.
  • Use case containers (aka systems) would be registered as service customers.
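
Continuing the hypothetical Service sketch above, that mapping could read as follows; the use-case structure and its fields are invented for illustration.

```python
# A continuation of the Service sketch above; the use-case structure is invented.
use_case = {
    "container": "order entry system",       # use case container (aka system)
    "persistency_units": ["PurchaseOrder"],  # identified by the business process
    "functional_units": ["validate and price order"],
    "process": "order fulfillment",
}

service = Service(
    customers=[use_case["container"]],                 # containers -> customers
    messages=use_case["persistency_units"],            # persistency units -> messages
    contract="; ".join(use_case["functional_units"]),  # functional units -> contracts
    policy=use_case["process"],                        # business process -> policy
    endpoints=[],                                      # set at architecture level, not from use cases
)
```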

Alternatively, when requirements are set from users’ stories instead of use cases, persistency and functional units have to be elicited through stories, traced back to business processes, and consolidated into features. Those features will be mapped into system functionalities possibly, but not necessarily, implemented as services.

Features are to be supported by systems functionalities, but not necessarily implemented as services.

Mapping Processes to Services (through Users’ Stories)

Hence, while the mapping of business objects and logic respectively to messages and contracts will be similar with use cases and users’ stories, paths will differ for customers and policies:

  • Given that use cases deal explicitly with interactions at system boundaries, they represent a primary source of requirements for services’ customers and policy. Yet, as services are not supposed to be directly affected by interactions at systems boundaries, those elements would have to be consolidated across use cases.
  • Users’ stories for their part are told from a business process perspective that may take into account boundaries and actors but are not focused on them. Depending on the standpoint, it should be possible to define customers and policies requirements for services independently of the contingencies of local interactions.

In both cases, it would be necessary to factor out the non symbolic (aka non functional) part of requirements.

Non Functional Requirements: Quality of Service and System Boundaries

Non functional requirements are meant to set apart the constraints on systems’ resources and performances that could be dealt with independently of business contents. While some may target specific business applications, and others encompass a broader range, the aim is to separate business from architecture concerns and allocate the responsibilities (specification, development, and acceptance) accordingly.

Assuming an architecture of services aligned on capabilities, the first step would be to sort operational constraints:

  • Customers: constraints on usability, customization, response time, availability, …
  • Messages: constraints on scale, confidentiality, compliance with regulations, …
  • Contracts: constraints on scale, confidentiality, …
  • Policy: availability, reliability, maintenance, …
  • Endpoints: costs, maintenance, security, interoperability, …

Non functional constraints may cut across services and capabilities

Since constraints may cut across services and capabilities, non functional requirements are not a given but the result of explicit decisions about:

  • Architecture level: should the constraint be dealt with locally (interfaces), at functional level (services), or at technical level (resources).
  • Services: when set at functional level, should the constraint be dealt with by business services (e.g. domain or activity), or architecture ones (e.g. authorization or orchestration).

The alignment of services with architecture capabilities will greatly enhance the traceability and rationality of those decisions.

A Simple Example

This example is based on the Purchase Order case published with the OMG specifications: http://www.omg.org/spec/SoaML/1.0/Beta2/PDF/

A simple purchase order process analyzed in terms of service customers, messages and entities (#), contracts, and policy (aka choreography)

Modeling Languages: Differences Matter

November 2, 2013

Objective

Modeling languages are meant to support the description of contexts and concerns from specific standpoints. And because those different perspectives have to be mapped against shared references, languages must also provide constructs for ironing out variants and factoring out constants.

Ironing out differences in features and behaviors (Elliott Erwitt)

Yet, while most languages generally agree on basic definitions of objects and behaviors, many distinctions are ignored or subject to controversial understanding; such shortcomings might be critical when architecture capabilities are concerned:

  • Actual entities and their symbolic counterpart.
  • Actual entities and their roles.
  • Business logic and business operations.
  • External events and system time.

Making Differences

As those distinctions set the backbone of functional architectures, languages should be assessed according to their ability to express them unambiguously using the smallest possible set of constructs.

Business objects vs Symbolic Surrogates

As far as symbolic systems are concerned, the primary purpose of models is to distinguish between targets and representations. Hence the first assessment yardstick: modeling languages must support a clear and unambiguous distinction between objects and behaviors of concern on one hand, symbolic system surrogates on the other hand.

A primer on representation: objects of concerns and surrogates

Yet, if objects and behaviors are to be consistently and continuously managed, modeling languages must also provide common constructs mapping identities and structures of actual entities to their symbolic counterpart.
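
A minimal sketch of such a common construct, assuming identities set in the actual context are reused to anchor system surrogates (all names are invented):

```python
# A minimal sketch: the surrogate carries the same identity as the actual
# entity it represents; names are invented.
class ActualCar:
    """Physical object in the business context."""
    def __init__(self, vin: str):
        self.vin = vin              # identity set in the actual context

class CarSurrogate:
    """Symbolic counterpart managed by the system."""
    def __init__(self, vin: str, mileage: int = 0):
        self.vin = vin              # shared identity maps surrogate to target
        self.mileage = mileage      # features managed symbolically

REGISTRY = {}                       # common construct mapping identities

def register(car: ActualCar) -> CarSurrogate:
    """One surrogate per actual entity, keyed by the shared identity."""
    return REGISTRY.setdefault(car.vin, CarSurrogate(car.vin))
```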

Agents vs Roles

Given that systems are built to support business processes, modeling languages should enable a clear distinction between physical entities able to interact with systems on one hand, roles as defined by enterprise organization on the other hand.

Agents and Roles

In that case the mapping of actual entities to systems representations is not about identities but functionalities: a level of indirection has to be introduced between enterprise organization and system technology because the same roles can be played, simultaneously or successively, by people, devices, or other systems.

Business logic vs Business operations

Just as actual objects are not to be confused with their symbolic description, modeling languages must make a clear distinction between business logic and processes execution, the former defining how to process symbolic objects and flows, the latter dealing with the coupling between process execution and changes in actual contexts. That distinction is of particular importance if business and organizational decisions are to be made independently of supporting systems technology.

Business logic and operational contingencies

Language constructs must also support the consolidation of functional and operational units, the former being defined by integrity constraints on symbolic flows and roles authorizations on objects and operations, the latter taking into account synchronization constraints between the state of actual contexts and the state of their system symbolic counterpart. And for that purpose languages must support the distinction between external and internal events.

External vs Internal Events

Paraphrasing Albert Einstein, time is what happens between events. As a corollary, external time is what happens between context events, and internal time is what happens between system ones. While both can coincide for single systems and locations, no such assumption should be made for distributed systems set in different locations. In that case modeling language should support a clear distinction between external events signalling changes set in actual locations, and internal events signalling changes affecting system surrogates.

Time scales are set by events and concerns

Along that perspective synchronization is to be achieved through the consolidation of time scales. For single locations that can be done using system clocks; across distributed locations the consolidation will entail dedicated time frames and mechanisms set in reference to some initial external event.
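
The fragment below is a rough sketch of that consolidation, assuming per-location offsets measured from an initial external event; names and values are illustrative.

```python
# A rough sketch of time-scale consolidation; offsets and events are invented.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # "external" (context change) or "internal" (surrogate change)
    location: str
    local_time: float  # timestamp on the location's own scale

OFFSETS = {"warehouse": 0.0, "system": -0.4}  # skews relative to the reference event

def consolidate(events, offsets):
    """Map local timestamps onto a shared scale before ordering them."""
    return sorted(events, key=lambda e: e.local_time + offsets[e.location])

log = [
    Event("internal", "system", 103.1),     # surrogate state updated
    Event("external", "warehouse", 102.5),  # actual shipment leaves
]
for e in consolidate(log, OFFSETS):
    print(e.kind, e.location, e.local_time)
```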

Conclusion: How to share differences across perspectives

Somewhat paradoxically, multiple modeling languages erase differences by paring down shared descriptions to some uniform lump that can be understood by all. Conversely, agreeing on a set of distinctions that should be supported by every language could provide an antidote to the Babel syndrome.

That approach can be especially effective for the alignment of enterprise and systems architectures as the four distinctions listed above are equally meaningful in both perspectives.

Thinking about Practices

October 12, 2013

A few preliminary words

A theory (aka model) is a symbolic description of contexts and concerns. A practice is a set of activities performed in actual contexts. While the latter may be governed by the former and the former developed from the latter, each should stand on its own merits whatever its debt to the other.

Good practices hold sway without showing off theoretical subtext (Demetre Chiparus)

With regard to Software Engineering, theory and practice are often lumped together to be marketed as snake oil, with the unfortunate consequence of ruining their respective sways.

Software Engineering: from Requirements heads to Programs tails

While computer science builds theories about automated processing of symbolic representations, software engineering is all about their use in developing applications supporting actual processes; that may explain why software engineering is long on methods but rather short on theory.

Yet, since there is a requirements head to the programming tail, the transition between the two has to be supported by some rationale. On that issue approaches can be summarily characterized as formal or procedural.


How to make program tails from requirements heads

Formal methods try to extend the scope of computing theories to functional specifications; while they should be the option of choice, their scope is curtailed by the lack of structure and formalism inherent in requirements. On the opposite side of the spectrum, procedural approaches forsake theoretical aspirations and concentrate on modus operandi; the fault here is that the absence of a formal description of software artifacts makes such approaches unyielding and unable to accommodate the diversity of contexts and tasks.


Procedural (p), formal (f), and agile (a) approaches to software development.

That brings some light on the relationship between theory and practice in software engineering:

  • As illustrated by Relational theory and State machines, formal specifications can support development practice, provided requirements can be aligned with computing.
  • As illustrated by the ill-famed Waterfall, development practices left on their own are doomed as soon as they try to coerce ambiguous, incomplete, or changing specifications into arbitrarily defined categories.

Agile answers to that conundrum have been to focus on development practices without making theoretical assumptions about specifications. That left those development models halfway, making room for theoretical complements. That situation can be illustrated using Scott Ambler’s 14 best practices of AMDD:

  1. Active Stakeholder Participation / How to define a stakeholder?
  2. Architecture Envisioning / What concepts should be used to describe architectures, and how to differentiate architecture levels?
  3. Document Continuously / What kinds of documents should be produced, and how should they relate to the life-cycle?
  4. Document Late / How to time the production of documents with regard to the life-cycle?
  5. Executable Specifications / What kind of requirements taxonomy should be used?
  6. Iteration Modeling / What kind of modeling paradigm should be used?
  7. Just Barely Good Enough (JBGE) artifacts / How to assess the granularity of specifications?
  8. Look Ahead Modeling / How to assess requirements complexity?
  9. Model Storming / How to decide the depth of granularity to be explored, and how to take architectural constraints into account?
  10. Multiple Models / Even within a single modeling paradigm, how to assess model effectiveness?
  11. Prioritized Requirements / How to translate users’ value into functional complexity when there is no one-to-one mapping?
  12. Requirements Envisioning / How to reformulate a lump of requirements into structured ones?
  13. Single Source Information / How to deal with features shared by multiple users’ stories?
  14. Test-Driven Design (TDD) / How to differentiate between business-facing and technology-facing tests?

That would bring the best of both worlds, with practices inducing questions about the definition of development artifacts and activities, and theoretical answers being used to refine, assess, and improve the practices.

Takes Two To Tango

Debates about the respective benefits of theory and practice are meaningless because theory and practice are the two faces of engineering: on one hand the effectiveness of practices depends on development models (aka theories), on the other hand development models are pointless if not validated by actual practices. Hence the benefits of thinking about agile practices.

Along that reasoning, some theoretical considerations appear to be of particular importance for good practice:

  • Enterprise architecture: how to define stakes and circumscribe organizational responsibilities.
  • Systems architecture: how to factor out shared architecture functionalities.
  • Products: how to distinguish between models and code.
  • Metrics: how to compare users’ value with development costs.
  • Release: how to arbitrate between quality and timing.

Such questions have received some scrutiny from different quarters, which may eventually point to a comprehensive and consistent understanding of software engineering artifacts.


Sifting through a Web of Things

September 27, 2013

Objective

At its inception, the young internet was all about sharing knowledge. Then, business concerns came to the web and the focus was downgraded to information. Now, exponential growth turns a surfeit of information into meaningless data, with the looming risk of web contents being once again downgraded. And the danger is compounded by the inroads of physical objects bidding for full netizenship and equal rights in the so-called “internet of things”.


How to put words on a web of things (Ai Weiwei)

As it happens, that double perspective coincides with two basic search mechanisms, one looking for identified items and the other for information contents. While semantic web approaches are meant to deal with the latter, it may be necessary to take one step further and bring the problem (a web of things and meanings) and the solutions (search strategies) within an integrated perspective.

Down with the System Aristocrats

The so-called “internet second revolution” can be summarized as the end of privileged netizenship: down with the aristocracy of systems with their absolute lid on internet residency, within the new web everything should be entitled to have a voice.


Before and after the revolution: everything should have a say

But then, events are moving fast, suggesting behaviors unbecoming to the things that used to be. Hence the need for a reasoned classification of netizens based on their identification and communication capabilities (sketched in code after the list):

  • Humans have inherent identities and can exchange symbolic and non symbolic data.
  • Systems don’t have inherent identities and can only exchange symbolic data.
  • Devices don’t have inherent identities and can only exchange non symbolic data.
  • Animals have inherent identities and can only exchange non symbolic data.
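The classification can be laid out as data. In the sketch below, the Netizen record and the comes_out function are assumptions introduced for illustration, the latter mirroring the mutation of devices into systems discussed next.

# Sketch only: netizens classified by identification and communication capabilities.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Netizen:
    kind: str
    inherent_identity: bool    # identity independent of any system
    symbolic_data: bool        # can exchange symbolic data
    non_symbolic_data: bool    # can exchange non-symbolic data

NETIZENS = [
    Netizen("human",  inherent_identity=True,  symbolic_data=True,  non_symbolic_data=True),
    Netizen("system", inherent_identity=False, symbolic_data=True,  non_symbolic_data=False),
    Netizen("device", inherent_identity=False, symbolic_data=False, non_symbolic_data=True),
    Netizen("animal", inherent_identity=True,  symbolic_data=False, non_symbolic_data=True),
]

def comes_out(n: Netizen) -> Netizen:
    """A device gaining a symbolic user interface mutates into a system."""
    return replace(n, kind="system", symbolic_data=True)

phone = comes_out(NETIZENS[2])  # from device to fully-fledged system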

Along that perspective, speaking about the “internet of things” can be misleading because the primary transformation goes the other way: many systems initially embedded within appliances (e.g cell phones) have made their coming out by adding symbolic user interfaces, mutating from devices into fully-fledged systems.

Physical Integration: The Meaning of Things

With embedded systems colonizing every nook and cranny of the world, the supposedly innate hierarchical governance of systems over objects is challenged as the latter call for full internet citizenship. Those new requirements can be expressed in terms of architecture capabilities:

  • Anywhere (Where): objects must be localized independently of systems. That’s customary for physical objects (e.g geo-localization), but may be more challenging for digital ones on their way across the net.
  • Anytime (When): behaviors must be synchronized over asynchronous communication channels. Existing mechanisms used for actual processes (e.g the Network Time Protocol) may have to be set against modal logic when used for their representation.
  • Anybody (Who): while business systems don’t like anonymity and rely on their interfaces to secure access, things of the internet are to be identified whatever their interface (e.g RFID).
  • Anything (What): objects must be managed independently of their nature, symbolic or otherwise (e.g 3D printed objects).
  • Anyhow (How): contrary to business systems, processes don’t have to follow predefined scripts; versatility and non-determinism are the rules of the game.

Take a sortie to a restaurant for example: the actual event is associated with a reservation; car(s) and phone(s) are active objects geo-localized at a fixed place and possibly linked to diners; great wines can be authenticated directly by smartphone applications; phones are used for conversations and pictures, possibly added to reviews; and friends in the neighborhood can be automatically informed of the sortie and invited to join.

A dinner on the Net: place (restaurant), event (sortie), active objects (car, phone), passive object (wine), message (picture), business objects (reviews, reservations), and social beholders (network friends).


As this simple example illustrates, the internet of things brings together dumb objects, smart systems, and knowledgeable documents. Navigating such a tangle will require more than the Semantic Web initiative because its purpose points in the opposite direction, back to the origin of the internet, namely how to extract knowledge from data and information.

Moreover, while most of those “things” fall under the governance of the traditional internet of systems, the primary factor of change comes from the exponential growth of smart physical things with systems of their own. When those systems are “embedded”, the representations they use are encapsulated and cannot be accessed directly as symbolic ones. In other words those agents are governed by hidden agendas inaccessible to search engines. That problem is illustrated a contrario (things are not services) by service-oriented architectures, one of whose primary responsibilities is to support service discovery.

Semantic Integration: The Actuality of Meanings

The internet of things is supposed to provide a unified perspective on physical objects and symbolic representations, with the former taken as they come and instantly donned in some symbolic skin, and the latter boiled down to their documentary avatars (as Marshall McLuhan famously said, “the medium is the message”). Unfortunately, that goal is also a challenge: while physical objects can be uniformly enlisted across the web, that’s not the case for symbolic surrogates, which are specific to social entities and managed by their respective systems accordingly.

With the Internet of Systems, social entities with common endeavors agree on shared symbolic representations and exchange the corresponding surrogates as managed by their systems. The Internet of Things for its part is meant to put an additional layer of meanings supposedly independent of those managed at systems level. As far as meanings are concerned, the latter is flat, the former is hierarchized.

The internet of things is supposed to level the meaning fields, reducing knowledge to common sense.


That goal raises two questions: (1) what belongs to the part governed by the internet of things and, (2) how is its flattened governance to be related to the structured one of the internet of systems.

A World of Phenomena

Contrary to what its name may suggest, the internet of things deals less with objects than with phenomena, the reason being that things must manifest themselves, or their existence be detected, before being identified, if and when that is possible.

Things first appear on radar when some binary footprint can be associated with a signalling event. Then, if things are to go further, some information has to be extracted from the captured data, along one of three paths (sketched in code below):

  • Coded data could be recognized by a system as an identification tag pointing to recorded features and meanings, e.g a bar code on a book.
  • The whole thing could be turned into its digital equivalent, and vice versa, e.g a song or a picture.
  • Context and meanings could only be obtained by linking the captured data to representations already identified and understood, e.g a religious symbol.

From data to information: how to add identity and meaning to things
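A toy dispatcher can make the three paths concrete; the registry and pattern arguments below are hypothetical placeholders, not an actual identification protocol.

# Sketch only: three ways of extracting information from a captured footprint.
def identify(footprint: bytes, tag_registry: dict, known_patterns: dict):
    """Try the paths in decreasing order of directness."""
    # 1. Coded data: an identification tag points to recorded features
    #    and meanings (e.g. a bar code on a book).
    if footprint in tag_registry:
        return ("tagged", tag_registry[footprint])
    # 2. The whole thing has a digital equivalent (e.g. a song or a picture).
    for name, pattern in known_patterns.items():
        if footprint == pattern:
            return ("digital equivalent", name)
    # 3. Otherwise context and meanings can only come from linking the data
    #    to representations already identified and understood; unresolved here.
    return ("unresolved", None)

# Usage with a toy registry:
print(identify(b"9780141439518", {b"9780141439518": "Great Expectations"}, {}))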

Whereas things managed by existing systems already come with net identities and associated meanings, that’s not necessarily the case for digitized ones, as they may or may not have been introduced as surrogates standing for their real counterparts: if handshakes can stand as symbolic contract endorsements, pictures thereof cannot be used as contract surrogates. Hence the necessary distinction between two categories of formal digitization (contrasted in the sketch after the list):

  • Applied to symbolic objects (e.g a contract), formal digitization enables the direct processing of digital instances as if performed on actual ones, i.e with their conventional (i.e business) currency. While those objects have no counterpart (they exist simultaneously in both realms), such digitized objects have to bear an identification issued by a business entity, and that puts them under the governance of standard (internet of systems) rules.
  • Applied to binary objects (e.g a facsimile), formal digitization applies to digital instances that can be identified and authenticated on their own, independently of any symbolic counterpart. As a corollary, they are not meant to be managed or even modified and, as illustrated by the marketing of cultural contents (e.g music, movies, books …), their actual format may be irrelevant. Provided de facto standards are agreed upon, binary objects epitomize internet things.
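The two categories might be contrasted as follows; the record fields and the hash-based self-authentication are assumptions made for the sketch, not an established scheme.

# Sketch only: the two categories of formal digitization.
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class SymbolicDigitization:
    """E.g. a contract: bears an identification issued by a business entity,
    hence governed by standard (internet of systems) rules."""
    issuer: str
    business_id: str
    content: bytes

@dataclass(frozen=True)
class BinaryDigitization:
    """E.g. a facsimile: identified and authenticated on its own,
    independently of any symbolic counterpart."""
    content: bytes

    def fingerprint(self) -> str:
        # Self-contained identification: formats may vary, the hash stands.
        return sha256(self.content).hexdigest()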

To conclude on footprints: the Internet of Things appears as a complement more than a substitute, as it abides by the apparatus of the Internet of Systems for everything already under its governance, introducing new mechanisms only for the otherwise uncharted things set loose in the outer reaches. Can the same conclusion hold for meanings?

Organizational vs Social Meanings

As epitomized by handshakes and contracts, symbolic representations are all about how social behaviors are sanctioned.


When not circumscribed within organizational boundaries, social behaviors are open to different interpretations.

In system-based networks representations and meanings are defined and governed by clearly identified organizations, corporate or otherwise. That’s not necessarily the case for networks populated by software agents performing unsupervised tasks.

The first generations of those internet robots (aka bots) were limited to automated tasks, performed on the account of their parent systems, to which they were due to report. Such neat hierarchical governance is being called into question by bots fired and forgotten by their maker, free of reporting duties, their life wholly governed by social events. That’s the case with the internet of things, with significant consequences for searches.

As noted above, the internet of things can consistently manage both system-defined identities and the new ones it introduces for things of its own. But, given networks’ job description, the same consolidation cannot even be considered for meanings: networks are supposed to be kept in complete ignorance of contents, and walls between addresses and knowledge management must tower well above the clouds. As a corollary, the overlapping of meanings is bound to grow with the expanse of things, and the increase will not be linear.


Contrary to identities, meanings usually overlap when things float free from systems’ governance.

That brings some light on the so-called “virtual world”, one made of representations detached from identified roots in the actual world. And there should be no misunderstanding: “actual” doesn’t refer to physical objects but to objects and behaviors sanctioned by social entities, as opposed to virtual, which includes the ones whose meaning cannot be neatly anchored to a social authority.

That makes searches in the web of things doubly challenging as they have to deal with both overlapping and shifting semantics.

A Taxonomy of Searches

Semantic searches (forms and pattern matching should be seen as special cases) can be initiated by any kind of textual input, keywords or phrases. As searches, they should first be classified with regard to their purpose: finding some specific instance, or collecting information about some topic.

Searches about instances are meant to provide references to:

  • Locations, addresses, …
  • Antiques, stamps,…
  • Books, magazines,…
  • Alumni, friends,…
  • Concerts, games, …
  • Cooking recipes, administrative procedures,…
  • Status of shipment, health monitoring, home surveillance …

What are you looking for?

Searches about categories are meant to provide information about:

  • Geography, …
  • Product marketing, …
  • Scholarly topics, market research, …
  • Customer relationships, …
  • Business events, …
  • Business rules, …
  • Business processes …

That taxonomy of searches is congruent with the critical divide between things and symbolic representations.

Things and Symbolic Representations

As noted above, searches can be guided by references to identified objects, by the form of digital objects (sound, visuals, or otherwise), or by associations between symbolic representations. Considering that finding referenced objects is basically an indexing problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words (as opposed to identifiers and forms). From that standpoint searches are best understood in the broader semantic context of extensions and intensions, the former being the actual set of objects and phenomena, the latter a selected set of features shared by those instances.


Searching Steps

A search can therefore be seen as an iterative process going back and forth between descriptions and occurrences or, more formally, between intensions and extensions. Depending on the request, iterations are made of four steps (sketched in code below):

  • Given a description (intension), find the corresponding set of instances (extension); e.g “restaurants” > a list of restaurants.
  • Given an instance (extension), find a description (intension); e.g “Alberto’s Pizza” > “pizzerias”.
  • Extend or generalize a description to obtain a better match to request and context; e.g “pizzerias” > “Italian restaurants”.
  • Trim or refine instances to obtain a better match to request and context; e.g a list of restaurants > a list of restaurants in the Village.

Iterations are repeated until the outcome is deemed to satisfy the quality parameters.
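Under the simplifying assumption that intensions are sets of features and extensions sets of instances, the four steps can be sketched in a few lines of Python; the catalog and all function names are invented for illustration.

# Sketch only: the four steps of an intension/extension search iteration.
def extension_of(intension, catalog):
    """Intension -> extension: instances exhibiting all requested features."""
    return {item for item, features in catalog.items() if intension <= features}

def intension_of(instances, catalog):
    """Extension -> intension: features shared by all given instances."""
    feature_sets = [catalog[i] for i in instances]
    return set.intersection(*feature_sets) if feature_sets else set()

def generalize(intension, feature):
    """Extend a description to widen the match (pizzerias -> Italian restaurants)."""
    return intension - {feature}

def refine(instances, predicate):
    """Trim instances to better fit request and context (e.g. by neighborhood)."""
    return {i for i in instances if predicate(i)}

# Usage with a toy catalog:
catalog = {
    "Alberto's Pizza": {"restaurant", "italian", "pizzeria", "village"},
    "Trattoria Roma":  {"restaurant", "italian", "village"},
    "Chez Marcel":     {"restaurant", "french"},
}
hits = extension_of({"pizzeria"}, catalog)                    # step 1
label = intension_of(hits, catalog)                           # step 2
wider = extension_of(generalize(label, "pizzeria"), catalog)  # step 3
local = refine(wider, lambda i: "village" in catalog[i])      # step 4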

The benefit of those distinctions is to introduce explicit decision points with regard to the reference models guiding the searches. Depending on purpose and context, such models could be:

  • Inclusive: can be applied to any kind of search.
  • Semantic: can only be applied to circumscribed domains of knowledge. That’s the aim of the semantic web initiative and the Web Ontology Language (OWL).
  • Organizational: can only be applied within specific institutional or business contexts. They could be available to all or through services with restricted access and use.

From Meanings to Things, and back

The stunning performance of modern search engines comes from a combination of brawn and brains, the brainy part for grammars and statistics, the brawny one for running heuristics on gigantic repositories of linguistic practices and web searches. Moreover, that performance improves “naturally” with the accumulation of data pertaining to virtual events and behaviors. Nonetheless, search engines have grey or even blind spots, and there may be a downside to the accumulation of social data, as it may widen the gap between social and corporate knowledge, and consequently degrade the coherence of outcomes.

Meanings can be inclusive, semantic, or specific to organizations


That can be illustrated by a search about Amedeo Modigliani:

  • An inclusive search for “Modigliani” will use heuristics to identify the artist (a). An organizational search for a homonym (e.g a bank customer) would be dealt with at enterprise level, possibly through an intranet (c).
  • A search for “Modigliani’s friends” may look for the artist’s Facebook friends if kept at the inclusive level (a1), or switch to a semantic context better suited to the artist (a2). The same outcome would have been obtained with a semantic search (b).
  • Searches about auction prices may be redirected or initiated directly, possibly subject to authorization (c).


Knowledge Based Model Transformation

September 8, 2013

Objective

Even when code is directly developed from requirements, software engineering can be seen as a process of transformation from a source model (e.g requirements) to a target one (e.g program). More generally, there will be intermediate artifacts with development combining automated operations with tasks involving some decision-making. Given the importance of knowledge management in decision-making, it should also be considered as a primary factor for model transformation.


What has been transformed, added, abstracted? (Velazquez-Picasso)

Clearing Taxonomy Thickets

Besides purpose, model transformation is often considered with regard to automation or abstraction level.

Classifying transformations with regard to automation is redundant at best, inconsistent and confusing otherwise: on one side of the argument, assuming a “manual transformation” entails some “intelligent design”, it ensues that it cannot be automated; on the other side, if manual operations can be automated, the distinction between manual and automated is pointless. And in any case a critical aspect has been ignored, namely the added value of a transformation.

Classifying transformations with regard to abstraction levels is also problematic because, as famously illustrated above, there are too many understandings of what those levels may be. Consider for instance analysis and design models: they are usually meant to deal respectively with system functionalities and implementation. But it doesn’t ensue that the former is more abstract than the latter, because system functionalities may describe low-level physical devices, and system designs may include high-level abstract constructs.

A Taxonomy of Purposes

Depending on purpose, model transformation can be set in three main categories (sketched in code below):

  • Translation: source and target models describe the same contents with different languages.
  • Rephrasing: improvement of the initial descriptions using the same language.
  • Processing: source and target models describe successive steps along the development process. While they may be expressed with different languages, the objective is to transform model contents.

Rephrasing, Translation, Processing.
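A compact way to fix the taxonomy is an enumeration with a classification rule; the names and the decision criteria (languages compared, contents change) are assumptions drawn from the definitions above.

# Sketch only: the three purposes of model transformation.
from enum import Enum

class Purpose(Enum):
    TRANSLATION = "same contents, different language"
    REPHRASING = "improved description, same language"
    PROCESSING = "transformed contents along the development process"

def classify(src_lang: str, trg_lang: str, contents_change: bool) -> Purpose:
    """Derive a transformation's purpose from what it keeps and what it changes."""
    if contents_change:
        return Purpose.PROCESSING
    return Purpose.TRANSLATION if src_lang != trg_lang else Purpose.REPHRASING

print(classify("UML", "SQL", contents_change=False))  # Purpose.TRANSLATION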

Should those different purposes be achieved using common mechanisms, or should they be supported by specific ones?

A Taxonomy of Knowledge

As already noted, automated transformation is taken as a working assumption and the whole process is meant to be governed by rules. And if purpose is used as the primary determinant, those rules should be classified with regard to the knowledge to be considered:

  • Syntactic knowledge would be needed to rephrase words and constructs without any change in content or language.
  • Thesaurus and semantics would be employed to translate contents into another language.
  • Stereotypes, Patterns, and engineering practices would be used to process source models into target ones.

Knowledge with regard to transformation purposes

On that basis, the next step is to consider the functional architecture of transformation, in particular with regard to supporting knowledge.

Procedural vs Declarative KM

Assuming transformation processes devoid of any decision-making, all relevant knowledge can be documented either directly in rules (procedural approach) or indirectly in models (declarative approach).

To begin with, all approaches have to manage knowledge about language principles (ML) and the modeling languages used for source (MLsrc) and target (MLtrg) models. Depending on the language, linguistic knowledge can combine syntactic and semantic layers, or distinguish between them.


Meta-models can be structured along linguistic layers or combine syntax and semantics.

Given the exponential growth of complexity, procedural approaches are better suited for the processing of unstructured knowledge, while declarative ones are the option of choice when knowledge is organized along layers.

When applied to model transformation, procedural solutions use tables (MLT) to document the mappings between language constructs, and programs to define and combine transformation rules (a toy mapping table is sketched below). Such approaches work well enough for detailed rules targeting specific domains, but fall short when some kind of generalization is needed; moreover, in the absence of distinctions at language level (see UML and users’ concerns), any contents differentiation in source models (Msrc) will be blurred in target ones (Mtrg).

Transformation Policies: Procedural vs Declarative

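A toy mapping table illustrates the procedural approach and its limit; the constructs and languages are invented examples, not an actual MLT.

# Sketch only: a procedural mapping table (MLT) between language constructs.
MLT = {
    ("UML", "SQL", "class"): "table",
    ("UML", "SQL", "attribute"): "column",
    ("UML", "SQL", "association"): "foreign key",
}

def transform(construct: str, src: str = "UML", trg: str = "SQL") -> str:
    """Look up the target construct; unmapped constructs are exactly where
    procedural approaches fall short of generalization."""
    key = (src, trg, construct)
    if key not in MLT:
        raise NotImplementedError(f"no rule for {src}:{construct} -> {trg}")
    return MLT[key]

print(transform("class"))  # table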

And filtering out noise appears to be the main challenge of model processing, especially when even localized taints or ambiguities in source models may contaminate the whole of target ones; hence the benefit of setting apart layers of source models according to their nature and reliability.

That can be achieved with the modularity of declarative solutions, with transformations carried out separately according to the nature of the knowledge involved: syntactic, semantic, or engineering. While declarative solutions also rely on rules, those rules can be managed according to:

  • Purpose: rephrasing, translation, or development.
  • Knowledge level: rules about facts, categories, or reasoning.
  • Confidence level: formal, heuristic, fuzzy, modal.

That makes declarative solutions easier to assess, and therefore to improve, as sketched below.
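The three axes can be made explicit in the rules themselves. In the sketch below the Rule record, the confidence ordering, and the selection function are illustrative assumptions, meant only to show how layered rules remain open to assessment.

# Sketch only: declarative rules classified by purpose, knowledge, and confidence.
from dataclasses import dataclass
from typing import Callable

CONFIDENCE = ["modal", "fuzzy", "heuristic", "formal"]  # assumed: lowest to highest

@dataclass(frozen=True)
class Rule:
    purpose: str     # rephrasing | translation | development
    knowledge: str   # facts | categories | reasoning
    confidence: str  # one of CONFIDENCE
    apply: Callable[[str], str]

RULES = [
    Rule("rephrasing", "facts", "formal", lambda m: m.strip()),
    Rule("translation", "categories", "heuristic", lambda m: m.lower()),
]

def select(rules, purpose=None, min_confidence="modal"):
    """Pick the rules relevant to a purpose, at or above a confidence level;
    each layer can then be assessed and improved separately."""
    floor = CONFIDENCE.index(min_confidence)
    return [r for r in rules
            if (purpose is None or r.purpose == purpose)
            and CONFIDENCE.index(r.confidence) >= floor]

print(len(select(RULES, purpose="rephrasing", min_confidence="heuristic")))  # 1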

Assessment

Advances in model transformation call for sound assessment, and that can be done with regard to usual knowledge management capabilities:

  • Primitive operations: how to create, read, update, or delete (CRUD) transformations.
  • Transparency: how to reason about transformations.
  • Traceability: how to record the history of performed transformations and chart the effects of hypothetical ones.
  • Reliability: built-in consistency checks, testability.
  • Robustness: how to deal with incomplete, noisy, or inconsistent models.
  • Modularity, reusability: how to customize, combine, or generalize transformations.

In principle declarative approaches are seen as superior to procedural ones, due to the benefits of a clear-cut separation between the different domains of knowledge on one hand, and the programs performing the transformations on the other hand:

  • The complexity of transformations increases exponentially with the number of rules, independently of the corpus of knowledge.
  • The ensuing spaghetti syndrome rapidly downgrades transparency, traceability, and reliability of rules which, in procedural approaches, are the only gate to knowledge itself.
  • Adding specific rules in order to deal with incomplete or inconsistent models significantly degrades modularity and reusability, the impact being compounded by new rules having to be added for every variant.

In practice the flaws of procedural approaches can be partially curbed provided that:

  • Rules apply to a closely controlled domain.
  • Models are built with complete, unambiguous and consistent languages.

Both conditions can be met with domain specific languages (DSLs) and document based ones (e.g XML) targeting design and program models within well-defined business domains.

Compared with the diversity and specificity of procedural solutions, the extent of declarative ones has been checked by their more theoretically demanding requirements, principally met by relational models. Yet, the Model Driven Architecture (MDA) distinction between computation independent (CIMs), platform independent (PIMs), and platform specific (PSMs) models could provide the basis for a generalization of knowledge-based model transformation.


