Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (Map of Posts) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts left open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert allow for a concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

Use Cases are Agile Tools

November 26, 2015


Use cases are often associated with the whole of UML diagrams, and consequently with cumbersome procedures and excessive overheads. But like cats, use cases can be focused, agile, and versatile.

Use Case: agile, simple-minded, with a side view (P. Picasso)

Simple minded, Robust, Easy going

As initially defined by Ivar Jacobson, the aim of use cases is to describe what happens between users and systems. Strictly speaking, that means a primary actor (possibly seconded) and a triggering event (possibly qualified), which may (e.g. transaction) or may not (e.g. batch) be followed by some exchange of messages. That header may (will) not be the full story, but it provides a clear, simple, and robust basis:

  • Clear and simple: while conditions can be added and interactions detailed, they are not necessary parts of the core definition and can be left aside without impairing it.
  • Robust: the validity of the core definition is not contingent on further conditions and refinements. It ensues that simple and solid use cases can be defined and endorsed very soon, independently of their further extension.

As a side benefit, use cases come with a smooth learning curve that enables analysts to become rapidly skilled without necessarily being experts.

Open minded and Versatile

Contrary to a somewhat short-sighted perspective, use cases are not limited to users because actors (aka roles) are meant to hide the actual agents involved: people, devices, or other systems. As a consequence, the scope of UCs is not limited to dialog with users but may also include batch processing (as one-step interactions) and real-time transactions.

Modular and Inter-operable

Given their simplicity and clarity of purpose, use cases can be easily processed by a wide array of modeling tools on both sides of the business/engineering divide, e.g. BPM and UML. That brings significant benefits for modularity.

At functional level use cases can be used to factor out homogeneous modules to be developed by different tools according to their nature. As an example, shared business functions may have to be set apart from business-specific scenarii.

Use cases at the hub of UML diagrams

Use cases can be easily combined with a wide range of modeling tools

At technical level, interoperability problems brought about by update synchronization are to be significantly reduced simply because modules’ core specifications (set by use cases) can be managed independently.


Given their modularity, use cases can be easily tailored to the iterative paradigm. Once a context is set by a business process or user story, development iterations can be defined with regard to:

  • Invariants (use case): primary actor and triggering event.
  • Iterations (scenarii): alternative execution paths identified by sequences of extension points representing the choices of actors or variants within the limits set by initial conditions.
  • Backlog units: activities associated to segments of execution paths delimited by extension points.
  • Exit condition: validation of execution path.

Execution paths & Development cycles
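By way of illustration, here is a minimal sketch (in Python, with hypothetical names and a naive validation rule, not a prescribed format) of how a use case could tie those elements together:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class UseCaseInvariants:
    """Invariants: primary actor and triggering event, endorsed up-front."""
    primary_actor: str
    triggering_event: str

@dataclass
class BacklogUnit:
    """Activity associated with a segment of an execution path,
    delimited by extension points."""
    activity: str
    from_point: str
    to_point: str
    done: bool = False

@dataclass
class Scenario:
    """One iteration: an execution path identified by its sequence of
    extension points (actors' choices or variants)."""
    extension_points: List[str]
    backlog_units: List[BacklogUnit] = field(default_factory=list)

    def exit_condition(self) -> bool:
        # Exit condition: the execution path is validated once all of its
        # backlog units are done.
        return all(unit.done for unit in self.backlog_units)

@dataclass
class UseCase:
    invariants: UseCaseInvariants
    scenarios: List[Scenario] = field(default_factory=list)

# Iterations are added as the understanding of variants improves, while the
# invariants remain untouched.
uc = UseCase(UseCaseInvariants("Customer", "Order placed"))
uc.scenarios.append(Scenario(
    extension_points=["payment accepted", "stock available"],
    backlog_units=[BacklogUnit("check payment", "start", "payment accepted"),
                   BacklogUnit("allocate stock", "payment accepted", "stock available")]))
print(uc.scenarios[0].exit_condition())  # False until both units are done
```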


Last but not least, use cases provide a sound and pragmatic transition between domain-specific stories and architectural features. Taking a leaf from the Scaled Agile Framework, business functions can be factored out as functional features represented by shared use cases whose actors stand for systems and not end users.

Features are to be supported by systems functionalities that may or may not be implemented as services.

Depending on targeted architectures, those features may or may not be implemented by services.

Further Reading

External Links

Quality Circles

November 11, 2015

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering it would mean code, users’ experience, and operations, each with its own specific stakeholders and criteria.

A bird’s-eye view on quality circles (Jonathan Monk)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need for a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e. factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for process improvement remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, product traceability, model reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.
Assessment at each level may be conditioned by lower levels

In any case (i.e. with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g. how to assess insurance premiums or compute missile trajectories.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.
Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g. reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed so as to eliminate some uncertainties (e.g. reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g. unexpected regulatory change or delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defects or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g. resource consumption, late delivery) against the cost of preventive (e.g. more systematic checks on consistency, additional staff) or corrective (e.g. tests or maintenance) measures.
Quality management should be set at the nexus between risk management and quality assurance.

That puts quality management at the nexus between risk management and quality assurance.

Further Readings

Where to Begin with EA

November 4, 2015

Whereas EA is a broadly recognized practical concern, there isn’t much of a consensus about it as a discipline. Hence the interest in figuring it out from practice.

Separate actual and symbolic realms (Marc Riboud)

Be Specific

Compared to the abundance of advice about EA management, there is a dearth of specifics about what is to be managed, apart from the Zachman framework. So the best approach is to begin with actual practice and try to characterize the specifics of what is actually done.

Separate Structures from Processes

Architecture is about shared assets whose life cycle is not limited to specific activities. Hence the need to set apart processes, which have to change depending on business environments and opportunities, from structures (e.g. organization and systems), whose life cycle is meant to be set along a corporate time frame.

Separate Symbolic from Actual

To be of any use EA has to rely on some consensus regarding what is to be managed, and by whom. That can only be achieved if some distinction is kept between symbolic descriptions (the equivalent of blueprints) of information and processes on the one hand, and actual objects (e.g. legacy systems) or activities on the other hand.

Add Time Frames

At the end of the day success will be decided by the fruitful combination of enterprise assets (financial, physical, logical, human) and business context and objectives. Defining their respective life cycles and planning the necessary changes could be seen as the primary responsibility of enterprise architects.

Add Responsibilities

Last and least, allocating responsibilities is probably better carried out on a case by case basis depending on each organization and corporate culture.

That’s All

Those few principles may seem unassuming but they provide a sound basis that takes full advantage of what staff and management know about their enterprise resources and practices. And never forget that continuity is a critical factor of EA success.

Further Reading

Data Mining & Requirements Analysis

October 24, 2015


Data mining explores business opportunities and competitive advantage, while requirements analysis describes supporting applications. Both use models: the former’s are predictive and ephemeral, the latter’s descriptive (or prescriptive) and perennial.

Data mining: sorting business wheat from world chaff (Andreas Gursky)

Understanding how they are related could significantly improve processes maturity.

Data vs Requirements Analysis

Nowadays the success of a wide range of enterprises critically depends on two achievements:

  1. Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up to date information.
  2. Putting that information into effective use through their business processes and supporting systems.
Models from data analysis (left) and systems requirements (right)

Those challenges are converging: under the pressure of market forces and technological advances most of the traditional fences between business channels and IT systems are crumbling, putting the focus on the functional integration between data mining and production systems. How the latter is expected to feed the former has been the bread and butter of good corporate governance for some time, but there has been less interest in the opposite flow, namely how data analysis could “inform” business requirements.

From Data to Information

Facts are not given but must be captured through a symbolic description of actual observations. That entails some observer set on the task using a mix of conceptual and technical apparatus. Data mining and requirements analysis are practical realizations of that process:

  • Data mining relies on analytic tools to extract revealing information that could be used to chart opportunities along business models.
  • Requirements analysis relies on business processes and users’ practice to extract symbolic descriptions that will be used to build models of supporting applications.

If both walk the path from data to information, their objectives are different: the former’s is to improve business decisions by making sense of actual observations; the latter’s is to build system surrogates from the symbolic descriptions of actual business objects and activities.

Anchors & Structures: Plasticity of Business Entities

Perhaps paradoxically, business agility calls for terra firma because nimble trades must be rooted in corporate identity and business continuity. As a consequence, the first step of requirements analysis should be to associate individual business objects or activities with stable and consistent identification mechanisms, and to group them with regard to that mechanism:

  • External entities with natural (person) or designed identity (car).
  • Symbolic entities for roles (customer) or commitments (maintenance contract).
  • Actual activities (promotion campaign) and events (sale) or business logic (promotion).


Conversely, as the aim of data analysis is to explore every business angle, individual observations are supposed to be moved across groups; yet, since the units identified by data analysis will have to be aligned with the ones described by requirements analysis, moves must also keep track of identities. That dilemma between continuity of identified structures on one side, plasticity of functional aspects on the other side, can be illustrated by banks which, in response to marketing requirements, had to shift from account (internal identification) to customer (external identification) based systems.

From account (left) to customer (right) centered systems

It’s easier to market insurance from customer centered systems (right) than from account centered ones (left)

That challenge can be overcome by linking the identification of symbolic entities to external anchors.
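A minimal sketch of that anchoring scheme (Python; identifiers and segment names are hypothetical): the symbolic surrogate keeps an internal identity plus an external anchor, so functional regroupings can change freely without losing track of identities:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalAnchor:
    """Identity rooted outside the system, e.g. a person or a registered car."""
    scheme: str   # e.g. "national-id", "vehicle-registration"
    value: str

@dataclass
class SymbolicEntity:
    """System surrogate (e.g. a customer role) anchored to an external identity."""
    surrogate_id: str        # internal identification, e.g. an account number
    anchor: ExternalAnchor   # stable external identification
    segment: str             # functional grouping, free to change over time

# Shifting from account-centered to customer-centered marketing only re-labels
# the functional segment; the external anchor preserves identity continuity.
customer = SymbolicEntity("ACC-001", ExternalAnchor("national-id", "X123"), "savings")
customer.segment = "insurance-prospect"
print(customer.anchor.value)  # identity unchanged across regroupings
```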

Profiles & Features: Versatility of Business Opportunities

As noted above, requirements and data analysis are set on the same road but driven by different forces: the former tries to group individuals with regard to identification mechanisms before fleshing them out with relevant features; the latter tries to group individuals with given identities according to features and opportunity profiles. Yet, what could appear as collision courses may become a meeting of minds if both courses are charted with regard to variants analysis.

From the requirements perspective the primary concern is to distinguish between structural and functional variants:

  • Structural variants are bound to identities, i.e. set up-front for the respective life-cycle of individual business objects or transactions. As a consequence they cannot be changed without undermining business continuity. Moreover, being part and parcel of descriptors (e.g. types and use cases), their change will affect engineering processes.
  • Functional variants may vary during the respective life-cycle of individual business objects or transactions. As a consequence they can be changed without undermining business continuity, and changes in descriptors (e.g. partitions and scenarii) can be managed without affecting engineering processes.

From the data mining perspective the objective is to improve the benefits of information systems for decision-making processes:

  • Static: how to classify individuals so as to reduce the uncertainty of predictions.
  • Dynamic: how to classify business options so as to reduce the uncertainty of decisions.

Since those objectives are set for individuals, constraints on continuity and consistency can be dealt with independently of the description of symbolic surrogates.

Identified individuals with profiles for customers (a), their behaviors (b), and promotional gestures (c)

It ensues that perspectives can be adjusted by factoring out the constraints of continuity and consistency for business objects (e.g. cars), agents (e.g. customers), and processes (e.g. repairs). Profiles for agents (a), behaviors (b), and business options (c) could then be freely explored and tailored with regard to changes in business environment and objectives.

Applying Data Analysis to Requirements

Not surprisingly data analysis techniques can be used to adjust perspectives. For that purpose a sample of individuals (business objects and operations) representing the population targeted by requirements would have to be submitted to basic mining routines. Borrowing a catalog from F. Provost & T. Fawcett:

  1. Classification: estimates the probability for each individual (objects or operations) to belong to a set of classes; can be used to assess the closeness of the variants (respectively power-types or execution paths) identified by requirements analysis.
  2. Regression: reverse classification; estimates how much of individual feature valuations can be explained by the proposed classifications.
  3. Similarity: a shallow version of classification; can be used to assess the distance between variants and consolidate the proposed classifications.
  4. Clustering: a deep version of classification; can be used to distinguish between shallow and natural classifications.
  5. Co-occurrence: deals with behavioral variants; can be used to distinguish between functional and structural classifications.
  6. Profiling: reverse of co-occurrence; can be used to consolidate functional and structural classifications.
  7. Link prediction: can be used to define relationships.
  8. Data reduction: eliminates redundant individuals; can be used to consolidate requirements and refine test scenarii.
  9. Causal modeling: brings together business logic (events and rules) and users’ decisions; should provide the backbone of test scenarii.

Besides the direct benefits for requirements, such procedures may help to bridge the gap between data and requirements analysis and significantly improve processes’ capability and maturity levels.
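For instance, routines 3 and 4 of the catalog could be tried on such a sample; the sketch below assumes scikit-learn and a hypothetical feature encoding, and simply compares the clusters suggested by the data with the variants proposed by analysts:

```python
# A minimal sketch (assuming scikit-learn; data and encoding are hypothetical):
# cluster a sample of individuals and check how well the clusters match the
# variants (e.g. power-types or execution paths) proposed by requirements analysis.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Each row encodes one sampled individual (business object or operation)
# by a handful of observed features.
samples = np.array([
    [1, 0, 3.5], [1, 0, 3.7], [0, 1, 1.2],
    [0, 1, 1.0], [1, 1, 2.4], [1, 1, 2.6],
])

# Variants assigned by analysts during requirements analysis.
proposed_variants = [0, 0, 1, 1, 2, 2]

# Routine 4 (clustering): let the data suggest its own "natural" classification.
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(samples)

# Routine 3 (similarity): closeness between the proposed classification and the
# one induced by the data (1.0 = perfect agreement, around 0 = chance level).
print("agreement:", adjusted_rand_score(proposed_variants, clusters))
```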

Business Objectives & Enterprise Architecture Capabilities

Data mining being first and foremost about competitive edge, it relies on a timely and effective coupling between enterprise capabilities and business opportunities. But the dilemma between continuity and plasticity described above for business objects and processes reappears at enterprise level: how to reconcile architecture, by nature perennial, with the agility needed to make the best of changing and competitive environments?

As an architectural big bang is arguably a last-resort option, answers to that question must be progressive and local: if changes are to be swift and pertinent they must be both circumscribed and leveraged to the relevant parts of architecture. Taking an (amended) leaf from the Zachman framework, its sixth column (“Why”) could be reset as a line for business and operational objectives that would cross the original five columns instead of the architecture layers. Using a pentagonal representation of enterprise architecture, that line would be set on the outer range.


Enterprise Architecture and the loci of change

It must be noted that setting objectives on a line crossing the columns of capabilities, instead of a column crossing the lines of layers, means that objectives are set at enterprise level and their cascading impact traced and managed through layers.

Symbolic Systems vs World

Nowadays the life of enterprises fully depends on the ability of their systems to deal with their environment by making sense of data and supporting production systems. As long as environments were a hotchpotch of actual and symbolic artifacts the pros and cons of integration could be balanced. But the generalization of digital facts and transactions has upended the balance: there is no more room or time for latency and enterprises must unify the symbolic representation of their business models, organization, and computer systems.

Selected Readings

EA Frameworks: Non Negotiable Features

October 15, 2015

Frameworks are meant to abet the design and governance of enterprises’ organization and systems, not to add any methodological layer of complexity. If that entry level is to be attained, preconditions are to be checked for comprehensiveness, modularity, clarity of principle, and consistency.

Some features are not negotiable (André Kertesz)

Meeting these requirements will in turn greatly facilitate declarative and iterative approaches to enterprise architecture.


Comprehensiveness

The primary objective of an enterprise framework is to bring different contexts and concerns (business, technical, organizational) under a common management roof, and to synchronize their respective time-frames. That can only be achieved through an all-inclusive and unified conceptual perspective.

Suggested check: Variants for core concepts like agents or events must be clearly defined at enterprise and system levels; e.g. people (agents with identity and organizational status), roles (organization and access to systems), and bots (software agents without identity and organizational status).


Modularity

On the one hand enterprise frameworks must deal with strategic issues without being sidetracked by enterprise idiosyncrasies. On the other hand swift and specific adaptations to changing environments should not be hampered by cumbersome procedures or a steep learning curve. That can only be achieved by lean and versatile frameworks built from a clear and compact set of architecture artifacts, to be readily extended, specialized, or implemented through the enactment of dedicated processes.

Suggested check: How a framework is to further the development of a new business, facilitate the merging of organizations, or support the transition to a new architecture (e.g. SOA).

Clarity of Principle

Comprehensiveness and modularity are pointless without a principled backbone supporting incremental changes and a smooth learning curve. For that purpose a clear separation should be maintained between the semantics of the core patterns used to describe architectures and the processes to be carried out for their evolution.

Suggested check: The meaning of primary terms (event, role, activity, etc.) should be uniquely and unambiguously defined based on the core framework principles, independently of the processes using them.


Consistency

EA frameworks should be more compass than textbook, drawing clear lines of action before providing details of implementation. Lest architects be lost in compilations of ambiguous or overlapping definitions and rules, core meanings must remain unaffected when put to use across the framework.

Suggested check: Carry out a comprehensive search for a sample of primary terms (e.g. event, role, activity, etc.), list and compare the different (if any) definitions, and verify that they can be boiled down to a unique and unambiguous one.

EA & Model Based Systems Engineering

These basic requirements get their full meaning when set in the broader context of EA evolution. Contrary to their IT component, enterprise architectures cannot be reduced to planned designs but grow from a mix of organization, culture, business environments, technology constraints, and strategic planning. As EA evolution is by nature incremental, supporting frameworks should provide for iterative development based on declarative knowledge of their organizational or technical constituents. That could be achieved by combining EA with model based systems engineering.

Further Readings


How to Begin with Requirements

October 5, 2015

Despite being a recurring topic of discussion, a comprehensive and formalized approach to requirements should not hold back newcomers: given that requirements are the necessary genesis of any project, nothing can be assumed about their emergence; as a consequence they are better dealt with as they come.


The Genesis of Meanings (Quiché Maya’s view of creation)

For that purpose, and whatever the preferred approach, requirements analysis can be neatly described as an iterative process built from three basic increments: identified individuals, associated features, and classifications.


Individuals

When entering unknown territory, the first thing to do is to set apart recognizable items. Regarding requirements those would be individuals whose identities have to be managed. Such individuals would then be further divided between:

  • Objects (persistent identities) and activities (transient identities).
  • Activities (managed duration) and events (no managed duration).
  • Agents (physical identities) and roles (social identities).
  • Actual (physical identities) and symbolic (social identities) objects.

It must be noted that, except for new standalone applications, most individuals may already have been defined by existing functional architectures.


Features

Contrary to individuals, features have no identity of their own. As a corollary, they can only be considered as possible attachments to individuals. On that basis, analysts have to answer three basic questions:

  • Can a feature be shared or transferred (functional), or is it bound to the same individual (structural)?
  • Does it reflect a state of affairs (attribute) or represent a capability (operation)?
  • Is there some overlap with already known features?

The last question brings about the core of requirements analysis, i.e how to deal with variants, consolidate applications, and manage changes.


Classifications

Categories and rules are the dual facets of requirements purpose, namely how to classify objects and operations and deal with variants.

  • Categories take a declarative approach that relies on static classifications to describe variants. Starting with partitions, analysts will have to distinguish between structural variants (associated with specific features) and functional ones (associated with the state of the same set of features).
  • Rules take a procedural approach with the equivalent of dynamic classifications to deal with variants. As for categories, analysts will have to decide between the equivalent of structural variants (rules to be enforced for the whole of execution paths) and functional ones (rules to be evaluated along execution paths).

Whereas each option may have its benefits depending on the nature of variants, the primary factor when selecting a scheme should be its consistency, on the one hand with regard to existing applications, on the other hand between the descriptions of new objects and activities.
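The distinction can be illustrated by a small sketch (Python; the account example and its rule are hypothetical): a structural variant is set up-front as a subtype with its own features, whereas a functional variant is handled by a rule evaluated against the state of shared features along execution paths:

```python
from dataclasses import dataclass

# Structural variant (category): set up-front, bound to identity, with
# specific features; changing it would affect descriptors and engineering.
@dataclass
class Account:
    balance: float

@dataclass
class PremiumAccount(Account):
    credit_line: float   # feature specific to the structural variant

# Functional variant (rule): evaluated along execution paths against the
# state of features; it can change without affecting descriptors.
def overdraft_allowed(account: Account) -> bool:
    if isinstance(account, PremiumAccount):
        return account.balance + account.credit_line > 0
    return account.balance > 0

print(overdraft_allowed(Account(balance=-50.0)))                          # False
print(overdraft_allowed(PremiumAccount(balance=-50.0, credit_line=200)))  # True
```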

Further Readings

A personal view of SysML Block Diagrams

September 17, 2015


SysML is a UML reconfiguration (subset, extensions, and profile) dedicated to systems modelling. The introduction of blocks as the basic unit of systems description can be seen as its pivotal change, in particular when block diagrams are used as substitutes for class ones.


Blurred block (Christo)

But the generalized use of block diagrams may raise a few questions: should their scope be limited? What kind of systems are targeted? What kind of models are to be built? And what are the benefits for modelling processes?

Scope: Actual Objects & UML blind spot

The description of instances has always been a well-known weakness of UML. To be sure, object diagrams were part of UML’s initial specifications; yet they have been jettisoned due to their lack of clear definition and purpose, and finally “overlooked” in the last versions of UML.

That neglect has left a blind spot in the UML modeling apparatus: while component and deployment diagrams are available to deal with logical and physical instances at the end of the engineering process, UML 2.5 has nothing equivalent for the context “as-it-is” at the inception of the development process.

So, SysML’s new focus on physical systems offers a worthwhile opportunity to make up for the disappointments of object diagrams, introducing the block diagram as the tool of choice for requirements specifications relative to existing systems and environments, alongside class diagrams for the definition of their logical counterparts, and component and deployment diagrams for logical and physical implementations.

Such a restricted focus on physical instances would meet the parsimony principle (aka Occam’s razor) and would provide for continuity with UML and clarity of semantics and purposes.

Blocks can be used all along, for physical as well as logical components

Actual objects: a blind spot for UML, a blurred one for SysML

But that opportunity is to be missed if, instead of focusing on the blind spot of requirements with regard to actual configurations, block diagrams are used indiscriminately for instances and classes, blurring the critical distinction between physical and logical components, and introducing overlaps and semantic ambiguities across diagrams.

Target: Mechanical vs Logical Systems

Whereas SysML clearly suits the needs of mechanical and civil engineering, its preferred hierarchical approach may fall short when systems cannot be reduced to equipment built from hardware and software.

Nowadays, with software colonizing every nook and cranny of actual and virtual worlds (soon to be known as the Internet of Things), a broader systemic approach may be necessary, with systems modeled as collaborative frameworks between agents, devices, and computer systems.


Systemic designs are better driven by functional architectures

Being by nature logical, distributed, heterogeneous, and in continuous mutation, these systems are not easily described by hierarchies of physical components. And even when blocks can be used to represent hardware and software, separately or jointly, block diagrams tend to privilege trees of physical components intertwined by logical or functional associations.

That bias in favor of physical architectures may put functional ones in the passenger seat; as a consequence, critical software alternatives may be shrouded, bypassed, or emerge at lower levels of composition; too late and too local for sound architectural decision-making.

Models: Requirements vs Design

The bias introduced by block diagrams may be of no consequence as far as SysML is used for requirements specifications. But that may not be the case if it is also used for design, especially if block diagrams are used all along without a transition being clearly marked between requirements and design.

At issue here is the qualitative and quantitative edge of software capabilities over hardware ones: a prejudice for block hierarchies could compromise the plasticity (ability to change architecture without affecting performances) and versatility (ability to perform different activities without changing the architecture) of systems designs.

Moreover, apart from being inherently more rigid than networks, block hierarchies are built along a single dimension combining logical and physical features. The overall plasticity of such hierarchies is to be constrained by the plasticity of their physical elements, and their versatility will be similarly limited by hardware constraints on software interfaces.

Finally, as the semantics of ports and flows used to define communication between blocks don’t force an explicit typed distinction between hardware and software, chances are for mixed designs to limit systems modularity. That points to the shortcomings of untyped modeling constructs.

Processes: Overheads of Untyped Constructs

Paradoxically for a supposedly specific profile, SysML’s blocks can be seen as untyped constructs to the extent that they apply to both physical and logical objects, as well as constraints.

Like their programming counterparts, untyped modeling constructs induce ambiguities which have to be clarified by comments or additional constructs. That is where SysML requirements diagrams come into view.

To begin with, requirements are meant to be the source of engineering processes before anything can be assumed with regard to their format or modeling constructs.

Given that premise, usually accepted by leading tool providers, they are to be expressed through text; the very notion of “requirements diagram”, which suggests some formalism, may sound like an oxymoron, and “requirements editor” would be less confusing.

Alternatively, e.g. for scientific applications, if that assumption can be dropped, requirements can be directly captured using formal expressions and constraints.

Whereas SysML supports both options, the use of untyped blocks may beget redundancy and confusion. Since block diagrams can be used for (physical) targets as well as for (logical) constraints, three policies can be considered:

  • Text: requirements are documented with their own diagram and linked to physical blocks (a).
  • Constraint: requirements are represented by logical blocks linked to physical ones (b).
  • Text and constraint: requirements are documented with their own diagram and linked to constraints represented by logical blocks, themselves linked to physical blocks (c).
Three ways to document requirements: (a) text, (b) constraint, (c) text and constraint.

An embarrassment of riches may be especially challenging when the problem is to bring some order to unstructured and ambiguous inputs. In the case of requirements traceability, adding layers of dependencies may rapidly result in an acute “spaghetti syndrome”.

Moreover, the apparently benign choice of diagrams may compound the risks on design mentioned above by masking the critical transition from requirements (documented as such or by constraints) to architectures (as described by physical blocks).


The proper use of block diagrams has to make up for their being simultaneously too restrictive and too permissive.

With regard to structures (and consequently architectures) block diagrams strongly lean towards physical hierarchies; to quote: “blocks provide a general-purpose capability to model systems as trees of modular components.”

With regard to semantics blocks put no limit on what can be represented; to quote: “The specific kinds of components, the kinds of connections between them, and the way these elements combine to define the total system can all be selected according to the goals of a particular system model.”

Since, to quote: “SysML blocks can be used throughout all phases of system specification and design, and can be applied to many different kinds of systems”,  no wonder block diagrams tend to put class diagrams and their sound and consistent semantics to the margins.

These pitfalls can be avoided by using SysML as a typical UML profile, in particular by limiting the use of block diagrams to the modelling of systems as-they-are, and using standard UML2 diagrams with SysML profile and stereotypes otherwise.

PS. SysML 1.4 appears to make amends by deprecating or clarifying some redundant or confusing elements with regard to UML.

Further Readings

External Links

Agile Delivery & Dynamic Programming

September 7, 2015


Business driven development is arguably the cornerstone of the agile development model. On one side it means business value set by users and stakeholders, on the other side it entails continuous and just-in-time delivery; what happens in-between is set by backlogs (for development) and product increments (for delivery).

Where individual motives blend into collective rationality (Alex Prager)

Continuous releases and discrete delivery  (Alex Prager)

A sanguine understanding of continuous releases may assume that planning is no longer relevant, and that deployment can be carried out “on-the-fly”. But that would assume that stakeholders and product owners are ready to put aside roadmaps, overlook milestones, and more generally forget that time is money. That would go against a basic objective of agile, namely that developments must be driven by business needs, and products delivered just-in-time for best value. Given the well-established track record of dynamic programming for manufacturing processes, could the technique be usefully applied to agile engineering processes?

Delivery, Deployment, & Continuity

Continuity doesn’t mean synchronization: business, engineering, and operations are governed by different concerns set along different time-frames. Some buffering is needed, materialized by the distinction between releases (engineering concerns) and deployment (business and operational concerns).

A time for every purpose: Epics & business roadmap (a), associated backlogs of users’ stories (b), released features (c), architectural capabilities (d), deployed components (e).

Such distinctions introduce both overlapping (with business time-frame) and discontinuity (between development and deployment):

  • Product roadmaps are set in business time-frames and determine development and deployment time-frames.
  • Development time is set by product roadmaps and runs clockwise from project inception to software releases (a>b>c).
  • Deployment time is also set relative to product roadmaps but it runs counterclockwise, from product deployment as planned by business back to released software components (c<e).

Development and deployment runs can be compared to crews tunneling through a mountain from both sides; where and when they meet leaves room for adjustments. Yet more is at stake in the meeting between development completion and deployment inception. Apart from time, adjustments may also bear on formats and contents; and given the specificity of development and deployment purposes, their adjustment may also be seen as the morphing of continuous software releases (project perspective) into discrete increments (product perspective).

Dynamic Programming

Dynamic programming (aka multistage programming) is a problem solving method that combines two principles:

  • Divide & conquer is a general purpose strategy that deals with the intrinsic complexity of problems by breaking them down into collections of simpler sub-problems to be solved separately depending on sequencing constraints.
  • Recursion deals with the lack of complete or accurate information upfront, solving the problem in stages rather than as one entity. Each stage is optimized separately on the basis of the current state reflecting decisions taken at previous stages, the optimality principle guaranteeing the optimality of the final outcome.

That incremental and iterative approach clearly befits the tenets of the agile development model.
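As a minimal illustration of those two principles (Python; the staged problem and its costs are hypothetical), each stage is optimized from its current state only, and memoization keeps shared sub-problems from being solved twice:

```python
from functools import lru_cache

# Hypothetical multistage problem: at each stage the available decisions and
# their costs depend only on the current state, so each stage can be optimized
# from that state alone (optimality principle), while memoization avoids
# recomputing shared sub-problems (divide & conquer).
COSTS = {  # stage -> state -> {next state: cost of that decision}
    0: {"start": {"a": 2, "b": 5}},
    1: {"a": {"c": 4, "d": 1}, "b": {"c": 1, "d": 3}},
    2: {"c": {"end": 3}, "d": {"end": 2}},
}

@lru_cache(maxsize=None)
def best(stage: int, state: str) -> int:
    """Minimum cost to completion from (stage, state)."""
    if stage not in COSTS:  # past the last stage: completion reached
        return 0
    return min(cost + best(stage + 1, nxt)
               for nxt, cost in COSTS[stage][state].items())

print(best(0, "start"))  # 5, via start -> a -> d -> end
```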

Twofold Planning

As noted above, and whatever the technique, agile processes entwine two phases, one for development, the other for deployment, each with its own planning:

  • The development planning, epitomized by backlog management, deals with the definition of work units and their sequencing.
  • The deployment planning deals with the merging of software releases into products and their incremental deployment.
Project work units are sequenced (backlog), Product increments are merged, and both are dynamically adjusted around their nexus.

That makes for multiple dynamics, first for updated backlogs, then for updated deployment targets, and finally for possible feedback through their nexus. That can only be achieved with dynamic programming.

Backlog: Multistage sequencing

Backlogs are used to manage work units targeting self-contained requirements items; they can be represented by graphs with nodes standing for work units and arcs weighted by constraints.

Basically, the problem is to optimize the development of a given set of items given users’ priorities, technical constraints, and resource availability. When all the information is available upfront, optimum solutions can be obtained with simple shortest-path algorithms. Yet, given the iterative and exploratory nature of agile processes, backlogs are meant to be updated as the project advances, taking advantage of improved knowledge:

  • Users may introduce new items, remove others, or change their priorities due to a better understanding of the requirements space.
  • Engineers may also introduce new items (e.g. for technical debt) or reconsider technical difficulties and dependencies due to a better understanding of the solutions space.
Dynamic reordering of backlogs: looking forward for the optimum path to completion.

Dynamic programming is introduced in order to support step-wise decisions optimizing the whole process:

  • Backlog states (t1 & t2) are defined by the remaining work units, rankings, and feasibility constraints.
  • Each stage redefines the optimum path to completion taking into account the current state and updated information. Recursive computation being based on the summary information etched in states, it ensues that all future decisions can be selected optimally without recourse to information regarding previously made decisions.

Given a set of feasible paths (as defined by technical dependencies and time), the aim at each stage is to select the optimum path for the remaining units based on current state. Optimization functions will typically consider users’ value, learning curve and associated risks, and resources availability and costs.

As illustrated below, nodes can represent grouped items, e.g. when several projects have to share resources or releases are to be regrouped.
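A minimal sketch of such stage-wise re-planning (Python; the backlog, its weights, and the discounting scheme are hypothetical): the state is the set of remaining work units with their constraints, and the optimum order is recomputed from that state whenever rankings or dependencies are updated:

```python
from functools import lru_cache

# Hypothetical backlog: work unit -> (user value, technical prerequisites).
BACKLOG = {
    "a": (3, ()),
    "b": (5, ("a",)),
    "c": (2, ()),
    "f": (8, ("c",)),
}

def optimum_path(remaining: frozenset) -> tuple:
    """Recompute the optimum completion order from the current state,
    i.e. the set of work units still to be done."""
    @lru_cache(maxsize=None)
    def best(units: frozenset) -> tuple:
        if not units:
            return (0.0, ())
        candidates = []
        for u in units:
            value, deps = BACKLOG[u]
            if all(d not in units for d in deps):  # prerequisites already done
                score, path = best(units - {u})
                # geometric discounting: value delivered earlier counts more
                candidates.append((value + 0.9 * score, (u,) + path))
        return max(candidates)
    return best(remaining)[1]

# Initial plan, then re-planning after an update: only the current state
# (remaining units, rankings, constraints) matters, not past decisions.
print(optimum_path(frozenset(BACKLOG)))          # ('c', 'f', 'a', 'b')
print(optimum_path(frozenset({"a", "b", "f"})))  # ('f', 'a', 'b')
```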

Deployment: Dynamic merging

Given a set of released software components, the aim of deployment planning is to decide which increment to add to the deployed product. Assuming that technical concerns have already been dealt with by releases, the objective at each stage is to select the items maximizing the ROI of the deployed product. It must be noted that, contrary to the development algorithm looking forward for the optimization, the deployment algorithm selects the optimum path by looking backward at the ROI of deployed products.

Dynamic reordering of deployments: looking backward for the optimum path to completion.
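That backward-looking selection can be sketched as a small dynamic program (Python; components, ROI figures, and the deployment budget are hypothetical), picking within the available capacity the released items that maximize the ROI of the next increment:

```python
from functools import lru_cache

# Hypothetical released components: (name, expected ROI, deployment effort).
RELEASED = [("invoicing", 7, 3), ("loyalty", 4, 2), ("reporting", 5, 4)]
BUDGET = 5  # capacity available for the next deployment increment

@lru_cache(maxsize=None)
def best(i: int, budget: int) -> tuple:
    """Best (ROI, selection) achievable with items i.. within the budget."""
    if i == len(RELEASED) or budget == 0:
        return (0, ())
    name, roi, effort = RELEASED[i]
    skip = best(i + 1, budget)
    if effort > budget:
        return skip
    take_roi, take_sel = best(i + 1, budget - effort)
    return max(skip, (take_roi + roi, (name,) + take_sel))

roi, increment = best(0, BUDGET)
print(increment, roi)  # ('invoicing', 'loyalty') 11
```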

But the backward impact of deployment optimization can go further and affect backlogs.


Shared ownership and continuous delivery are two main pillars of the agile development model, the former giving the development team full authority and responsibility, the latter ensuring that users have a firm hand on the helm. Yet, as already noted, development, deployment, and business are governed by different time-frames, which could induce some frictions, e.g. if business units were forced to synchronize product deployments with software releases. While severe disruptions can be avoided if releases and deployments are managed separately, development teams cannot be completely sheltered from changes in business or operational priorities. That is where the dynamic reassessment of optimum paths is to help: assuming a change in deployment planning (nq instead of op), the new priorities are fed back into development (aka backlog) rankings, and the optimum path is updated.


Change in deployment priorities (nq instead of op) can be fed back to backlog planning (f before a).

It must be noted that such feedback only affects ranks and must leave contents unchanged.

Conclusion: Business driven, Just-in-time delivery, & Lean Inventories

Dynamic programming appears as a primary factor with regard to three core tenets of the agile development model:

  • Business driven development doesn’t mean that developments are pushed by requirements but that they are pulled by deployment.
  • Just-in-time delivery can only be achieved with the help of a buffer between development and deployment. This buffer should not be confused with an inventory as it has nothing to do with product quantities.
  • On the contrary, this buffer, combined with dynamic programming, plays a critical role in the cutback of intermediate documents and models (aka development inventories).

Further Readings

External Links

Enterprise Systems & the OS Kernel Paradigm

September 1, 2015


Given the ubiquity of information and communication technologies on one hand, the falling apart of technical fences between systems, enterprises, and business environments on the other hand, applying the operating system (OS) paradigm to enterprise architectures seems a logical move.

Users and access to services (Queuing at a Post Office in French West Indies)

Borrowing the blueprint of computer operating systems, enterprise operating systems (EOS) would be organized around a kernel managing shared resources (people, hardware, and software) and providing services to business, engineering, or operational processes.

Gerrymandering & Layers

When IT was neatly fenced behind computer screens managers could keep a clear view of organization, roles, and responsibilities. But with physical hedges replaced by clouded walls, the risk is that IT may appear as the primary constituent of enterprise architecture. Given the lack of formal arguments against what may be a misguided understanding, enterprise architects have to rely on pragmatic answers. Yet, they could prop up their arguments by upending the very principles of IT operating systems and restore the right governance footprint.

To begin with, turfs must be reclaimed, and that can be done if the whole of assets and services are layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

Problems and solutions must be set along architecture layers

EA must separate and federate concerns along architecture layers

Then, reclaiming must also include governance, and for that purpose EOS are to rely on a comprehensive and consistent understanding of assets, people and mechanisms across layers:

  • Physical assets, including hardware.
  • Non physical assets, including software.
  • Agents (identified people with organizational responsibilities) and roles.
  • Events (changes in the state of objects, processes, or expectations) and activities.

Mimicking traditional OS, that could be achieved with a small and compact conceptual kernel of formal concepts bearing out the definitions of primitives and services for the whole of enterprise processes.

EOS’s Kernel: 12 concepts

A wealth of definitions may be the main barrier to enterprise architecture as a discipline because such profusion necessarily comes with overlaps, ambiguities, and inconsistencies. Hence the benefit of relying on a small set of concepts covering the whole of enterprise systems:

  • Six for individuals: actual (objects, events, processes) and symbolic (surrogate objects, activities, roles) elements.
  • One for actual (locations) or symbolic (packages) containers.
  • One for the partitioning of behaviors (branches) or surrogates (power types).
  • Four for actual (channels and synchronization) and symbolic (references and flows) connectors.

Governance calls for comprehensive and consistent semantics

Considering that nowadays business entities (enterprise), services (systems), and software components (technology) share the same distributed world, these concepts have to keep some semantic consistency across layers whatever their lexical avatars. To mention two critical examples, actors (aka roles) and events must be consistently understood by business and system analysts.

Those concepts are used to describe enterprise systems building blocks which can be combined with a small set of well known syntactic operators:

  • Two types of connectors depending on target: instances (associations) or types (inheritance).
  • Three types of connections: nondescript, aggregation, and composition.

Syntactic operators are meant to be applied independently of targets semantics

Again, Occam’s razor should be the rule: just like semantics are consistently defined across architecture layers, the same syntactic operators are to be uniformly applied to artifacts independently of their semantics.
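A minimal sketch (Python; an illustration of the principle, not a normative metamodel) of how that compact vocabulary could be pinned down, keeping the semantics of kernel concepts separate from the syntactic operators applied to them:

```python
from enum import Enum, auto

class Realm(Enum):
    ACTUAL = auto()
    SYMBOLIC = auto()

class Concept(Enum):
    # Six individuals
    OBJECT = auto(); EVENT = auto(); PROCESS = auto()            # actual
    SURROGATE_OBJECT = auto(); ACTIVITY = auto(); ROLE = auto()  # symbolic
    # One container (locations / packages) and one partition (branches / power types):
    # both have actual and symbolic avatars.
    CONTAINER = auto()
    PARTITION = auto()
    # Four connectors
    CHANNEL = auto(); SYNCHRONIZATION = auto()                   # actual
    REFERENCE = auto(); FLOW = auto()                            # symbolic

REALM = {
    Concept.OBJECT: Realm.ACTUAL, Concept.EVENT: Realm.ACTUAL,
    Concept.PROCESS: Realm.ACTUAL, Concept.CHANNEL: Realm.ACTUAL,
    Concept.SYNCHRONIZATION: Realm.ACTUAL,
    Concept.SURROGATE_OBJECT: Realm.SYMBOLIC, Concept.ACTIVITY: Realm.SYMBOLIC,
    Concept.ROLE: Realm.SYMBOLIC, Concept.REFERENCE: Realm.SYMBOLIC,
    Concept.FLOW: Realm.SYMBOLIC,
}

# Syntactic operators, applied uniformly whatever the semantics of the target.
class ConnectorTarget(Enum):
    INSTANCES = auto()   # associations
    TYPES = auto()       # inheritance

class ConnectionKind(Enum):
    NONDESCRIPT = auto()
    AGGREGATION = auto()
    COMPOSITION = auto()
```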

Kernel’s Functions

Continuing with the kernel analogy, based on a comprehensive and consistent description of resources, the traditional OS functions can be reinterpreted with regard to architecture capabilities implemented across layers:

  • What: memory of business objects and operations (enterprise), data base logical entities (systems), data base physical records (platforms).
  • Who: roles (enterprise), interfaces (systems), and devices (platforms).
  • When: business events (enterprise), logical events (systems), and transaction managers (platforms).
  • Where: sites (enterprise), logical processing units (systems), network architecture (platforms).
  • How: business processes (enterprise), applications (systems), and CPU (platforms).
Traceability of Capabilities across architecture layers

That fits with the raison d’être of a kernel which is to combine core functions in order to support the services called by processes.


Still milking the OS analogy, a primary goal of an enterprise kernel is to support a seamless integration of services:

  1. Business driven: the definition of services must be directly and unambiguously associated to business ends and means across enterprise layers.
  2. Traceability: they must ensure the transparency of the tie-ups between organization and processes on one hand, information systems on the other hand.
  3. Plasticity: they must facilitate the alignment of changes in business objectives, organization and supporting systems.

A reasoned way to achieve these objectives is to classify services with regard to the purpose of calling processes:

  • Business processes deal with the transactions between the enterprise and its environment.
  • Engineering processes deal with the development of enterprise resources independently of their use.
  • Operational processes deal with the management of enterprise resources when directly or indirectly used by business processes.
Enterprise Operating System: Layers & Services

That classification can then be crossed with architecture levels:

  • At enterprise level services are bound to assets to be used by business, engineering, or operational processes.
  • At systems level services are bound to functions supporting business, engineering, or operational processes.
  • At platform level services are bound to resources used by business, engineering, or operational processes.

As services will usually rely on different functions across layers, the complexity will be dealt with by kernel primitives and masked behind interfaces.

Services called by processes can combine different functions directly (basic lines) or across layers (dashed lines).

Finally, that organization of services along architecture layers may be aligned with governance levels: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources.

Further Reading

Agile Architectures: Versatility meets Plasticity

June 22, 2015


At enterprise level agility can be understood as a mix of versatility and plasticity, the former an attribute of function, the latter of form:

  • Versatility: enterprise ability to perform different activities in changing environments without having to change its architectures.
  • Plasticity: enterprise ability to change its architectures without affecting its performances.
Plasticity is for form, versatility for function

Agility: Forms & Performances (P. Pénicaud)

Combining versatility and plasticity requires a comprehensive and consistent view of assets (architectures) and modus operandi (processes) organized with regard to change. And that can be achieved with model based systems engineering (MBSE).

MBSE & Change

Agility is all about change, and if enterprise governance is not to be thrown aside, decision-making has to be supported by knowledgeable descriptions of enterprise objectives, assets, and organization.

If change management is to be the primary objective, targets must be classified along two main distinctions:

  • Actual (business context and organization) or symbolic (information systems).
  • Objects (business entities or system surrogates) or activities (business processes or logic).

Comprehensive and consistent descriptions of actual and symbolic assets (architectures) and modus operandi (processes) with regard to change management.

The two axes determine four settings supporting transparency and traceability:

  • Dependencies between operational and structural elements.
  • Dependencies between actual assets and processes and their symbolic representation as systems surrogates.

Versatility and plasticity will be obtained by managing changes and alignments between settings.

Changes & Alignments

Looking for versatility, changes in users’ requirements must be rapidly taken into account by applications (changes from actual to symbolic).

Looking for plasticity, changes in business objectives are meant to be supported by enterprise capabilities (changes from operational to structural).

The challenge is to ensure that both threads can be weaved together into business functions and realized by services (assuming a service oriented architecture).

With the benefits of MBSE, that could be carried out through a threefold alignment:

  • At users level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
  • At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
  • At enterprise level the objective is to ensure that the enterprise capabilities are congruent with its business objectives, i.e that they support its business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.

Versatility comes from users’ requirements, plasticity from architectures capabilities.

That would make agility a concrete endeavor across enterprise, from business users and applications to business processes and architectures capabilities.

Further Reading

