Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e the one with the best fit to business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (Map of Posts) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

Detour from Turing Game

February 20, 2015

Summary

Considering Alan Turing’s question, “Can machines think ?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine ?


Alan Turing at 4

What happens when people interact ?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.


Conveying intentions

Assuming a difference in the way this is carried out by people and machines, could that difference be observed at message level ?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g alarm calls directly and immediately bound to an imminent threat) can be carried out independently of knowledge representation (e.g information related to a danger not directly observable); in other words, linguistic capabilities for communication and symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who): can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): contents exchanged (object, information, request, …) are not contingent on participants' type.
  • Life-span of message (When): life-cycle (instant, limited, unlimited, …) is not contingent on participants' type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on participants' type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants' type.

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or of what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR's prerequisites is to improve communication between domain experts on one hand and generic knowledge managers on the other.

On that basis knowledge representation capabilities cannot be used to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, and given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.

Reference Readings

A. M. Turing, “Computing Machinery and Intelligence”

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”


AI & Embedded Insanity

February 6, 2015

Summary

Bill Gates recently expressed his concerns about AI’s threats, but shouldn’t we fear insanity, artificial or otherwise ?


Human vs Artificial Insanity: chicken or egg ? (Peter Sellers as Dr. Strangelove)

Some clues to answers may be found in the relationship between purposes, designs, and behaviors of intelligent devices.

Intelligent Devices

Intelligence is generally understood as the ability to figure out situations and solve problems, with its artificial avatar turning up when such ability is exercised by devices.

Devices being human artifacts, it’s safe to assume that their design can be fully accounted for, and their purposes wholly exhibited and assessed. As a corollary, debates about AI’s threats should distinguish between harmful purposes (a moral issue) on one hand, and faulty designs, flawed outcomes, and devious behaviors (all engineering issues) on the other hand. Whereas concerns for the former could arguably be left to philosophers, engineers should clearly take full responsibility for the latter.

Human, Artificial, & Insane Behaviors

Interestingly, the “human” adjective takes different meanings depending on its association to agents (human as opposed to artificial) or behaviors (human as opposed to barbaric). Hence the question: assuming that intelligent devices are supposed to mimic human behaviors, what would characterize devices’ “inhuman” behaviors ?


How to characterize devices’ inhuman behaviors ?

From an engineering perspective, i.e moral issues being set aside, a tentative answer would point to some flawed reasoning, commonly described as insanity.

Purposes, Reason, Outcomes & Behaviors

As intelligence is usually associated with reason, flaws in the design of reasoning capabilities are where to look for the primary factor of hazardous device behaviors.

To begin with, the designs of intelligent devices neatly mirror human cognitive activity by combining both symbolic (processing of symbolic representations) and non symbolic (processing of neuronal connections) capabilities. How those capabilities are put to use therefore characterizes the mapping of purposes to behaviors:

  • Designs relying on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as the knowledge governing behaviors.
  • Designs based on neural networks are characterized by implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback.
Symbolic (north) vs non symbolic (south) intelligence

That distinction is to guide the analysis of potential threats:

  • Designs based on symbolic representations can support both the transparency of ends and the traceability of means. Moreover, such designs allow for the definition of broader purposes, actual or social.
  • Neural networks make the relationships between means and ends more opaque because their learning kernels operate directly on data, with the supporting knowledge implicitly embodied as weighted connections. They make for more concrete and focused purposes, for which symbolic transparency and traceability have less of a bearing.

Risks, Knowledge,  & Decision Making

As noted above, an engineering perspective should focus on the risks of flawed designs begetting discrepancies between purposes and outcomes. Schematically, two types of outcome are to be considered: symbolic ones are meant to feed human decision-making, and behavioral ones directly govern device behaviors. For AI systems combining both symbolic and non symbolic capabilities, risks can arise from:

  • Unsupervised decision making: device behaviors directly governed by implicit knowledge (a).
  • Embedded decision making: flawed decisions based on symbolic knowledge built from implicit knowledge (b).
  • Distributed decision making: muddled decisions based on symbolic knowledge built by combining different domains of discourse (c).
Unsupervised (a), embedded (b), and distributed (c) decision making.

Whereas risks bred by unsupervised decision making can be handled with conventional engineering solutions, that’s not the case for embedded or distributed decision making supported by intelligent devices. And both risks may be increased respectively by the so-called internet of things and semantic web.

Internet of Things, Semantic Web, & Embedded Insanity

On one hand the so-called “internet second revolution” can be summarized as the end of privileged netizenship: while the classic internet limited its residency to computer systems duly identified by regulatory bodies, the new one makes room for every kind of device. As a consequence, many intelligent devices (e.g cell phones) have made their coming out as fully fledged systems.

On the other hand the so-called “semantic web” can be seen as the symbolic counterpart of the internet of things, providing a comprehensive and consistent whole of meanings for targeted factual realities. Yet, given that the symbolic world is not flat but built on piled mazes of meanings, their charting is bound to be contingent on projections with dissonant semantics. Moreover, as meanings are not supposed to be set in stone, semantic configurations have to be continuously adjusted.

That double trend clearly increases the risks of flawed outcomes and erratic behaviors:

  • Holed up sources of implicit knowledge are bound to increase the hazards of unsupervised behaviors or propagate unreliable information.
  • Misalignment of semantics domains may blur the purposes of intelligent apparatus and introduce biases in knowledge processing.

But such threats are less intrinsic to AI than caused by the way it is used: insanity is more likely to spring from the ill-designed integration of intelligent and reasonable systems into the social fabric than from insane ones.


Models Truth and Disproof

January 21, 2015

“If you cannot find the truth right where you are, where else do you expect to find it?”

Dōgen Zenji

Summary

Software engineering models can be grouped into two categories depending on their target: analysis models represent business contexts and concerns, design ones represent system components. Whatever the terminologies, all models are to be verified with regard to their intrinsic qualities, and validated with regard to their domain of discourse, respectively business objects and activities (analysis models), or software artifacts (design models).


Internal & External Consistency (Chris Engman)

Checking for internal consistency is arguably straightforward as proofs can be built with regard to the syntax and semantics of modeling (or programming) languages. Things are more complicated for external consistency because hypothetical proofs would have to rely on what is known of the business domains, whose knowledge is by nature partial and specific, if not hypothetical. Nonetheless, even if general proofs are out of reach, the truth of models can still be disproved by counter examples found among the instances under consideration.

Domains of Discourse: Business vs Engineering

With regard to systems engineering, domains of discourse cover artifacts which are, by “construct”, fully defined. Conversely, with regard to business context and objectives, domains of discourse have to deal with instances whose definitions are a “work in progress”.


That can be illustrated by analysis models, which are meant to translate requirements into system functionalities, as opposed to design ones, which specify the corresponding software artifacts. Since software artifacts are supposed to be built from designs, checking the consistency of the mapping is arguably a straightforward undertaking. That’s not the case when the consistency of analysis models has to be checked against objects and activities identified by business’ domains of discourse, possibly with partial, ambiguous, or conflicting descriptions. For that purpose some logic may help.

Flat Models & Logic

Business requirements describe objects, events, and activities, and the purpose of modeling is to identify those instances and regroup them into subsets built according to their features and relationships.

Building descriptions for targeted instances of business objects & activities

How to organize instances of business objects & activities into subsets

As far as models make no use of abstractions (“flat” models), instances can be organized using basic set operators and epistemic (i.e relating to the degree of validation) constraints with regard to existence (mandatory/discretionary: m/d), uniqueness (exclusive/overlapping: x/o), and change (final/mutable: f/m):


Notation for epistemic constraints

Using the EU-Rent Car example:

  • Rental cars are exclusively and definitively partitioned according to models (mxf).
  • Models are exclusively partitioned according to rental group (mxm), and exclusively and definitively according to body style (mxf).
  • Rental cars are partitioned by derivation (/) according to group and body style.

Flat model using basic set operators for exclusive (cross) and final (grey) partitions (2)

Such models are deemed consistent if every instance is taken into account without contradiction.

Flat Models External Consistency

Assuming that the models’ backbone can be expressed logically, their consistency can be formally verified using a logical language, e.g Prolog.

To begin with, candidate subsets are obtained by combing requirements for core modeling artifacts expressed as predicates (21 for descriptions of actual objects, 121 for descriptions of actual locations, 20 for descriptions of symbolic ones, 22 for descriptions of symbolic partitions), e.g:

  • type(20, manufacturer).
  • type(21, rentalCar).
  • type(22,  carModel).
  • type(22, rentalGroup).
  • type(22,  bodyStyle).
  • type(121, depot).

Next come partitions and functional connectors (220 for symbolic reference, 222 for partitions, 221 for actual connection), e.g:

  • connect(222, rentalCar, carModel, mxf).
  • connect(222, carModel, rentalGroup, mxm).
  • connect(222, carModel, bodyStyle, mxf).
  • connect(220, manufacturer_, carModel, manufacturer, mof).
  • connect(121, location, rentalCar, depot, mxt).

Finally, features and structures (320 for properties, 340 for operations), e.g:

  • feature(340, move_to, depot).
  • feature(320, address).
  • feature(320, location).
  • member(manufacturer,address,mom).
  • member(rentalCar,location,mxm).
  • member(rentalCar,move_to,mxm).

Those candidate descriptions are to be assessed and improved by applying them to sets of identified occurrences taken from requirements, the objective being to map each instance to a description: instance(name, term()), e.g:

  • instance(sedan,carModel(f1(F1),f2(F2))).
  • instance(coupe,carModel(f1(F1),f2(F2))).
  • instance(ford, manufacturer(f6(F6),f7(F7))).
  • instance(focus, rentalCar(f6(F6),f7(F7))).
  • instance_(manufacturer_,focus,ford).

Using a logical interpreter, validation can then be carried out iteratively by looking for counter examples that could disprove the truth of the representations (a sketch of such queries follows the list):

  • All instances are taken into account: there is no instance N without instance(N,Structure).
  • Logical consistency: there is no instance N with conflicting partitioning (native and derived).
  • Completeness: there is no instance type(X,N,T(f1,f2,..)) with undefined feature fi.
  • Functional consistency: there is no instance of relation R (native and derived) without a consistent type relation(X, R, Origin, Destination, Epistemic).
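
As an illustration, here is a minimal sketch of how such counter-example queries could look in Prolog, assuming the facts listed above have been loaded; the occurrence/1 facts and the rule names are illustrative assumptions of mine, not part of the original notation:

    % Occurrences identified in requirements (hypothetical sample).
    occurrence(sedan).   occurrence(coupe).
    occurrence(ford).    occurrence(focus).

    % Coverage check: an occurrence lacking a description is a counter-example.
    uncovered(N) :- occurrence(N), \+ instance(N, _).

    % Functional consistency check: a connector instance whose connector type
    % is not declared is a counter-example.
    untyped_link(C, X, Y) :- instance_(C, X, Y), \+ connect(_, C, _, _, _).

    % The model is disproved as soon as one counter-example can be found.
    disproved :- uncovered(_).
    disproved :- untyped_link(_, _, _).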

It must be noted that the approach is not limited to objects and is meant to encompass the whole scope of requirements: actual objects, symbolic representations, business logic, and processes execution.

Multilevel Models: From Partitions to Sub-types

Flat models fall short when specific features are to be added to the elements of partition subsets, and in that case sub-types must be introduced. Yet, and contrary to partitions, sub-types come with abstractions: set within a flat model (i.e without sub-types), Car model fully describes all instances, but when sub-types sedan, coupe, and convertible are introduced, the Car model base type is nothing more than a partial (hence abstract) description.


From partition to sub-types: subsets descriptions are supplemented with specific features.

Whereas that difference may seem academic, it has direct and practical consequences when validation is considered because consistency must then be checked for every level, concrete or abstract.

LSP & External Consistency

As it happens, the corresponding problem has been tackled by Barbara Liskov for software design: the so-called Liskov substitution principle (LSP) states that if S is a sub-type of T, then instances of T may be replaced with instances of S without altering any of the desirable properties of the program.

Translated to analysis models, the principle would state that, given a set of instances, a model must keep its consistency independently of the level of abstraction considered. As a corollary, and assuming a model abides by the substitution principle, it would be possible to generalize the external consistency of a detailed level to the whole model whatever the level of abstraction. Hence the importance of compliance with the substitution principle when introducing sub-types in analysis models.


All instances must be accounted for whatever the level of abstraction

Applying the Substitution Principle to Analysis Models

Abstraction is arguably the essence of requirements modeling as its purpose is to bring specific and changing concerns under a common, consistent, and lasting conceptual roof. Yet, the two associated operations of specialization and generalization often receive very little scrutiny despite the fact that most of the related pitfalls can be avoided if the introduction of sub-types (i.e levels of abstraction) is explicitly justified by partitions. And that can be achieved by the substitution principle.

First of all, and as far as requirements analysis is concerned, sub-types should only be introduced for specific features, properties or operations. Then, epistemic constraints can be used to tally the number of specialized instances with the number of generalized ones, and check for the possibility of functional discrepancies:

  • Discretionary (or conditional or non exhaustive) partitions (d__) may bring about more instances for the base type (nb >= ∑nbi).
  • Overlapping (or duplicate or non isolated) partitions (_o_) may bring about less instances for the base type (nb <= ∑nbi).
  • Assuming specific features, mutable (or reversible) partitions (__m) mean that features may differ between levels; otherwise (same features) sub-types are not necessary.

Epistemic constraints on partitions can be used to enforce the LSP

Using a Prolog-like language, the only modification will concern the syntax of predicates, with structures replaced by lists of features:

  • type(20, manufacturer,[f6,f7]).
  • type(21, rentalCar,[f5]).
  • type(22,  carModel,[f1,f2]).
  • type(22, rentalGroup,[f9]).
  • type(22,  bodyStyle,[f8]).
    • type(20, bodyStyle:sedan, [f11,f12]).
    • type(20, bodyStyle:coupe, [f13]).
    • type(20, bodyStyle:convertible, [f14]).
  • type(121, depot,[f10]).

The logical interpreter could then be used to map the sub-types to partitions and check for substitution.
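
As an illustration of that check, here is a minimal sketch in the same Prolog-like vein; the specialization/2 facts linking instances to sub-types, and the rule names, are assumptions of mine rather than part of the original predicates:

    :- dynamic(specialization/2).
    % Hypothetical mapping of instances to the sub-type describing them,
    % e.g specialization(someCar, bodyStyle:sedan).

    % Counter-example for a mandatory (m__) partition: an instance of the
    % base type left uncovered by any sub-type.
    not_specialized(I, Base) :-
        instance(I, Description),
        Description =.. [Base|_],
        \+ specialization(I, _).

    % Counter-example for an exclusive (_x_) partition: an instance claimed
    % by two different sub-types.
    doubly_specialized(I) :-
        specialization(I, S1),
        specialization(I, S2),
        S1 \= S2.

    % Substitution can be generalized only if no counter-example is found.
    substitutable(Base) :-
        \+ not_specialized(_, Base),
        \+ doubly_specialized(_).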


Use Cases Shouldn’t Know About Classes

January 5, 2015

Summary

Use cases are meant to describe how users interact with systems, while classes are meant to describe software components, including those executing use cases. It ensues that classes are introduced with the realization of use cases but are not supposed to appear as such in their definition.


Users are not supposed to know about surrogates

The Case for Use Cases

Use cases (UCs) are the brainchild of Ivar Jacobson and are often considered the main innovation introduced by UML. Their success, which extends well beyond UML’s footprint, can be explained by their focus and simplicity:

  • Focus: UCs are meant to describe what happens between users and systems. As such they are neatly bounded with regard to their purpose (UCs are the detailed parts of business processes supported by systems) and realization (UCs are implemented by software applications).
  • Simplicity: while UCs may also include formal (e.g pre- and post-conditions) and graphical (e.g activity diagrams) specifications, they can be fully defined and neatly circumscribed using stick actors (for the roles played by users or any other system) and ellipses (for system behaviors).

Use Cases & UML diagrams

As often happens with successful innovations, use cases have been widely interpreted and extended; nonetheless, the original concepts introduced by Ivar Jacobson remain basically unaltered.

The Point of Use Cases

Whereas focus and simplicity are clearly helpful, the primary success factor is that UCs have a point, namely they provide a conceptual bridge between business and system perspectives. That appears clearly when UCs are compared to main alternatives like users’ stories or functional requirements:

  • Users’ stories are set from business perspective and lack explicit constructs for the parts supported by systems. As a consequence they may flounder when trying to identify and describe business functions meant to be shared across business processes.
  • Conversely, functional requirements are set from system perspective and have no built-in constructs linking business contexts and concerns to their system counterparts. As a consequence they may fall short if business requirements cannot be set upfront or are meant to change with business opportunities.

Along that understanding, nothing should be done to UCs that could compromise their mediating role between business value and system capabilities, the former driven by changes in business environment and enterprise ability to seize opportunities, the latter by the continuity of operations and the effective use of technical or informational assets.

Business Objects vs Software Components

Users’ requirements are driven by concrete, partial, and specific business expectations, and it’s up to architects to weld those diverse and changing views into the consistent and stable functional abstractions that will be implemented by software components.


Users’ requirements are driven by concrete, partial, changing and specific concerns, but supported by stable and fully designed software abstractions.

Given that double discrepancy of objectives and time-scales, business analysts should not try to align their requirements with software designs, and system analysts should not try to second-guess their business counterparts with regard to future business objects. As a consequence, respective outcomes would be best achieved through a clear separation of concerns:

  • Use cases deal with the business value of applications, mapping views on business objects to aspects of classes.
  • Functional architectures deal with assets, in particular the continuous and consistent representation of business objects by software components as described by classes.

How to get best value from assets

As it happens, that double classification with regard to scope and purpose should also be used to choose a development model: agile when scope and purpose can be united, phased approach otherwise.


A Scraps-Free New Year

December 25, 2014

As new years are likely to come with the legacies of past ones, this would be a good time to scrub the scraps.


How to scrub last year’s scraps (Subhod Gupta)

Legacies as Forced Reuses

As far as enterprise architectures are concerned, sanguine resolutions or fanciful wishes notwithstanding, new years seldom open the door to brand new perspectives. More often than not, they bring new constraints and further curb the possible courses of action by forcing the reuse of existing assets and past solutions. That will call for a review and assessment of the irrelevancies or redundancies in processes and data structures lest they clog the organization and systems, raise entropy, and degrade governance capability.

Architectures as Chosen Reuses

Broadly defined, architectures combine assets (physical or otherwise) and mechanisms (including organizational ones) supporting activities which, by nature, must be adaptable to changing contexts and objectives. As such, the primary purpose of architectures is to provide some continuity to business processes, across locations and between business units on one hand, along time and business cycles on the other hand. And that is mainly to be achieved through reuse of assets and mechanisms.

Balancing Changes & Reuse

It may be argued that the main challenge of enterprise architects is to maintain the right balance between continuity and agility, the former to maintain corporate identity and operational effectiveness, the latter to exploit opportunities and keep an edge ahead of competitors. That may turn out to be an oxymoron if architects continuously try to discard or change what was designed to be kept and reused. Yet that pitfall can be avoided if planting architectures and pruning offshoots are carried out independently.

Seasons to Plant & to Prune

Enterprise life can be set along three basic time-scales:

  • Operational: for immediate assessment and decision-making based on full information.
  • Tactical: for periodic assessment and decision-making based on partially reliable information. Periodicity is to be governed by production cycles.
  • Strategic: for planned assessment and decision-making based on unknown or unavailable information. Time-frames are to be governed by business models and risks assessment.

Whereas architecture life-cycles are by nature strategic and meant to span an indefinite, but significant, number of production cycles, trimming redundancies can be carried out periodically provided it doesn’t impact processes execution. So why not do the housecleaning at the beginning of the new year?


How to Mind a Tree Story

December 8, 2014

Summary

Depending on devotees or dissenters, the Agile development model is all too often presented as a dead end or as the end of the story. Some of that unfortunate situation can be explained, and hopefully pacified, by comparing users’ stories to plants, with their roots, trunks, and branches. Assuming that agility calls for sound footings and good springboards, it may be argued that many problems arise with stories barking up the wrong tree (application level) or getting lost in the woods (architecture level).

How to Mind/Mend a True/Tree Story

Application level: Trees, Bushes and Hedges

As Aristotle first stated, good stories have to follow the three unities: one course of action, located in a single space, running along a continuous time.
That rule is clearly satisfied by stories that can be developed like plants growing from clearly identified roots.
That rule is clearly satisfied  by stories that can be developed like plants growing from clearly identified roots.

Yet, stories like bushes may grow too many offshoots to be accounted for by a single action narrative; in that case it may be possible to single out a primary trunk and a set of forking branches along which different scenarios could be developed.

More serious difficulties may appear with thickets mixing offshoots from different bushes sharing the same space. That situation will first require some ground work in order to single out individual roots, and then use them to extricate each bush separately. When, like offshoots that actually mingle, story-lines cross and share actions, the description of such actions (aka features) is to be factored out and separated from the contexts of their enactment in the different story-lines.

Finally, like bushes in hedges, stories may chronicle repeated activities serving some collective purpose. That configuration is easy to recognize and can be dealt with effectively by introducing a stereotyped story feature for collections and loops management.

Architecture level: Groves, Woods and Plantations

Contrary to hedges which are built on the similarity of their constituents, groves are based on their functional differences, and that can also be seen as a critical distinction between containers and architectures.

In agile parlance, that is best compared to the difference between stories and epics, the former telling what happens between users and applications, the latter taking a bird’s view of the relationships between business processes and systems.

In most cases the question will arise for sizable stories deemed too large for development purposes. When dealing with that situation the first step should be to look for thickets and bushes, respectively to be set apart as individual bushes or refined as scenarios. When still confronted with multiple roots, the question would be to decide between hedges and groves, that is between repeated activities and collaboration. And that decision would be critical because collaborations call for a different kind of story (aka epics or themes) set at a higher level, namely architecture.

Scaling Ups and Downs

Assuming the three-unities rule cannot be met, two alternative approaches are possible, depending on whether the story has to be broken down or upgraded to an epic, and the way the rule is broken can be used to make the decision:

  1. When the course of actions, once started, is to be contingent on subsequent business (aka external) events the story should be upgraded to an epic, as it will often refer to a part or whole of a business process.
  2. Otherwise: when activities are set along different periods of time (i.e contingent on time-events) the story can be broken down depending on size, functional architecture, or development constraints.
  3. Otherwise: when activities are distributed across locations it may be necessary to factor out architecture-dependent features dealing with shared address spaces and synchronization mechanisms.

Applying those guidelines to stories will put the whole development process on rails and help to align requirements with their architectural footprint: business logic, system functionalities, or platform technologies.


How to choose Frameworks & Methods

November 16, 2014

Summary

When selecting a method or framework for systems engineering, four basic principles should be followed: continuity, duality, parsimony, and artifacts precedence.

A Framed Perspective

Picking a Framework

Continuity

Modus operandi are built on people’s understandings, practices, and skills that cannot be changed as easily as tools. In other words “big bang” solutions should be avoided when considering changes in systems governance and software engineering processes.

Duality

While any solution will necessarily entail collaboration between business and systems analysts, they belong to realms with inbuilt differences of concerns and culture. Assuming that the divide can be sewn up by canny procedures is tantamount to ignoring the very purpose of the framework.

Parsimony

According to Occam’s Razor, when faced with competing options, the one with the fewest assumptions should be selected. That principle is especially critical when dealing with organizational options that cannot be easily reversed or even adjusted. Hence, when alternative engineering processes are considered, a simple and robust solution should be selected as a default option, and extensions added for specific projects if and when needed.

Artifacts Precedence

Assuming that enterprise architecture entails the continuity, permanence, and reuse of shared descriptions and understandings, symbolic artifacts can be seen as the cornerstone of the whole undertaking. As a corollary, and whatever the framework or methodology, the core of managed artifacts should be clearly defined before considering the processes that will use them.


Capabilities vs Processes

October 21, 2014

Summary

Enterprise architecture being a nascent discipline, its boundaries and categories of concerns are still in the making. Yet, as blurs around pivotal concepts are bound to jeopardize further advances, clarification is called for regarding the concept of “capability”, whose meaning seems to dither somewhere between architecture, function, and process.


Jumping capability of a four-legged structure (Edgard de Souza)

Hence the benefits of applying definition guidelines to characterize capability with regard to context (architectures) and purpose (alignment between architectures and processes).

Context: Capability & Architecture

Assuming that a capability describes what can be done with a resource, applying the term to architectures would implicitly make them a mix of assets and mechanisms meant to support processes. As a corollary, such understanding would entail a clear distinction between architectures on one hand and supported processes on the other hand; that would, by the way, make an oxymoron of the expression “process architecture”.

On that basis, capabilities could initially be defined independently of business specificities, yet necessarily with regard to architecture context:

  • Business capabilities: what can be achieved given assets (technical, financial, human), organization, and information structures.
  • Systems capabilities: what kind of processes can be supported by systems functionalities.
  • Platforms capabilities: what kind of functionalities can be implemented.
Requirements should be mapped to enterprise architecture capabilities

Architectures Capabilities

Taking a leaf from the Zachman framework, five core capabilities can be identified cutting across those architecture contexts:

  • Who: authentication and authorization for agents (human or otherwise) and roles dealing with the enterprise, using system functionalities, or connecting through physical entry points.
  • What: structure and semantics of business objects, symbolic representations, and physical records.
  • How: organization and versatility of business rules.
  • Where: physical location of organizational units, processing units, and physical entry points.
  • When: synchronization of process execution with regard to external events.

Being set with regard to architecture levels, those capabilities are inherently holistic and can only pertain to the enterprise as a whole, e.g for benchmarking. Yet that is not enough if the aim is to assess architectures capabilities with regard to supported processes.

Purpose: Capability vs Process

Given that capabilities describe architectural features, they can be defined independently of processes. Pushing the reasoning to its limit, one could, as illustrated by the table above, figure a capability without even the possibility of a process. Nonetheless, as the purpose of capabilities is to align supporting architectures and supported processes, processes must indeed be introduced, and the relationship addressed and assessed.

First of all, it’s important to note that trying to establish a direct mapping between capabilities and processes will be self-defeating as it would fly in the face of architecture understood as a shared construct of assets and mechanisms. Rather, the mapping of processes to architectures is best understood with regard to architecture level: traceable between requirements and applications, designed at system level, holistic at enterprise level.


Alignment with processes is mediated by architecture complexity.

Assuming a service oriented architecture, capabilities would be used to align enterprise and system architectures with their process counterparts:

  • Holistic capabilities will be aligned with business objectives set at enterprise level.
  • Services will be aligned with business functions and designed with regard to holistic capabilities.

Services can be designed with regard to holistic capabilities

Yet, even without a service oriented architecture, that approach could still be used to define functional architecture with regard to holistic capabilities.


Alignment: from Empathy to Abstraction

October 4, 2014

Summary

Empathy is commonly defined as the ability to directly share another person’s state of mind: feelings, emotions, understandings, etc. Such concrete aptitude would clearly help business analysts trying to capture users’ requirements; and from a broader perspective it could even contribute to the enterprise’s capability to foretell trends from actual changes in the business environment.


Perceptions and Abstractions (Picasso)

Analysis goes in the opposite direction as it extracts abstract descriptions from concrete requirements, singling out a subset of features to be shared while foregoing the rest. The same process of abstraction is carried out for enterprise business and organisation on one hand, and for systems and software architectures on the other.

That dual perspective can be used to define alignment with regard to the level under consideration: users, systems, or enterprise.

Requirements & Architectures

Requirements capture can be seen as a transition from spoken to written language, its objective being to write down what users tell about what they are doing or what they want to do. For that purpose analysts are presented with two basic policies: they can anchor requirements around already known business objects or processes, or they can stick to users’ stories, identify new structuring entities, and organize requirements along them. In any case, and except for standalone applications, the engineering process is to be carried out along two paths:

  • One concrete for the development of applications, the objective being to meet users’ requirements with regard to business logic and quality of service.
  • The other abstract for requirements analysis, the objective being to identify new business functions and features and consolidate them with those already supporting current business processes.

Those paths are set in orthogonal dimensions as concrete paths connect users’ activities to applications, and abstractions can only be defined between requirements levels.

Concrete (brown) and Abstract (blue) paths of requirements engineering

As business analysts stand at the crossroad, they have to combine empathy when listening to users’ concerns and expectations, and abstraction when mapping users’ requirements to systems functionalities and enterprise business processes.

Architectures & Alignments

As it happens, the same reasoning can be extended to the whole of engineering process, with analysis carried out to navigate between abstraction levels of architectures and requirements, and design used for the realization of each requirements level into its corresponding architecture level:

  • Users’ stories (or more precisely the corresponding use cases) are realized by applications.
  • Business functions and features are realized by services (assuming a service oriented architecture), which are meant to be an abstraction of applications.
  • Business processes are realized by enterprise capabilities, which can be seen as an abstraction of services.
How requirements are realized by design at each architecture level

That matrix can be used to define three types of alignment:

  • At users level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
  • At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
  • At enterprise level the objective is to ensure that the enterprise capabilities are congruent with its business objectives, i.e that they support its business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.

That makes alignment a concrete endeavor whatever the level of abstraction of its targets, i.e not only for users and applications, but also for functions and capabilities.


Alignment for Dummies

September 15, 2014

Summary

The emergence of Enterprise Architecture as a discipline of its own has put the spotlight on the necessary distinction between actual (aka business) and software (aka system) realms. Yet, despite a profusion of definitions for layers, tiers, levels, views, and other modeling perspectives, what should be a constitutive premise of system engineering remains largely ignored, namely: business and systems concerns are worlds apart and bridging the gap is the main challenge of architects and analysts, whatever their preserve.


Alignment with Dummies (J. Baldessari)

The consequences of that neglect appear clearly when enterprise architects consider the alignment of systems architectures and capabilities on one hand, with enterprise organization and business processes on the other hand. Looking into the grey zone in between, some approaches will line up models according to their structure, assuming the same semantics on both sides of the divide; others will climb up the abstraction ladder until everything looks alike. Not surprisingly, with the core interrogation (i.e “what is to be aligned ?”) removed from the equation, models will be turned into dummies enabling alignment to be carried out by simple pattern matching.

Models & Views

The abundance of definitions for layers, tiers or levels often masks two different understandings of models:

  • When models are understood as symbolic descriptions of sets of instances, each layer targets a different context with a different concern. That’s the basis of the Model Driven Architecture (MDA) and its distinction between Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs).
  • When models are understood as symbolic descriptions built from different perspectives, all layers target the same context, each with a different concern. Along that understanding each view is associated to a specific aspect or level of abstraction: processes view, functional view, conceptual view, technical view, etc.

As it happens, many alignment schemes use, implicitly or explicitly, the second understanding without clarifying the underlying assumptions regarding the backbone of artifacts. That neglect is unfortunate because, to be of any significance, views will have to be aligned with regard to those artifacts.

What is to be aligned

Whatever the labels and understandings, alignment is meant to deal with two main problems: how business processes are supported by systems functionalities, and how those functionalities are to be implemented. Given that the latter can be fully dealt with at system level, the focus can be put on the alignment of business processes and functional architectures.

A naive solution could be to assume services on both the processes and systems sides. Yet, the apparent symmetry covers a tautology: while aiming for service oriented architectures on the systems side would be legitimate, if not necessarily realistic, taking for granted that business processes also tally with services would presume some prior alignment, in other words that the problem has already been solved.

The pragmatic and logically correct approach is therefore to map business processes to system functionalities using whatever option is available, models (CIMs vs PIMs), or views (processes vs functions). And that is where the distinction between business and software semantics is critical: assuming the divide can be overlooked, some “shallow” alignment could be carried out directly provided the models can be translated into some generic language; but if the divide is acknowledged a “deep” alignment will have to be supported by a semantics bridge built across.

Shallow Alignment

Just like models are meant to describe sets of instances, meta-models are meant to describe instances of models independently of their respective semantics. Assuming a semantic continuity between business and systems models, meta-models like OMG’s KDM (Knowledge Discovery Meta-model) appear to provide a very practical solution to the alignment problem.

From a practical point of view, one may assume that no model of functional architecture is available because otherwise it would be aligned “by design” and there would be no problem. So something has to be “extracted” from existing software components:

  1. Software (aka design) models are translated into functional architectures.
  2. Models of business processes are made compatible with the generic language used for system models.
  3. Associations are made based on patterns identified on each side.

While the contents of the first and third steps are well defined and understood, that’s not the case for the second step, which takes for granted the availability of some agreed-upon modeling semantics to be applied to both functional architecture and business processes. Unfortunately that assumption is both factually and logically inconsistent:

  • Factually inconsistent: it is denied by the plethora of candidates vying for the role, often with partial, overlapping, ambiguous, or conflicting semantics.
  • Logically inconsistent: it simply dodges the question (what’s the meaning of alignment between business processes and supporting systems) either by lumping together the semantics of the respective contexts and concerns, or by climbing up the ladder of abstraction until all semantic discrepancies are smoothed out.

Alignments built on that basis are necessarily shallow as they deal with artifacts regardless of their contents, like dummies in test plans. As a matter of fact the outcome will add nothing to traceability, which may be enough for trivial or standalone processes and applications, but will be meaningless when applied at architecture level.

Deep Alignment

Compared to the shallow one, deep alignment, instead of assuming a wide but shallow commonwealth, tries to identify the minimal set of architectural concepts needed to describe alignment’s stake. Moreover, and contrary to the meta-modelling approach, the objective is not to find some higher level of abstraction encompassing the whole of models, but more reasonably to isolate the core of architecture concepts and constructs with shared and unambiguous meanings to be used by both business and system analysts.

That approach can be directly set along the MDA framework:

Languages: general purpose (blue), process or domain specific (green), or design.

Deep alignment distinguishes what is at stake at architecture level (blue) from the specifics of process or domain (green) and design (brown).

  • Context descriptions (UML, DSL, BPM, etc) are not meant to distinguish between architectural constructs and specific ones.
  • Computation independent models (CIMs) describe business objects and processes combining core architectural constructs (using a generic language like UML), with specific business ones. The former can be mapped to functional architecture, the latter (e.g rules) directly transformed into design artifacts.
  • Platform independent models (PIMs) describe functional architectures using core constructs and framework stereotypes, possibly enriched with specific artifacts managed separately.
  • Platform specific models (PSMs) can be obtained through transformation from PIMs, generated using specific languages, or refactored from legacy code.

Alignment can thus focus on enterprise and systems architectural stakes, leaving specific concerns to be dealt with separately, making the best of existing languages.

Alignment & Traceability

As mentioned above, comparing alignment with traceability may help to better understand its meaning and purpose.

  • Traceability is meant to deal with links between development artifacts from requirements to software components. Its main objective is to manage changes in software architecture and support decision-making with regard to maintenance and evolution.
  • Alignment is meant to deal with enterprise objectives and systems capabilities. Its main objective is to manage changes in enterprise architecture and support decision-making with regard to organization and systems architecture.


As a concluding remark, reducing alignment to traceability may counteract its very purpose and make it pointless as a tool for enterprise governance.


