Archive for the ‘Knowledge Management’ Category

Events & Decision-making

September 9, 2014

Objective

Between Internet-of-Things and ubiquitous social networks, enterprises’ environments are turning into unified open spaces, transforming the divide between operational and decision-making systems into a pitfall for corporate governance. That jeopardy can be better understood when one considers how the processing of events affects decision-making.

Making sense of events (J.W. Waterhouse)

Events & Information Processing

Enterprises’ success critically depends on their ability to track, understand, and exploit changes in their environment; hence the importance of a fast, accurate, and purpose-driven reading of events.

That is to be achieved by picking the relevant facts to be tracked, capturing the associated data, processing the data into meaningful information, and finally putting that information into use as knowledge.

From Facts to Knowledge and Back

Those tasks have to be carried out iteratively, dealing with both external and internal events:

  • External events are triggered by changes in the state of actual objects, activities, and expectations.
  • Internal events are triggered by the ensuing changes in the symbolic representations of objects and processes as managed by systems.

With events set at the root of the decision-making process, they will also define the time frames.

Events & Decisions Time Frames

As a working hypothesis, decision-making can be defined as real-time knowledge management:

  • To begin with, a real-time scale is created by new facts (t1) registered through the capture of events and associated data (t2).
  • A symbolic intermezzo is then introduced during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4);
  • The real-time scale completes with decision enactment and corresponding change in facts (t5).

Time scale of decision making (real time events are in red, symbolic ones in blue)

The next step is to bring together events and knowledge.

Events & Changes in Knowns & Unknowns

As Donald Rumsfeld once suggested, decision-making is all about the distinction between things we know that we know, things that we know we don’t know, and things we don’t know we don’t know. And that classification can be mapped to the nature of events and the processing of associated data:

  • Known knowns (KK) are traced through changes in already defined features of identified objects, activities or expectations. Corresponding external events are expected and the associated data can be immediately translated into information.
  • Known unknowns (KU) are traced through changes in still undefined features of identified objects, activities or expectations. Corresponding external events are unexpected and the associated data cannot be directly translated into information.
  • Unknown unknowns (UU) are traced through changes in still undefined objects, activities or expectations. Since the corresponding symbolic representations are still to be defined, both external and internal events are unexpected.

Knowledge lifespan is governed by external events
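
For illustration, the taxonomy boils down to two questions about an incoming external event: is the object (or activity, or expectation) already identified, and is the feature at stake already defined? A minimal Python sketch, with assumed names:

```python
from enum import Enum

class Knowledge(Enum):
    KK = "known known"      # defined feature of an identified object
    KU = "known unknown"    # undefined feature of an identified object
    UU = "unknown unknown"  # object (or activity, expectation) not yet represented

def classify_event(object_identified: bool, feature_defined: bool) -> Knowledge:
    """Map an external event to the knowns/unknowns taxonomy above."""
    if not object_identified:
        return Knowledge.UU
    return Knowledge.KK if feature_defined else Knowledge.KU

# A change in a defined feature vs. a new feature vs. an object not yet represented.
assert classify_event(True, True) is Knowledge.KK
assert classify_event(True, False) is Knowledge.KU
assert classify_event(False, False) is Knowledge.UU
```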

Given that decisions are by nature set in time-frames, they should be mapped to changes in environments, or more precisely to the information carried by the events taken into consideration.

Knowledge & Decision Making

Events bisect time-scales between before and after, past and future; as a corollary, the associated information (or lack thereof) about changes can be neatly allocated to the known and unknown of current and prospective states of affairs.

Changes in the current states of affairs are carried out by external events:

  • Known knowns (KK): when events are about already defined features of objects, activities or expectations, the associated data can be immediately used to update the states of their symbolic representation.
  • Known unknowns (KU): when events entail new features of already defined objects, activities or expectations, the associated data must be analyzed in order to adjust existing symbolic representations.
  • Unknown unknowns (UU): when events entail new objects, activities or expectations, the associated data must be analyzed in order to build new symbolic representations.

As changes in current states of affairs are shadowed by changes in their symbolic representation, they generate internal events which in turn may trigger changes in prospective states of affairs:

  • Known knowns (KK): updating the states of well-defined objects, activities or expectations may change the course of action but should not affect the set of possibilities.
  • Known unknowns (KU): changes in the set of features used to describe objects, activities or expectations may affect the set of tactical options, i.e ones that can be set for individual production life-cycles.
  • Unknown unknowns (UU): introducing new types of objects, activities or expectations is bound to affect the set of strategic options, i.e ones that encompass multiple production life-cycles.

Interestingly, those levels of knowledge appear to be congruent with usual horizons in decision-making: operational, tactical, and strategic:

The scope of decision-making is set by knowledge level

  • Operational: full information on actual states allows for immediate appraisal of prospective states.
  • Tactical: partially defined actual states allow for periodic appraisal of prospective states in synch with production cycles.
  • Strategic: undefined actual states don’t allow for periodic appraisal of prospective states in synch with production cycles; their definition may also be affected through feedback.

Given that those levels of appraisal are based on conjectural information (internal events) built from fragmentary or fuzzy data (external events), they have to be weighted by risks.

Weighting the Risks

Perfect information would guarantee a risk-free future and would render decision-making pointless. As a corollary, decisions based on unreliable information entail risks that must be traced back accordingly:

  • Operational: full and reliable information allows for risk-free decisions.
  • Tactical: when bounded by well-defined contexts with known likelihoods, partial or uncertain information allows for weighted costs/benefits analysis.
  • Strategic: set against undefined contexts or unknown likelihoods, decision-making cannot fully rely on weighted costs/benefits analysis and must encompass policy commitments, possibly with some transfer of risks, e.g through insurance contracts.
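
As a rough sketch of the tactical case (hypothetical figures, not a prescribed method), a weighted costs/benefits balance can be computed as an expected value over outcomes whose likelihoods are known; the strategic case, with unknown likelihoods, falls outside such a computation by construction.

```python
def weighted_net_benefit(outcomes):
    """Expected net benefit of a tactical option, given outcomes as
    (likelihood, benefit, cost) triples whose likelihoods sum to 1."""
    total_p = sum(p for p, _, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("likelihoods must be known and sum to 1 (tactical case only)")
    return sum(p * (benefit - cost) for p, benefit, cost in outcomes)

# An option with two well-defined outcomes in a bounded context:
option = [(0.7, 120.0, 50.0), (0.3, 20.0, 50.0)]
print(weighted_net_benefit(option))  # 0.7*70 + 0.3*(-30) = 40.0
```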

That provides some kind of built-in traceability between the nature and likelihood of events, the reliability of information, and the risks associated with decisions.

Decision Timing

Considering decision-making as real-time knowledge management driven by external (aka actual) events and governed by internal (aka symbolic) ones, how would that help define decision time frames ?

To begin with, such time frames would ensure that:

  • All the relevant data is captured as soon as possible (t1>t2).
  • All available data is analyzed as soon as possible (t2>t3).
  • Once a decision has been made, nothing can change during the interval between commitment and action (respectively t4 and t5).

Given those constraints, the focus of timing is to be on the interval between change in prospective states (t3) and decision (t4): once all information regarding prospective states is available, how long to wait before committing to a decision ?

Assuming that decisions are to be taken at the “last responsible moment”, i.e deferred for as long as postponing them doesn’t curtail the possible options, that interval will depend on the nature of decisions:

  • Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within production life-cycles.
  • Tactical decisions can only be enacted at the start of production cycles using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be taken from one cycle to the next. Otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment (see the sketch after this list).
  • Strategic decisions are meant to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
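
A minimal sketch of that tactical timing rule, assuming the analysis and enactment lags can be estimated: the last responsible moment for committing (t4) is the start of the next production cycle minus the enactment lag, and the information feeding the decision (t3) must be ready one analysis lag earlier.

```python
from datetime import datetime, timedelta

def tactical_deadlines(next_cycle_start: datetime,
                       enactment_lag: timedelta,
                       analysis_lag: timedelta = timedelta(0)):
    """Return (latest analysis completion t3, latest commitment t4) such that
    enactment (t4 -> t5) completes before the next production cycle starts."""
    commit_by = next_cycle_start - enactment_lag   # last responsible moment (t4)
    analyse_by = commit_by - analysis_lag          # information must be available (t3)
    return analyse_by, commit_by

# Example: next cycle starts Monday 08:00, enactment takes 2h, analysis 4h.
t3, t4 = tactical_deadlines(datetime(2014, 9, 15, 8, 0),
                            timedelta(hours=2), timedelta(hours=4))
print(t3, t4)  # 2014-09-15 02:00:00 2014-09-15 06:00:00
```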

The scope of decision-making should be aligned with architecture layers

Not surprisingly, when the scope of decision-making is set by knowledge level, it appears to coincide with architecture layers: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources. While that clearly calls for more verification and refinements, such congruence puts event processing, knowledge management, and decision-making within a common perspective.

Further Reading

Semantic Web: from Things to Memes

August 10, 2014

The new soup is the soup of human culture. We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme…

Richard Dawkins

The genetics of words

The word meme is the brainchild of Richard Dawkins in his book The Selfish Gene, published in 1976, well before the Web and its semantic soup. The emergence of the ill-named “internet-of-things” has brought a new perspective to Dawkins’ intuition: given the clear divide between actual worlds and their symbolic (aka web) counterparts, why not chart human culture with internet semantics ?

Symbolic Dissonance: Flowering Knives (Adel Abdessemed).

With interconnected digits pervading every nook and cranny of material and social environments, the internet may be seen as a way to a comprehensive and consistent alignment of language constructs with targeted realities: a name for everything, everything with its name. For that purpose it would suffice to use the web to allocate meanings and don things with symbolic clothes. Yet, as the world is not flat, the charting of meanings will be contingent on projections with dissonant semantics. Conversely, as meanings are not supposed to be set in stone, semantic configurations can be adjusted continuously.

Internet searches: words at work

Semantic searches (as opposed to form or pattern based ones) rely on textual inputs (key words or phrases) aiming at specific reality or information about it:

  • Searches targeting reality are meant to return sets of instances (objects or phenomena) meeting users’ needs (locations, people, events, …).
  • Searches targeting information are meant to return documents meeting users’ interest for specific topics (geography, roles, markets, …).
What are you looking for ?

Looking for information or instances.

Interestingly, the distinction between searches targeting reality and information is congruent with the rhetorical one between metonymy and metaphor, the former best suited for things, the latter for meanings.

Rhetoric: Metonymy & Metaphor

As noted above, searches can be guided by references to identified objects, the form of digital objects (sound, visuals, or otherwise), or associations between symbolic representations. Considering that finding referenced objects is basically a technical problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words. From that standpoint searching the web becomes a problem of rhetoric, namely: how to use language to get rapidly and effectively the most accurate outcome to a query. And for that purpose rhetoric provides two basic contraptions: metonymy and metaphor.

Both metonymy and metaphor are linguistic constructs used to substitute a word (or a phrase) by another without altering its meaning. When applied to searches, they are best understood in terms of extensions and intensions, extensions standing for the actual set of objects and behaviors, and intensions for the set of features that characterize these instances.

Metonymy uses contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, either term can be used in place of the other.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, the others being ignored. Hence, in contrast to metonymy, metaphor is based on intensions.

Metaphors use analogy (dashed line) to map terms whose intensions (dotted line) share a selected subset of features

As it happens, and not by chance, those rhetorical constructs can be mapped to categories of searches:

  • Metonymy will be used to navigate across instances of things and phenomena following structural, functional, or temporal associations.
  • Metaphors will be used to navigate across terms and concepts according to similarities, ontologies, and abstractions.

As a corollary, searches can be seen as scaffolds supporting the building of meanings.

Selected metaphors are used to extract occurrences to be refined using metonymies.

The building of meanings, back and forth between metonymies and metaphors
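
As a toy illustration (assumed data, not part of the original post), metonymy can be sketched as navigation across contiguous instances, and metaphor as navigation across terms sharing a subset of features:

```python
# Contiguity between instances (structural, functional, or temporal associations).
links = {
    "US President": {"White House"},
    "White House": {"US President", "Washington DC"},
}
# Features attached to terms (intensions).
features = {
    "pizzeria": {"restaurant", "italian", "informal"},
    "trattoria": {"restaurant", "italian"},
    "sushi bar": {"restaurant", "japanese"},
}

def metonymy(instance: str) -> set:
    """Navigate across instances through contiguity (shared extension context)."""
    return links.get(instance, set())

def metaphor(term: str, minimum_shared: int = 2) -> set:
    """Navigate across terms through similarity (shared subset of features)."""
    base = features.get(term, set())
    return {other for other, f in features.items()
            if other != term and len(base & f) >= minimum_shared}

print(metonymy("US President"))  # {'White House'}
print(metaphor("pizzeria"))      # {'trattoria'}
```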

Memes & their making

Today general purpose search engines combine brains and brawn to match queries to references, the former taking advantage of language parsers and statistical inferences, the latter running heuristics on gigantic repositories of data and searches. Given the exponential growth of accumulated data and the benefits of hindsight, such engines can continuously improve the relevancy of their answers; moreover, their advances are not limited to accuracy and speed but also embrace a better understanding of queries. And that brings about a qualitative change: accruing improved understandings to knowledge bases provides search engines with learning capabilities.

Assuming that such learning is autonomous, self-sustained, and encompasses concepts and categories, the Web could be seen as a semantic incubator for the development of meanings. That would vindicate Dawkins’ intuition comparing the semantic evolution of memes to the Darwinian evolution of living organisms.

Further Reading

External Links

Governance, Regulations & Risks

July 16, 2014

Governance & Environment

Confronted with spreading and sundry regulations on one hand, the blurring of enterprise boundaries on the other hand, corporate governance has to adapt information architectures to new requirements with regard to regulations and risks. Interestingly, those requirements seem to be driven by two different knowledge policies: what should be known with regard to compliance, and what should be looked for with regard to risk management.

Governance (Zhigang-tang)

 Compliance: The Need to Know

Enterprises are meant to conform to rules, some set at corporate level, others set by external entities. If one may assume that enterprise agents are mostly aware of the former, that’s not necessarily the case for the latter, which means that information and understanding are prerequisites for regulatory compliance:

  • Information: the relevant regulations must be identified, collected, and their changes monitored.
  • Understanding: the meanings of regulations must be analyzed and the consequences of compliance assessed.

With regard to information processing capabilities, it must be noted that, since regulations generally come as well structured information with formal meanings, the need for data processing will be limited, if at all.

With regard to governance, given the pervasive sources of external regulations and their potentially crippling consequences, the challenge will be to circumscribe the relevant sources and manage their consequences with regard to business logic and organization.

Regulatory Compliance vs Risks Management

 

Risks: The Will to Know

Assuming that the primary objective of risk management is to deal with the consequences (positive or negative) of unexpected events, its information priorities can be seen as the opposite of the ones of regulatory compliance:

  • Information: instead of dealing with well-defined information from trustworthy sources, risk management must process raw data from ill-defined or unreliable origins.
  • Understanding: instead of mapping information to existing organization and business logic, risk management will also have to explore possible associations with still potentially unidentified purposes or activities.

In terms of governance risks management can therefore be seen as the symmetric of regulatory compliance: the former relies on processing data into information and expanding the scope of possible consequences, the latter on translating information into knowledge and reducing the scope of possible consequences.

With regard to regulations governance is about reduction, with regard to risks it’s about expansion

Not surprisingly, that understanding coincides with the traditional view of governance as a decision-making process balancing focus and anticipation.

Decision-making: Framing Risks and Regulations

As noted above, regulatory compliance and risk management rely on different knowledge policies, the former restrictive, the latter inclusive. That distinction also coincides with the type of factors involved and the type of decision-making:

  • Regulations are deontic constraints, i.e ones whose assessment is not subject to enterprise decision-making. Compliance policies will therefore try to circumscribe the footprint of regulations on business activities.
  • Risks are alethic constraints, i.e ones whose assessment is subject to enterprise decision-making. Risks management policies will therefore try to prepare for every contingency.

Yet, when set in a governance perspective, that picture can be misleading because regulations are not always mandatory, and even mandatory ones may leave room for compliance adjustments. And when regulations are elective, compliance is less driven by sanctions or penalties than by the assessment of business or technical alternatives.

Regulations & Risks : decision patterns

Decision patterns: Options vs Arbitrage

Conversely, risks do not necessarily arise from unidentified events and upshots but can also come from well-defined outcomes with unknown likelihood. Managing the latter will not be very different from dealing with elective regulations except that decisions will be about weighted opportunity costs instead of business alternatives. Similarly, managing risks from unidentified events and upshots can be compared to compliance to mandatory regulations, with insurance policies instead of compliance costs.

What to Decide: Shifting Sands

As regulations can be elective, risks can be interpretative: with business environments relocated to virtual realms, decision-making may easily turn into crisis management based on conjectural and ephemeral web-driven semantics. In that case ensuing overlaps between regulations and risks can only be managed if data analysis and operational intelligence are seamlessly integrated with production systems.

When to Decide: Last Responsible Moment

Finally, with the scope of regulations and weighted risks duly assessed, one has to consider the time-frames of decisions about compliance and commitments.

Regarding elective regulations and defined risks, the time-frame of decisions is set at enterprise level in so far as options can be directly linked to business strategies and policies. That’s not the case for compliance to mandatory regulations or commitments exposed to undefined risks since both are subject to external contingencies.

Whatever the source of the time-frame, the question is when to decide, and the answer is at the “last responsible moment”, i.e not before further delay would curtail the possible options:

  • Whether elective or mandatory, the “last responsible moment” for compliance decision is static because the parameters are known.
  • Whether defined or not, the “last responsible moment” for commitments exposed to risks is dynamic because the parameters are to be reassessed periodically or continuously.

Compliance and risk taking: last responsible moments to decide

One step ahead along that path of reasoning, the ultimate challenge of regulatory compliance and risk management would be to use the former to steady the latter.

Further Readings

EA: Entropy Antidote

June 24, 2014

Cybernetics & Governance

When seen through cybernetics glasses, enterprises are social entities whose sustainability and capabilities hang on their ability to track changes in their environment and exploit opportunities before their competitors. As a corollary, corporate governance is to be contingent on fast, accurate and purpose-driven reading of  environments on one hand, effective use of assets on the other hand.

Entropy grows from confusion (László Moholy-Nagy)

And that will depend on enterprises’ capacity to capture data, process it into information, and translate information into knowledge supporting decision-making. Since that capacity is itself determined by architectures, a changing and competitive environment will require continuous adaptation of enterprises’ organization. That’s when disorder and confusion may increase: unless a robust and flexible organization can absorb and consolidate changes, variety will progressively clog the systems with specific information associated with local adjustments.

Governance & Information

Whatever its type, effective corporate governance depends on timely and accurate information about the actual state of assets and environments. Hence the need to assess such capabilities independently of the type of governance structure that has to be supported, and of any specific business context.

Effective governance is contingent on the distance between actual state of assets and environment on one hand, relevant information on the other hand.

That puts the focus on the processing of information flows supporting the governance of interactions between enterprises and their environment:

  • How to identify the relevant facts and monitor them as accurately and timely as required.
  • How to process external data from environment into information, and to consolidate the outcome with information related to enterprise objectives and internal states.
  • How to put the consolidated information to use as knowledge supporting decision-making.
  • How to monitor processes execution and deal with relevant feedback data.

What is behind enterprise ability to track changes in environment and exploit opportunities.

Enterprises being complex social constructs, those tasks can only be carried out through structured organization and communication mechanisms supporting the processing of information flows.

Architectures & Changes

Assuming that enterprise governance relies on accurate and timely information with regard to internal states and external environments, the first step would be to distinguish between the descriptions of actual contexts on one hand, of symbolic representation on the other hand.

Models are used to describe actual or symbolic objects and behaviors

Enterprise architectures can be described along two dimensions: nature (actual or symbolic), and target (objects or activities).

Even for that simplified architecture, assessing variety and information processing capabilities in absolute terms would clearly be a challenge. But assessing variations should be both easier and more directly useful.

Change being by nature relative to time, the first thing is to classify changes with regard to time-frames:

  • Operational changes are occurring, and can be dealt with, within the time-frame of processes execution.
  • Structural changes affect contexts and assets and cannot be dealt with at process level.

On that basis the next step will be to examine the tie-ups between actual changes and symbolic representations:

  • From actual to symbolic: how changes in environments are taken into account; how processes execution and state of assets are monitored.
  • From symbolic to actual: how changes in business models and processes design are implemented.

What moves first: actual contexts and processes or enterprise abstractions

The effects of  those changes on overall governance capability will depend on their source (internal or external) and modality (planned or not).

Changes & Information Processing

As far as enterprise governance is considered, changes can be classified with regard to their source and modality.

With regard to source:

  • Changes within the enterprise are directly meaningful (data>information), purpose-driven (information>knowledge), and supposedly manageable.
  • Changes in environment are not under control, they may need interpretation (data<?>information), and their consequences or use are to be explored (information<?>knowledge).

With regard to modality:

  • Data associated with planned changes are directly meaningful (data>information) whatever their source (internal or external); internal changes can also be directly associated with purpose (information>knowledge);
  • Data associated with unplanned internal changes can be directly interpreted (data>information) but their consequences have to be analyzed (information<?>knowledge); data associated with unplanned external changes must be interpreted (data<?>information).

Changes can be classified with regard to their source (enterprise or environment) and modality (planned or not).
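
That grid can be sketched as a small decision table (an illustrative Python sketch with assumed labels, not a prescribed implementation):

```python
def processing_needs(source: str, planned: bool) -> dict:
    """Return whether each processing step is direct or needs extra work,
    following the source (enterprise vs environment) x modality (planned or not) grid."""
    internal = (source == "enterprise")
    data_to_information = "direct" if (planned or internal) else "interpretation needed"
    information_to_knowledge = "direct" if (planned and internal) else "analysis needed"
    return {"data>information": data_to_information,
            "information>knowledge": information_to_knowledge}

print(processing_needs("enterprise", planned=True))
# {'data>information': 'direct', 'information>knowledge': 'direct'}
print(processing_needs("environment", planned=False))
# {'data>information': 'interpretation needed', 'information>knowledge': 'analysis needed'}
```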

Assuming with Stafford Beer that viable systems must continuously adapt their capabilities to their environment, this taxonomy has direct consequences for their governance:

  • Changes occurring within planned configurations are meant to be dealt with, directly (when stemming from within enterprise), or through enterprise adjustments (when set in its environment).
  • That assumption cannot be made for changes occurring outside planned configurations because the associated data will have to be interpreted and consequences identified prior to any decision.

Enterprise governance will therefore depend on the way those changes are taken into account, and in particular on the capability of enterprise architectures to process the flows of associated data into information, and to use it to deal with variety.

EA & Models

Originally defined by thermodynamics as a measure of heat dissipation, the concept of entropy has been taken over by cybernetics as a measure of the (supposedly negative) variation in the value of information supporting corporate governance.

As noted above, the key challenge is to manage the relevancy and timely interpretation and use of the data, in particular when new data cannot be mapped into a predefined semantic frame, as may happen with unplanned changes in contexts. How that can be achieved will depend on the processing of data and its consolidation into information as carried out at enterprise level or by business and technical units.

Given that data is captured at the periphery of systems, one may assume that the monitoring of operations performed by business and technical units is not to be significantly affected by architectures. The same assumption can be made for market research meant to be carried out at enterprise level.

Architecture Layers and Information Processing

Within that working assumption, the focus is to be put on enterprise architecture capability to “read” environments (from data to information), as well as to “update” itself (putting information to use as knowledge).

With regard to “reading” capabilities the primary factor will be traceability:

  • At technical level traceability between components and applications is required if changes in business operations are to be mapped to IT architecture.
  • At organizational level, the critical factor for governance will be the ability to adapt the functionalities of supporting systems to changes in business processes. And that will be significantly enhanced if both can be mapped to shared functional concepts.

Once the “readings” of external changes are properly interpreted with regard to internal assets and objectives, enterprise governance will have to decide if changes can be dealt with by the current architecture or if it has to be modified. Assuming that change management is an intrinsic part of enterprise governance, “updating” capabilities will rely on a continuous, comprehensive and consistent management of information, best achieved through models, as epitomized by the Model Driven Architecture (MDA) framework.

Models as bridges between data and knowledge

Based on requirements capture and analysis, respective business, functional, and technical information is consolidated into models:

  • At technical level platform specific models (PSMs) provide for applications and components traceability. They support maintenance and configuration management and can be combined with design patterns to build modular software architecture from reusable components.
  • At organizational level, platform independent models (PIMs) are used to align business processes with systems functionalities. Combined with functional patterns the objective is to use service oriented architectures as a level of indirection between organization and information technology.
  • At enterprise level, computation independent models (CIMs) are meant to bring together corporate tangible and intangible assets. That’s where corporate culture will drive architectural changes  from systems legacy, environment challenges, and planned designs.
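
A minimal sketch of such model-based traceability, with hypothetical element names; the point is only that an element at one layer can be traced to the elements that realize it at the layer below:

```python
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    name: str
    layer: str                                     # "CIM", "PIM", or "PSM"
    realizes: list = field(default_factory=list)   # traced elements of the layer above

invoice_mgmt = ModelElement("Invoice management", "CIM")
billing_srv = ModelElement("Billing service", "PIM", realizes=[invoice_mgmt])
billing_impl = ModelElement("BillingFacade (Java EE)", "PSM", realizes=[billing_srv])

def directly_impacted(changed: ModelElement, elements: list) -> list:
    """Walk traceability links one step down: which elements realize the changed one?"""
    return [e for e in elements if changed in e.realizes]

print([e.name for e in directly_impacted(invoice_mgmt, [billing_srv, billing_impl])])
# ['Billing service']
```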

EA & Entropy

Faced with continuous changes in their business environment and competition, enterprises have to navigate between rocks of rigidity and whirlpools of variety, the former policies trying to carry on with existing architectures, the latter adding variants to business objects, processes, or channels as they appear. Meeting environment challenges while warding off growing complexity will depend on the plasticity and versatility of architectures, more precisely on their ability to “digest” the variety of data and transform it into corporate knowledge. Along that perspective enterprise architecture can be seen as a natural antidote to entropy, like a corporate cousin of Maxwell’s demon, standing at enterprise gates and allowing changes in a way that would decrease internal complexity relative to the external one.

Further Readings

The Finger & the Moon: Fiddling with Definitions

June 5, 2014

Objective

Given the glut of redundant, overlapping, circular, or conflicting definitions, it may help to remember that “define” literally means putting limits upon. Definitions and their targets are two different things, the former being language constructs (intensions), the latter sets of instances (extensions). As a Chinese patriarch once said, the finger is not to be confused with the moon.

Fiddling with words: to look at the moon, it is necessary to gaze beyond the finger.

In order to gauge and cut down the distance between words and world, definitions can be assessed and improved at functional and semantic levels.

What’s In & What’s Out

At the minimum a definition must support clear answers as to whether any occurrence is to be included in or excluded from the defined set. Meeting that straightforward condition will steer clear of self-sustained semantic wanderings.

Functional Assessment

Since definitions can be seen as a special case of non exhaustive classifications, they can be assessed through a straightforward two-step routine:

  1. Effectiveness: applying candidate definition to targeted instances must provide clear and unambiguous answers (or mutually exclusive subsets).
  2. Usefulness: the resulting answers (or subsets) must directly support well-defined purposes.

Such a routine meets the parsimony principle of Occam’s razor by producing outcomes that are consistent (“internal” truth, i.e no ambiguity), sufficient (they meet their purpose), and simple (mutually exclusive classifications are simpler than overlapping ones).

Functional assessment should also take feedback into account as instances can be refined and purposes reconsidered with the effect of improving initially disappointing outcomes. For instance, a good requirements taxonomy is supposed to be used to allocate responsibilities with regard to acceptance, and carrying out the classification may be accompanied by an improvement of requirements capture.

Semantic Improvement

Once functionally checked, candidate definitions can be assessed for semantics, and adjusted so as to maximize the scope and consistency of their footprint. While different routines can be used, all rely on tweaking words with neighboring meanings.

Here is a routine posted by Alexander Samarin: find a sentence with a term, substitute the term by its definition, and check if the sentence still makes sense. (Usually a few sentences are used for such a check.)

For example, taking three separate definitions:
1. Discipline is a system of governing rules.
2. A business process is an explicitly defined coordination for guiding the enactment of business activity flows.
3. Business Process Management (BPM) is a discipline for the use of any combination of modeling, automation, execution, control, measurement and optimization of business processes.

And a combined (or stand-alone) definition:

Business process management (BPM) is [a system of governing rules] for the use of any combination of modeling, automation, execution, control, measurement and optimization of [explicitly defined coordination for guiding the enactment of business activity flows]
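
That routine is easy to mechanize; the sketch below (an illustration, not Samarin’s own tooling) substitutes each term by its bracketed definition so that the expanded sentence can be checked by eye:

```python
import re

definitions = {
    "a discipline": "a system of governing rules",
    "business processes": "explicitly defined coordination for guiding "
                          "the enactment of business activity flows",
}

def expand(sentence: str, terms: dict) -> str:
    """Replace each term by its bracketed definition (longest terms first)
    so one can check whether the sentence still makes sense."""
    for term in sorted(terms, key=len, reverse=True):
        sentence = re.sub(r"\b%s\b" % re.escape(term),
                          "[" + terms[term] + "]", sentence, flags=re.IGNORECASE)
    return sentence

definiens = ("a discipline for the use of any combination of modeling, automation, "
             "execution, control, measurement and optimization of business processes")
print("Business Process Management (BPM) is " + expand(definiens, definitions))
```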

Further Readings

EA Documentation: Taking Words for Systems

May 18, 2014

In so many words

Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises and architectures ?

Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performances. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.

Documents’ purposes and users

The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).

Models are used to describe actual or symbolic objects and behaviors

That could be achieved with MBE/MDA approaches.

Further readings

 

Sifting through a Web of Things

September 27, 2013

The world is the totality of facts, not of things.

Ludwig Wittgenstein

Objective

At its inception, the young internet was all about sharing knowledge. Then, business concerns came to the web and the focus was downgraded to information. Now, exponential growth turns a surfeit of information into meaningless data, with the looming risk of web contents being once again downgraded. And the danger is compounded by the inroads of physical objects bidding for full netizenship and equal rights in the so-called “internet of things”.

How to put words on a web of things (Ai Weiwei)

As it happens, that double perspective coincides with two basic search mechanisms, one looking for identified items and the other for information contents. While semantic web approaches are meant to deal with the latter, it may be necessary to take one step further and to bring the problem (a web of things and meanings) and the solutions (search strategies) within an integrated perspective.

Down with the System Aristocrats

The so-called “internet second revolution” can be summarized as the end of privileged netizenship: down with the aristocracy of systems with their absolute lid on internet residency, within the new web everything should be entitled to have a voice.

Before and after the revolution: everything should have a say

But then, events are moving fast, suggesting behaviors unbecoming to the things that used to be. Hence the need for a reasoned classification of netizens based on their identification and communication capabilities:

  • Humans have inherent identities and can exchange symbolic and non symbolic data.
  • Systems don’t have inherent identities and can only exchange symbolic data.
  • Devices don’t have inherent identities and can only exchange non symbolic data.
  • Animals have inherent identities and can only exchange non symbolic data.
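
That classification boils down to two capabilities per kind of netizen, as in the following sketch (illustrative Python, names assumed):

```python
from dataclasses import dataclass

@dataclass
class Netizen:
    kind: str
    inherent_identity: bool      # does identity come with the entity itself?
    symbolic_exchange: bool      # can it exchange symbolic data?
    non_symbolic_exchange: bool  # can it exchange non symbolic data?

CLASSIFICATION = [
    Netizen("human",  True,  True,  True),
    Netizen("system", False, True,  False),
    Netizen("device", False, False, True),
    Netizen("animal", True,  False, True),
]

def kinds_with(symbolic=None, identity=None):
    """Filter the classification on communication and identification capabilities."""
    return [n.kind for n in CLASSIFICATION
            if (symbolic is None or n.symbolic_exchange == symbolic)
            and (identity is None or n.inherent_identity == identity)]

print(kinds_with(symbolic=True))                  # ['human', 'system']
print(kinds_with(identity=True, symbolic=False))  # ['animal']
```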

Along that perspective, speaking about the “internet of things” can be misleading because the primary transformation goes the other way:  many systems initially embedded within appliances (e.g cell phones) have made their coming out by adding symbolic user interfaces, mutating from devices into fully fledged systems.

Physical Integration: The Meaning of Things

With embedded systems colonizing every nook and cranny of the world, the supposedly innate hierarchical governance of systems over objects is challenged as the latter call for full internet citizenship. Those new requirements can be expressed in terms of architecture capabilities:

  • Anywhere (Where): objects must be localized independently of systems. That’s customary for physical objects (e.g Geo-localization), but may be more challenging for digital ones on their way across the net.
  • Anytime (When): behaviors must be synchronized over asynchronous communication channels. Existing mechanisms used for actual processes (e.g the Network Time Protocol) may have to be set against modal logic when used for their representation.
  • Anybody (Who): while business systems don’t like anonymity and rely on their interfaces to secure access, things of the internet are to be identified whatever their interface (e.g RFID).
  • Anything (What): objects must be managed independently of their nature, symbolic or otherwise (e.g 3D printed objects).
  • Anyhow (How): contrary to business systems, processes don’t have to follow predefined scripts; versatility and non determinism are the rules of the game.

Take a sortie to a restaurant as an example: the actual event is associated with a reservation, car(s) and phone(s) are active objects geo-localized at a fixed place and possibly linked to diners, great wines can be authenticated directly by smartphone applications, phones are used for conversations and pictures, possibly for adding to reviews, and friends in the neighborhood can be automatically informed of the sortie and invited to join.

A dinner on the Net: place (restaurant), event (sortie), active objects (car, phone), passive object (wine), message (picture), business objects (reviews, reservations), and social beholders (network friends).

As this simple example illustrates, the internet of things brings together dumb objects, smart systems, and knowledgeable documents. Navigating such a tangle will require more than the Semantic Web initiative because its purpose points in the opposite direction, back to the origin of the internet, namely how to extract knowledge from data and information.

Moreover, while most of those “things” fall under the governance of the traditional internet of systems, the primary factor of change comes from the exponential growth of smart physical things with systems of their own. When those systems are “embedded”, the representations they use are encapsulated and cannot be accessed directly as symbolic ones. In other words those agents are governed by hidden agendas inaccessible to search engines. That problem is illustrated a contrario (things are not services) by service oriented architectures, one of whose primary responsibilities is to support services discovery.

Semantic Integration: The Actuality of Meanings

The internet of things is supposed to provide a unified perspective on physical objects and symbolic representations, with the former taken as they come and instantly donned in some symbolic skin, and the latter boiled down to their documentary avatars (as Marshall McLuhan famously said, “the medium is the message”). Unfortunately, this goal is also a challenge because if physical objects can be uniformly enlisted across the web, that’s not the case for symbolic surrogates which are specific to social entities and managed by their respective systems accordingly.

With the Internet of Systems, social entities with common endeavors agree on shared symbolic representations and exchange the corresponding surrogates as managed by their systems. The Internet of Things for its part is meant to put an additional layer of meanings supposedly independent of those managed at systems level. As far as meanings are concerned, the latter is flat, the former is hierarchized.

The internet of things is supposed to level the meaning fields, reducing knowledge to common sense.

That goal raises two questions: (1) what belongs to the part governed by the internet of things, and (2) how is its flattened governance to be related to the structured one of the internet of systems ?

A World of Phenomena

Contrary to what its name may suggest, the internet of things deals less with objects than with phenomena, the reason being that things must manifest themselves, or their existence be detected,  before being identified, if and when it’s possible.

Things first appear on radar when some binary footprint can be associated to a signalling event. Then, if things are to go further, some information has to be extracted from captured data:

  • Coded data could be recognized by a system as an identification tag pointing to recorded known features and meanings, e.g a bar code on a book.
  • The whole thing could be turned into its digital equivalent, and vice versa, e.g a song or a picture.
  • Context and meanings could only be obtained by linking the captured data to representations already identified and understood, e.g a religious symbol.
How to put things in place

From data to information: how to add identity and meaning to things

Whereas things managed by existing systems already come with net identities with associated meaning, that’s not necessarily the case for digitized ones as they may or may not have been introduced as surrogates to be used as their real counterparts: if handshakes can stand as symbolic contract endorsements, pictures thereof cannot be used as contract surrogates. Hence the necessary distinction between two categories of formal digitization:

  • Applied to symbolic objects (e.g a contract), formal digitization enables the direct processing of digital instances as if performed on actual ones, i.e with their conventional (i.e business) currency. While those objects have no counterpart (they exist simultaneously in both realms), such digitized objects have to bear an identification issued by a business entity, and that puts them under the governance of standard (internet of systems) rules.
  • Applied to binary objects (e.g a facsimile), formal digitization applies to digital instances that can be identified and authenticated on their own, independently of any symbolic counterpart. As a corollary, they are not meant to be managed or even modified and, as illustrated by the marketing of cultural contents (e.g music, movies, books …), their actual format may be irrelevant. Provided de facto standards are agreed upon, binary objects epitomize internet things.

To conclude on footprint, the Internet of Things appears as a complement more than a substitute as it abides by the apparatus of the Internet of Systems for everything already under its governance, introducing new mechanisms only for the otherwise uncharted things set loose in outer reaches. Can the same conclusion hold for meanings ?

Organizational vs Social Meanings

As epitomized by handshakes and contracts, symbolic representations are all about how social behaviors are sanctioned.

Signals are physical events with open-ended interpretations

When not circumscribed within organizational boundaries, social behaviors are open to different interpretations.

In system-based networks representations and meanings are defined and governed by clearly identified organizations, corporate or otherwise. That’s not necessarily the case for networks populated by software agents performing unsupervised tasks.

The first generations of those internet robots (aka bots) were limited to automated tasks, performed on the account of their parent systems, to which they were due to report. Such neat hierarchical governance is being called into question by bots fired and forgotten by their maker, free of reporting duties, their life wholly governed by social events. That’s the case with the internet of things, with significant consequences for searches.

As noted above, the internet of things can consistently manage both system-defined identities and the new ones it introduces for things of its own. But, given a network’s job description, the same consolidation cannot even be considered for meanings: networks are supposed to be kept in complete ignorance of contents, and walls between addresses and knowledge management must tower well above the clouds. As a corollary, the overlapping of meanings is bound to grow with the expanse of things, and the increase will not be linear.

Contrary to identities, meanings usually overlap when things float free from systems’ governance.

That sheds some light on the so-called “virtual world”, one made of representations detached from identified roots in the actual world. And there should be no misunderstanding: “actual” doesn’t refer to physical objects but to objects and behaviors sanctioned by social entities, as opposed to virtual, which includes the ones whose meaning cannot be neatly anchored to a social authority.

That makes searches in the web of things doubly challenging as they have to deal with both overlapping and shifting semantics.

A Taxonomy of Searches

Semantic searches (forms and pattern matching should be seen as a special case) can be initiated by any kind of textual input, key words or phrases. As searches, they should first be classified with regard to their purpose: finding some specific instance or collecting information about some topic.

Searches about instances are meant to provide references to:

  • Locations, addresses, …
  • Antiques, stamps,…
  • Books, magazines,…
  • Alumni, friends,…
  • Concerts, games, …
  • Cooking recipes, administrative procedures,…
  • Status of shipment,  health monitoring, home surveillance …

What are you looking for ?

Searches about categories are meant to provide information about:

  • Geography, …
  • Products marketing , …
  • Scholarly topics, market researches…
  • Customers relationships, …
  • Business events, …
  • Business rules, …
  • Business processes …

That taxonomy of searches is congruent with the critical divide between things and symbolic representations.

Things and Symbolic Representations

As noted above, searches can be guided by references to identified objects, the form of digital objects (sound, visuals, or otherwise), or associations between symbolic representations. Considering that finding referenced objects is basically an indexing problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words (as opposed to identifiers and forms). From that standpoint searches are best understood in the broader semantic context of extensions and intensions, the former being the actual set of objects and phenomena, the latter a selected set of features shared by these instances.

Searching Steps

A search can therefore be seen as an iterative process going back and forth between descriptions and occurrences or, more formally, between intensions and extensions. Depending on the request, iterations are made of four steps:

  • Given a description (intension) find the corresponding set of instances (extension); e.g “restaurants” >  a list of restaurants.
  • Given an instance (extension), find a description (intension); e.g “Alberto’s Pizza” > “pizzerias”.
  • Extend or generalize a description to obtain a better match to request and context; e.g “pizzerias” > “Italian restaurants”.
  • Trim or refine instances to obtain a better match to request and context; e.g a list of restaurants > a list of restaurants in the Village.

Iterations are repeated until the outcome is deemed to satisfy the quality parameters.
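
Those four steps can be played out on a toy catalog (assumed data) to show one back-and-forth iteration between extensions and intensions:

```python
# Toy catalog: intensions (descriptions) -> extensions (instances), plus a taxonomy.
catalog = {
    "pizzerias": {"Alberto's Pizza", "Luigi's"},
    "italian restaurants": {"Alberto's Pizza", "Luigi's", "Trattoria Roma"},
    "restaurants": {"Alberto's Pizza", "Luigi's", "Trattoria Roma", "Sushi Bar Zen"},
}
broader = {"pizzerias": "italian restaurants", "italian restaurants": "restaurants"}
locations = {"Alberto's Pizza": "Village", "Luigi's": "Midtown",
             "Trattoria Roma": "Village", "Sushi Bar Zen": "Village"}

def instances_of(description):        # step 1: intension -> extension
    return catalog.get(description, set())

def description_of(instance):         # step 2: extension -> most specific intension
    return min((d for d, xs in catalog.items() if instance in xs),
               key=lambda d: len(catalog[d]), default=None)

def generalize(description):          # step 3: widen the description
    return broader.get(description, description)

def refine(instances, area):          # step 4: trim instances to the request's context
    return {i for i in instances if locations.get(i) == area}

# "Alberto's Pizza" -> "pizzerias" -> "italian restaurants" -> those in the Village.
wider = generalize(description_of("Alberto's Pizza"))
print(refine(instances_of(wider), "Village"))  # {"Alberto's Pizza", 'Trattoria Roma'}
```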

The benefit of those distinctions is to introduce explicit decision points with regard to the reference models guiding the searches. Depending on purpose and context, such models could be:

  • Inclusive: can be applied to any kind of search.
  • Domain specific: can only be applied to circumscribed domains of knowledge. That’s the aim of the semantic web initiative and the Web Ontology Language (OWL).
  • Institutional: can only be applied within specific institutional or business organizations. They could be available to all or through services with restricted access and use.

From Meanings to Things, and back

The stunning performance of modern search engines comes from a combination of brawn and brains, the brainy part for grammars and statistics, the brawny one for running heuristics on gigantic repositories of linguistic practices and web searches. Moreover, those performances improve “naturally” with the accumulation of data pertaining to virtual events and behaviors. Nonetheless, search engines have grey or even blind spots, and there may be a downside to the accumulation of social data, as it may increase the gap between social and corporate knowledge, and consequently undermine the coherence of outcomes.

Meanings can be inclusive, domain specific, or institutional

That can be illustrated by a search about Amedeo Modigliani:

  • An inclusive search for “Modigliani” will use heuristics to identify the artist (a). An organizational search for a homonym (e.g a bank customer) would be dealt with at enterprise level, possibly through an intranet (c).
  • A search for “Modigliani’s friends” may look for the artist’s Facebook friends if kept at the inclusive level (a1), or switch to a semantic context better suited to the artist (a2). The same outcome would have been obtained with a semantic search (b).
  • Searches about auction prices may be redirected or initiated directly, possibly subject to authorization (c).

Further Reading

External Links

Enterprise Governance & Knowledge

July 4, 2013

Knowledgeable Processes

While turf wars may play a part, the debate about Enterprise and Systems governance is rooted in a more serious argument, namely, how the divide between enterprise and systems architectures may affect decision-making.

Informed decision-making (Eleanor Antin)

The answer to that question can be boiled down to three guidelines respectively for capabilities, functionalities, and visibility.

Architecture Capabilities

From an architecture perspective, enterprises are made of human agents, devices, and symbolic (aka information) systems. From a business perspective, processes combine three kinds of tasks:

  • Authority: deciding how to perform processes and make commitments in the name of the enterprise. That can only be done by human agents, individually or collectively.
  • Execution: processing physical or symbolic flows between the enterprise and its context. Any of those can be done by human agents, individually or collectively, or devices and software systems subject to compatibility qualifications.
  • Control: recording and checking the actual effects of processes execution. Any of those can be done by human agents, individually or collectively, some by software systems subject to qualifications, and none by devices.
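
For illustration, those constraints could be written down as a small rule table; the Python names below merely restate the bullet points above and are assumptions made for the sake of the example.

```python
# Hypothetical encoding of the capability constraints listed above:
# authority is reserved to human agents, execution is open to all performers
# (subject to qualification), control excludes devices.

HUMAN, DEVICE, SOFTWARE = "human agent", "device", "software system"

ELIGIBLE_PERFORMERS = {
    "authority": {HUMAN},                    # commitments in the name of the enterprise
    "execution": {HUMAN, DEVICE, SOFTWARE},  # physical or symbolic flows
    "control":   {HUMAN, SOFTWARE},          # recording and checking effects
}

def may_perform(performer: str, task: str) -> bool:
    """Check a proposed assignment against the capability constraints."""
    return performer in ELIGIBLE_PERFORMERS.get(task, set())

assert may_perform(HUMAN, "authority")
assert not may_perform(DEVICE, "control")
```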

Hence, and whatever the solutions, the divide between enterprise and systems will have to be aligned on those constraints:

  • Platforms and technology affect operational concerns, i.e physical access to systems and the where and when of process execution.
  • Enterprise organization determines the way business is conducted: who is authorized to what (business objects), and how (business logic).
  • System functionalities set the part played by systems in support of business processes.
Enterprise Architecture Capabilities and Business Processes

That gives the first guideline of systems governance:

Guideline #1 (capabilities): Objectives and roles must be set at enterprise level, technical constraints about deployment and access must be defined at platform level, and functional architecture must be designed so as to get the maximum from the former subject to the latter’s constraints.

Informed Decisions: The Will to Know

At its core, enterprise governance is about decision-making; on that basis the purpose of systems is to feed processes with the relevant information so that agents can put it to use as knowledge.

Those flows can be neatly described by crossing the origin of data (enterprise, systems, platforms) with the processes using the information (business, software engineering, services management):

  • Information processing begins with data, which is no more than registered facts: texts, numbers, sounds, visuals, etc. Those facts are collected by systems through the execution of business, engineering, and servicing processes; they reflect the state of business contexts, enterprise, and platforms.
  • Data becomes information when comprehensively and consistently anchored to identified constituents (objects, activities, events,…) of contexts, organization, and resources.
  • Information becomes knowledge when put to use by agents with regard to their purpose: business, engineering, services.
Information processing: from data to knowledge and back
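
As an illustration, that progression can be sketched as a minimal pipeline; the class names and fields below are assumptions introduced for the example, not a prescribed data model.

```python
# A minimal sketch of the data -> information -> knowledge progression described above.
# Class names and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Data:                      # registered facts: texts, numbers, sounds, visuals
    payload: object
    source: str                  # enterprise, systems, or platforms

@dataclass
class Information:               # data anchored to identified constituents
    anchor: str                  # object, activity, or event it refers to
    content: object

@dataclass
class Knowledge:                 # information put to use for a purpose
    purpose: str                 # business, engineering, or services
    basis: Information

def anchor(data: Data, constituent: str) -> Information:
    return Information(anchor=constituent, content=data.payload)

def put_to_use(info: Information, purpose: str) -> Knowledge:
    return Knowledge(purpose=purpose, basis=info)

# hypothetical usage
fact = Data(payload="order #123 shipped", source="systems")
know = put_to_use(anchor(fact, constituent="order #123"), purpose="business")
```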

Along that perspective, capabilities can be further refined with regard to decision-making.

  • Starting with business logic one should factor out decision points and associated information. That will determine the structure of symbolic representations and functional units.
  • Then, one may derive decision-making roles, together with implicit authorizations and access constraints. That will determine the structure of I/O flows and the logic of interactions.
  • Finally, the functional architecture will have to take into account synchronization and deployment constraints on events notification to and from processes.
Who should know What, Where, and When

That can be condensed into the second guideline of system governance:

Guideline #2 (functionalities): With regard to enterprise governance, the role of systems is to collect data and process it into information organized along enterprise concerns and objectives, enabling decision makers to select and pull relevant information and translate it into knowledge.

Qualified Information: The Veils of Ignorance

Ideally, decision-making should be neatly organized with regard to contexts and concerns:

  • Contexts are set by architecture layers: enterprise organization, system functionalities, platforms technology.
  • Concerns are best identified through processes: business, engineering, or supporting services.
Qualified Information Flows across Architectures and Processes

Actually, decision scopes overlap and their outcomes are interwoven.

While distinctions with regard to contexts are supposedly built-in at the source (enterprise, systems, platforms), that’s not the case for concerns whose distinction usually calls for reasoned choices supported by different layers of governance:

  • Assets: shared decisions whose outcome bears upon several business domains and cycles. Those decisions may affect all architecture layers: enterprise (organization), systems (services), or platforms (purchased software packages).
  • Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
  • Non functional: shared decisions about scale and performance affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
Separation of Concerns and Requirements Taxonomy

Qualified Information and Decision Making

As epitomized by non functional requirements, those layers of governance don’t necessarily coincide with the distinction between business, engineering, and servicing concerns. Yet, one should expect the nature of decisions to be set prior to the actual decision-making, and decision makers to be presented with only the relevant information; for instance:

  • Functional requirements should be decided given business requirements and services architecture.
  • Scalability (operational requirements) should be decided with regard to enterprise’s objectives and organization.

Hence the third guideline of system governance:

Guideline #3 (visibility): Systems must feed processes with qualified information according to contexts (business, organization, platforms) and governance level (assets, user’s value, operations) of decision makers.
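
What such qualification could look like is sketched below; the tags and the filtering function are illustrative assumptions rather than a prescribed implementation.

```python
# Hypothetical illustration of guideline #3: feed decision makers only with
# information qualified by context and governance level. Tags and items are
# assumptions made for the example.

CONTEXTS = {"business", "organization", "platforms"}
LEVELS = {"assets", "users_value", "operations"}

def qualified(items, context, level):
    """Select the items relevant to a decision maker's context and governance level."""
    assert context in CONTEXTS and level in LEVELS
    return [i for i in items if i["context"] == context and i["level"] == level]

items = [
    {"context": "business", "level": "users_value", "content": "functional requirement"},
    {"context": "organization", "level": "operations", "content": "scalability target"},
]
print(qualified(items, "organization", "operations"))
```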

Further Reading

Tests in Driving Seats

April 24, 2013

Objective

Contrary to its manufacturing cousin, a long-time devotee of preventive policies, software engineering is still ambivalent regarding the benefits of integrating quality management with development itself. That should raise some questions, as one would expect the quality of symbolic artifacts to be much easier to manage than that of their physical counterparts, if for no other reason than that the former only has to check symbolic outcomes against symbolic specifications while the latter must also overcome the contingencies of non symbolic artifacts.

Walking Quality Hall (E. Erwitt)

Thanks to agile approaches, lessons from manufacturing are progressively being learned, with lean and just-in-time principles making tentative inroads into software engineering. Taking advantage of the homogeneity of symbolic development flows, agile methods have forsaken phased processes in favor of iterative ones, making a priority of continuous and value-driven deliveries to business users. Instead of predefined sequences of dedicated tasks, products are developed through iterations regrouping definition, building, and acceptance into the same cycles. That puts differentiated documentation and models in the back seats, and may also introduce a new paradigm by putting tests in the driving ones.

From Phased to Iterative Tests Management

Traditional (aka phased) processes follow a corrective strategy: tests are performed according to a Last In First Out (LIFO) framework, for components (unit tests), system (integration), and business (acceptance). As a consequence, faults in functional architecture risk being identified after components completion, and flaws in organization and business processes may not emerge before the integration of system functionalities. In other words, the faults with the most wide-ranging consequences may be the last to be detected.

Phased and Iterative approaches to tests

Iterative approaches follow a preemptive strategy: the sooner artifacts are tested, the better. The downside is that without differentiated and phased objectives, there is a question mark over the kind of specifications against which software products are to be tested; likewise, the question is how results are to be managed across iteration cycles, especially if changing requirements are to be taken into account.

Looking for answers, one should first consider how requirements taxonomy can support tests management.

Requirements Taxonomy and Tests Management

Whatever the methods or forms (users’ stories, use cases, functional specifications, etc), requirements are meant to describe what is expected from systems, and as such they have two main purposes: (1) to serve as a reference for architects and engineers in software design and (2) to serve as a reference for tests and acceptance.

With regard to those purposes, phased development models have been providing clearly defined steps (e.g requirements, analysis, design, implementation) and corresponding responsibilities. But when iterative cycles are applied to progressively refined requirements, those “facilities” are no longer available. Nonetheless, since tests and acceptance are still to be performed, a requirements taxonomy may replace phased steps as a testing framework.

Taxonomies being built on purpose, one supporting iterative tests should consider two criteria, one driven by targeted contents, the other by modus operandi:

With regard to contents, requirements must be classified depending on who’s to decide: business and functional requirements are driven by users’ value and directly contribute to business experience; non functional requirements are driven by technical considerations. Overlapping concerns are usually regrouped as quality of service.

Requirements with regard to Acceptance.

That requirements taxonomy can be directly used to build its testing counterpart. As developed by D. Leffingwell (see selected readings), tests should also be classified with regard to their modus operandi, the distinction being between those that can be performed continuously along development iterations and those that are only relevant once products are set within their technical or business contexts. As it happens, those requirements and tests classifications are congruent:

  • Unit and component tests (Q1) cover technical requirements and can be performed on development artifacts independently of their functionalities.
  • Functional tests (Q2) deal with system functionalities as expressed by users (e.g with stories or use cases), independently of operational or technical considerations.
  • System acceptance tests (Q3) verify that those functionalities, when performed at enterprise level, effectively support business processes.
  • System qualities tests (Q4) verify that those functionalities, when performed at enterprise level, are supported by architecture capabilities.
Tests Matrix for target and MO (adapted from D. Leffingwell).
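
The matrix can be read as a simple classification rule crossing targets with modus operandi; the function below is a hypothetical restatement of the four quadrants, not part of Leffingwell’s framework.

```python
# Hypothetical restatement of the Q1-Q4 matrix: crossing what is targeted
# (technical vs business concerns) with when tests can run (along development
# iterations vs once products are set in their contexts).

def quadrant(concern: str, scope: str) -> str:
    """concern: 'technical' or 'business'; scope: 'development' or 'system'."""
    table = {
        ("technical", "development"): "Q1: unit and component tests",
        ("business",  "development"): "Q2: functional tests",
        ("business",  "system"):      "Q3: system acceptance tests",
        ("technical", "system"):      "Q4: system qualities tests",
    }
    return table[(concern, scope)]

print(quadrant("business", "development"))   # Q2: functional tests
```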

Besides the specific use of each criterion in deciding who’s to handle tests, and when, combining criteria brings additional answers regarding automation: product acceptance should be performed manually at business level, preferably by tools at system level; tests performed along development iterations can be fully automated for units and components (white-box), but only partially for functionalities (black-box).

That tests classification can be used to distinguish between phased and iterative tests: the organization of tests targeting products and systems from business (Q3) or technology (Q4) perspectives is clearly not supposed to be affected by development models, phased or iterative, even if resources used during development may be reused. That’s not the case for the organization of the tests targeting functionalities (Q2) or components (Q1).

Iterative Tests

Contrary to tests aiming at products and systems (Q3 and Q4), those performed on development artifacts cannot be set on fixed and well-defined specifications: being managed within iteration cycles they must deal with moving targets.

Unit and component tests (Q1) are white-box operations meant to verify the implementation of functionalities; as a consequence:

  • They can be performed iteratively on software increments.
  • They must take into account technical requirements.
  • They must be aligned on the implementation of tested functionalities.
Iterative (aka development) tests for technical (Q1) and functional (Q2) requirements.

Hence, if unit and component tests are to be performed iteratively, (1) they must be set against features, and (2) functional tests must be properly documented and available for reuse.

Functional tests (Q2) are black-box operations meant to validate system behavior with regard to users’ expectations; as a consequence:

  • They can be performed iteratively on software increments.
  • They don’t have to take into account technical requirements.
  • They must be aligned on business requirements (e.g users’ stories or use cases).

Assuming (see previous post) a set of stories (a,b,c,d) identified by alternative paths built from features (f1…5), functional tests (Q2) are to be defined and performed for each story, and then reused to test the implementation of associated features (Q1).

Functional tests are set along stories, unit and component tests along features.
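
As a hedged sketch, the reuse of story-level tests at feature level amounts to inverting the story-to-feature mapping; the mapping below is an assumption made for the example, not the one from the previous post.

```python
# Illustrative sketch: functional tests (Q2) are defined per story, then reused
# to test the features (Q1) each story exercises. The story-to-feature mapping
# below is an assumption for the example.

STORY_FEATURES = {
    "a": ["f1", "f2"],
    "b": ["f1", "f3"],
    "c": ["f2", "f4"],
    "d": ["f3", "f5"],
}

def feature_coverage(story_features):
    """Invert the mapping: for each feature, the stories whose tests cover it."""
    coverage = {}
    for story, features in story_features.items():
        for feature in features:
            coverage.setdefault(feature, []).append(story)
    return coverage

print(feature_coverage(STORY_FEATURES))   # e.g. {'f1': ['a', 'b'], ...}
```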

At that point two questions must be answered:

  • Given that stories can be changed, expanded, or refined along development iterations, how to manage the association between requirements and functional tests?
  • Given that backlogs can be rearranged along development cycles according to changing priorities, how to update tests, manage traceability, and prevent regression?

With model-driven approaches no longer available, one should consider a mirror alternative, namely test-driven development.

Test-Driven Development

Test-driven development can be seen as a mirror image of model-driven development, a somewhat logical consequence considering the limited role of models in agile approaches.

The core of agile principles is to put the definition, building and acceptance of software products under shared ownership, direct collaboration, and collective responsibility:

  • Shared ownership: a project team groups users and developers and its first objective is to consolidate their respective concerns.
  • Direct collaboration: decisions are taken by team members, without any organizational mediation or external interference.
  • Collective responsibility: decisions about stories, priorities and refinements are negotiated between team members from both sides of the business/system (or users/developers) divide.

Assuming those principles are effectively put to work, there seems to be little room for organized and persistent documentation, as users’ stories are meant to be developed, and products released, in continuity, and changes introduced as new stories.

With such lean and just-in-time processes, documentation, if any, is by nature transient, falling short as a support for test plans and results, even when problems and corrections are formulated as stories and managed through backlogs. In such circumstances, without specifications or models available as development handrails, could that support be provided by tests?

Given the ephemeral nature of users’ stories, functional tests should take the lead.

To begin with, users’ stories have to be reconsidered. The distinction between functional tests on the one hand, and unit and component tests on the other, reflects the divide between business and technical concerns. While those concerns may be mixed in users’ stories, they are progressively set apart along iteration cycles. That means users’ stories are by nature transitory and, as a consequence, cannot be used to support tests management.

The case for features is different. While they cannot be fully defined up-front, features are not transient: being shared by different stories and bound to system functionalities, they are supposed to provide some continuity. Likewise, notwithstanding their changing contents, users’ stories should be soundly identified by solution paths across the problem space.

Paths and Features can be identified consistently along iteration cycles.

That can provide a stable framework supporting the management of development tests:

  • Unit tests are specified from crosses between solution paths (described by stories or scenarii) and features.
  • Functional tests are defined by solution paths and built from unit tests associated to the corresponding features.
  • Component tests are defined by features and built by the consolidation of unit tests defined for each targeted feature according to technical constraints.

Those mappings support the continuous and consistent identification of functional and component tests, whose contents can be extended or updated through changes made to unit tests.
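
That arrangement could be sketched as follows; the test identifiers and the (path, feature) crosses are illustrative assumptions.

```python
# Illustrative sketch of the framework above: unit tests are keyed by
# (solution path, feature) crosses; functional tests collect the unit tests of
# a path; component tests collect the unit tests of a feature.

unit_tests = {
    ("path_a", "f1"): ["test_a_f1"],
    ("path_a", "f2"): ["test_a_f2"],
    ("path_b", "f1"): ["test_b_f1"],
}

def functional_suite(path):
    """All unit tests along a given solution path (story)."""
    return [t for (p, _), tests in unit_tests.items() if p == path for t in tests]

def component_suite(feature):
    """All unit tests exercising a given feature, across paths."""
    return [t for (_, f), tests in unit_tests.items() if f == feature for t in tests]

print(functional_suite("path_a"))   # ['test_a_f1', 'test_a_f2']
print(component_suite("f1"))        # ['test_a_f1', 'test_b_f1']
```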

One step further, and tests can even be used to drive iteration cycles: once features and solution paths are soundly identified, there is no need to swell backlogs with detailed stories whose shelf life will be limited. Instead, development processes would get leaner if extensions and refinements could be directly expressed as unit tests.

System Quality and Acceptance Tests

Contrary to development tests, which are applied iteratively to programs, system tests are applied to released products and must take into account requirements that cannot be directly or uniquely attached to users’ stories, either because they cannot be expressed from a business perspective or because they are shared concerns best described as features. Tests for those requirements will be consolidated with development ones into system quality and acceptance tests:

  • System Quality Tests deal with performances and resources from the system management perspective. As such they will combine component and functional tests in operational configurations without taking into account their business contents.
  • System Acceptance Tests deal with the quality of service from the business process perspective. As such they will perform functional tests in operational configurations taking into account business contents and users’ experience.
Development Tests are to be consolidated into Product and System Acceptance Tests.

Requirements set too early and quality checks performed too late are at the root of the predicaments of phased processes, and that can be fixed with a two-pronged policy: a preemptive one based upon a requirements taxonomy organizing problem spaces according to concerns (business value, system functionalities, component design, platform configuration); and a corrective one driven by the exploration of solution paths, with developments and releases driven by quality concerns.

Further Reading

External Links

 

From Stories to Models

January 16, 2013

Objective

Assuming, for the sake of the argument, that programs are models of implementations, one may also argue that the main challenge of software engineering is to translate requirements into models. But, contrary to programs, nothing can be assumed about requirements apart from being stories told by whoever will need system support for his business process.

Telling Stories with Models

Following that reasoning, one may consider the capture and analysis of requirements in the light of two archetypal motifs of storytelling, the Tower of Babel and the Rashomon effect:

  • While stakeholders and users may express their requirements using their own dialects, supporting applications will have to be developed under the same roof. Hence the need for some lingua franca to communicate with their builders.
  • A shared language doesn’t necessarily mean common understanding; as requirements usually reflect local and time-dependent business opportunities and goals, they may relate to different, if not conflicting, aspects of contexts and concerns that will eventually have to be consolidated.

From such viewpoints, the alignment of system models with business stories clearly depends on the discrepancies between languages and narratives.

Business to System Analyst: Your language or mine?

Stories must be told before being written into models, and that distinction coincides with the one between spoken and written languages or, from a broader perspective, between direct (aka performed) and documented communication.

Direct communication (by voice, signs, or mime) is set by time and location and must convey contexts and concerns instantly; that’s what happens when requirements are first expressed by business analysts with regard to actual and specific goals.

Direct communication requires instant understanding

Written languages and documented communication introduce a mediation, enabling stories to be detached from their native here and now; that’s what happens with requirements when managed independently of their original contexts and concerns.

Documented communication makes room for mediation

The mediation introduced by documented requirements can support two different objectives:

  1. Elicitation: while direct communication calls for instant understanding through a common language, spoken or otherwise, written communication makes room for translation and clarification. As illustrated by Kanji characters, a single written language can support different spoken ones; that would open a communication channel between business and system analysts.
  2. Analysis: since understanding doesn’t mean agreement, mediation is often necessary in order to conciliate, arbitrate or consolidate requirements; for that purpose symbolic representations have to be introduced.

Depending on (1) the languages used to tell the stories and (2) the gamut of concerns behind them, the path from stories to models may be covered in a single step or will have to be taken in two steps.

Context and Characters

Direct communication is rooted in actual contexts and points to identified agents, objects or phenomena. Telling a story will therefore begin by introducing characters and objects supposed to retain their identity all along; characters will also be imparted with behavioral capabilities and the concerns supposed to guide them.

Stories start with characters and concerns

With regard to business, stories should therefore be introduced by a role, an activity, and a goal.

  • Every story is supposed to be told from a specific point of view within the organization. That should be materialized by a leading role; and even if other participants are involved, the narrative should reflect this leading view.
  • If a story is to provide a one-lane bridge between past and future business practices, it must focus on a single activity whose contents can be initially overlooked.
  • Goals are meant to set specific stories within a broader enterprise perspective.

After being anchored to roles and goals, activities will have to be set within boundaries.

Casings and Splits

Once introduced between roles (Who) and goals (Why), activities must be circumscribed with regard to objects (What), actions (How), places (Where) and timing (When). For that purpose the best approach is to use Aristotle’s three unities for drama:

  1. Unity of action: story units must have one main thread of action introduced at the beginning. Subplots, if any, must return to the main plot after completion.
  2. Unity of place: story units must be located into a single physical space where all activities can be carried out without depending on the outcome of activities performed elsewhere.
  3. Unity of time: story units must be governed by a single clock under which all happenings can be organized sequentially.

Stories, especially when expressed vocally, should remain short and, if they have to be divided, splits should not cross unit boundaries:

  • Action: splits are made to coincide with variants set by agents’ decisions or business rules.
  • Place: splits are made to coincide with variants in physical contexts.
  • Time: splits are made to coincide with variants in execution constraints.

When stories refer to systems, those constraints should become more specific and coincide with interaction units triggered by a single event from a leading actor.
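
As an illustration, the casing rules could be checked mechanically; the data structure and the split rule below are a hypothetical reading of the three unities, not a prescribed notation.

```python
# Hypothetical check of the "three unities" casing rules: a story unit keeps a
# single main action, a single place, and a single clock; a split coincides
# with a variant in exactly one of those dimensions, while role and goal stay put.

from dataclasses import dataclass

@dataclass
class StoryUnit:
    role: str          # who: the leading point of view
    goal: str          # why: the enterprise perspective
    action: str        # what/how: single main thread of action
    place: str         # where: single physical space
    clock: str         # when: single time scale

def valid_split(parent: StoryUnit, child: StoryUnit) -> bool:
    """A split varies exactly one unity (action, place, or clock) and preserves characters."""
    changed = sum(getattr(parent, f) != getattr(child, f)
                  for f in ("action", "place", "clock"))
    return changed == 1 and (parent.role, parent.goal) == (child.role, child.goal)

# hypothetical usage: a split on an action variant set by a business rule
billing = StoryUnit("clerk", "faster invoicing", "register payment", "front office", "daily batch")
variant = StoryUnit("clerk", "faster invoicing", "register refund", "front office", "daily batch")
assert valid_split(billing, variant)
```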

Filling the blanks

If business contexts, objectives, and roles can be identified with straightforward semantics set at corporate level, meanings become more complex when stories are to be fleshed out with details defined by the different business units. That difficulty can be managed through iterative development that will add specifics to stories within the casing invariants:

  • Each story is developed within a single iteration whose invariants are defined by its action, place, and time-scale.
  • Development proceeds by increments whose semantics are defined within the scope set by invariants: operations relative to activities, features relative to objects, events relative to time-scales.

A story is fully documented (i.e an iteration is completed) when no more details can be added without breaking the three units rule or affecting its characters (role and goal) or the semantics of features (attributes and operations).

Descriptions and specifications look from different perspectives

Iterations: a story is fully fleshed out when nothing can be changed without affecting  characters’ features or their semantics.

From Documented Stories to Requirements

Stories must be written down before becoming requirements, further documented by text, model, or code:

  • Text-based documentation uses natural language, usually with hypertext extensions. When analysts are not familiar with modeling languages it is the default option for elicitation and the delivery of comprehensive, unambiguous and consistent requirements.
  • Models use dedicated languages targeting domains (specific) or systems (generic). They are a necessary option when requirements from different sources are to be consolidated before being developed into code.
  • Code (aka execution model) uses dedicated languages targeting execution environments. It is the option of choice when requirements are self-contained (i.e not contingent on external dependencies) and expressed with formal languages supporting automated translation.

Whatever their form (user stories, use cases, hypertext, etc), documented requirements must come out as a list of detached items with clearly defined dependencies. Depending on dependencies, requirements can be directly translated into design (or implementation) models or will have to be first consolidated into analysis models.

Telling Models from Stories

Putting aside deployment, development models can be regrouped in two categories:

  • Analysis models describe problems under scrutiny, the objective being to extract relevant aspects.
  • Design models (including programs) describe solutions artifacts.
Analysis models deal with relevant features, Design models deal with their surrogates.

Seen from the perspective of requirements, the objective of models is therefore to organize the contents of business stories into relevant and useful information, in other words software engineering knowledge.

Following the principles set by Davis, Shrobe, and Szolovits for Knowledge Management (cf readings), such models should meet two groups of criteria, one with regard to communication, the other with regard to symbolic representation.

As already noted, models are introduced to support communication across organizational structures or intervals of time. That includes communication between business and systems analysts as well as development tools. Those aspects are supposed to be supported by development environments.

As for model contents, the ultimate objective is to describe the symbolic representations of the business objects and processes targeted by requirements:

  • Surrogates: models must describe the symbolic counterparts of actual objects, events and relationships.
  • Ontological commitments: models must provide sets of statements about the categories of things that may exist in the domain under consideration.
  • Fragmentary theory of intelligent reasoning: models must define what the represented artifacts can do, or what can be done with them.

The main challenge of analysis is therefore to map the space between requirements (concrete stories) and models (symbolic representations), and for that purpose traditional storytelling may offer some useful cues.

From Fictions to Functions

Just like storytellers use cliches and figures of speech to attach symbolic meanings to stories, analysts may use patterns to anchor business stories to systems models.

Cliches are mental constructs with meanings set in collective memory. With regard to requirements, the equivalent would be to anchor activities to primitive operations (e.g CRUD), and roles to functional stereotypes.

Archetypes can be used to anchor stories to shared understandings

While the role of cliches is to introduce basic items, figures of speech are used to extend and enrich their meanings through analogy or metonymy:

  • Analogy is used to identify features or behaviors shared by different stories. That will help to consolidate the description of business objects and activities and points to generalizations.
  • Metonymy is applied when meanings are set by context. That points to aggregate or composite objects or activities.

Primitives, stereotypes, generalization and composition can be employed to map requirements to functional patterns. Those will provide the building blocks of models and help to bridge the gap between business processes and system functionalities.
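
As a minimal sketch, such anchoring could be encoded as lookup tables mapping story verbs to CRUD primitives and roles to functional stereotypes; the vocabulary below is an assumption made for the example.

```python
# Illustrative anchoring of story elements to modeling patterns: verbs to CRUD
# primitives (cliches), roles to functional stereotypes; generalization and
# composition would then handle analogy and metonymy. The vocabulary is an
# assumption made for the sake of the example.

CRUD_PRIMITIVES = {
    "register": "create", "record": "create",
    "look up": "read", "check": "read",
    "amend": "update", "correct": "update",
    "cancel": "delete", "remove": "delete",
}

ROLE_STEREOTYPES = {
    "customer": "actor", "clerk": "actor",
    "order": "entity", "ledger": "entity",
    "billing": "control",
}

def anchor_activity(verb: str) -> str:
    return CRUD_PRIMITIVES.get(verb.lower(), "domain-specific operation")

def anchor_role(role: str) -> str:
    return ROLE_STEREOTYPES.get(role.lower(), "actor")

print(anchor_activity("Check"), anchor_role("Clerk"))   # read actor
```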

Further Reading

External Readings