Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (Map of Posts) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

Capabilities vs Processes

October 21, 2014

Summary

Enterprise architecture being a nascent discipline, its boundaries and categories of concerns are still in the making. Yet, as blurs on pivotal concepts are bound to jeopardize further advances, clarification is called for on the concept of “capability”, whose meaning seems to dither somewhere between architecture, function, and process.

Jumping capability of a four-legged structure (Edgard de Souza)

Hence the benefits of applying definition guidelines to characterize capability with regard to context (architectures) and purpose (alignment between architectures and processes).

Context: Capability & Architecture

Assuming that a capability describes what can be done with a resource, applying the term to architectures would implicitly make them a mix of assets and mechanisms meant to support processes. As a corollary, such understanding would entail a clear distinction between architectures on one hand and supported processes on the other hand; that would, by the way, make an oxymoron of the expression “process architecture”.

On that basis, capabilities could be originally defined independently of business specificities, yet necessarily with regard to architecture context:

  • Business capabilities: what can be achieved given assets (technical, financial, human), organization, and information structures.
  • Systems capabilities: what kind of processes can be supported by systems functionalities.
  • Platforms capabilities: what kind of functionalities can be implemented.

Requirements should be mapped to enterprise architecture capabilities

Architecture Capabilities

Taking a leaf from the Zachman framework, five core capabilities can be identified that cut across those architecture contexts:

  • Who: authentication and authorization for agents (human or otherwise) and roles dealing with the enterprise, using system functionalities, or connecting through physical entry points.
  • What: structure and semantics of business objects, symbolic representations, and physical records.
  • How: organization and versatility of business rules.
  • Where: physical location of organizational units, processing units, and physical entry points.
  • When: synchronization of process execution with regard to external events.

Being set with regard to architecture levels, those capabilities are inherently holistic and can only pertain to the enterprise as a whole, e.g. for benchmarking. Yet that is not enough if the aim is to assess architecture capabilities with regard to supported processes.
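
A toy sketch may help fix ideas (illustrative only, not from the original post; names and scores are assumptions): the five core capabilities crossed with the three architecture contexts yield a simple grid, e.g. as a benchmarking checklist.

    # Minimal sketch: the capability grid, one cell per (capability, context)
    # pair, to hold a benchmark assessment.
    from enum import Enum

    class Capability(Enum):
        WHO = "authentication & authorization"
        WHAT = "objects structure & semantics"
        HOW = "business rules"
        WHERE = "physical locations"
        WHEN = "synchronization with events"

    class Context(Enum):
        ENTERPRISE = "assets, organization, information"
        SYSTEM = "functionalities"
        PLATFORM = "implementations"

    grid = {(cap, ctx): None for cap in Capability for ctx in Context}

    def assess(cap: Capability, ctx: Context, score: int) -> None:
        """Record a benchmark score for one cell of the capability grid."""
        grid[(cap, ctx)] = score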

Purpose: Capability vs Process

Given that capabilities describe architectural features, they can be defined independently of processes. Pushing the reasoning to its limit, one could, as illustrated by the table above, figure a capability without even the possibility of a process. Nonetheless, as the purpose of capabilities is to align supporting architectures and supported processes, processes must indeed be introduced, and the relationship addressed and assessed.

First of all, it’s important to note that trying to establish a direct mapping between capabilities and processes would be self-defeating, as it would fly in the face of architecture understood as a shared construct of assets and mechanisms. Rather, the mapping of processes to architectures is best understood with regard to architecture level: traceable between requirements and applications, designed at system level, holistic at enterprise level.

Alignment with processes is mediated by architecture complexity.

Assuming a service oriented architecture, capabilities would be used to align enterprise and system architectures with their process counterparts:

  • Holistic capabilities will be aligned with business objectives set at enterprise level.
  • Services will be aligned with business functions and designed with regard to holistic capabilities.

Services can be designed with regard to holistic capabilities

Yet, even without a service oriented architecture, that approach could still be used to define functional architecture with regard to holistic capabilities.

Further Readings

Alignment: from Empathy to Abstraction

October 4, 2014

Summary

Empathy is commonly defined as the ability to directly share another person’s state of mind: feelings, emotions, understandings, etc. Such concrete aptitude would clearly help business analysts trying to capture users’ requirements; and on a broader perspective it could even contribute to enterprise capability to foretell trends from actual changes in business environment.

Perceptions and Abstractions (Picasso)

Analysis goes in the opposite direction: it extracts abstract descriptions from concrete requirements, singling out a subset of features to be shared while foregoing the rest. The same process of abstraction is carried out for enterprise business and organisation on one hand, and for systems and software architectures on the other hand.

That dual perspective can be used to define alignment with regard to the level under consideration: users, systems, or enterprise.

Requirements & Architectures

Requirements capture can be seen as a transition from spoken to written language, its objective being to write down what users say about what they are doing or what they want to do. For that purpose analysts are presented with two basic policies: they can anchor requirements around already known business objects or processes, or they can stick to users’ stories, identify new structuring entities, and organize requirements alongside. In any case, and except for standalone applications, the engineering process is to be carried out along two paths:

  • One concrete for the development of applications, the objective being to meet users’ requirements with regard to business logic and quality of service.
  • The other abstract for requirements analysis, the objective being to identify new business functions and features and consolidate them with those already supporting current business processes.

Those paths are set in orthogonal dimensions as concrete paths connect users’ activities to applications, and abstractions can only be defined between requirements levels.

Concrete (brown) and Abstract (blue) paths of requirements engineering

As business analysts stand at the crossroads, they have to combine empathy when listening to users’ concerns and expectations, and abstraction when mapping users’ requirements to systems functionalities and enterprise business processes.

Architectures & Alignments

As it happens, the same reasoning can be extended to the whole of engineering process, with analysis carried out to navigate between abstraction levels of architectures and requirements, and design used for the realization of each requirements level into its corresponding architecture level:

  • Users’ stories (or more precisely the corresponding use cases) are realized by applications.
  • Business functions and features are realized by services (assuming a service oriented architecture), which are meant to be an abstraction of applications.
  • Business processes are realized by enterprise capabilities, which can be seen as an abstraction of services.

How requirements are realized by design at each architecture level

That matrix can be used to define three types of alignment:

  • At users’ level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
  • At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
  • At enterprise level the objective is to ensure that enterprise capabilities are congruent with business objectives, i.e. that they support business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.

That makes alignment a concrete endeavor whatever the level of abstraction of its targets, i.e not only for users and applications, but also for functions and capabilities.

Further Readings

Alignment for Dummies

September 15, 2014

Summary

The emergence of Enterprise Architecture as a discipline of its own has shed light on the necessary distinction between actual (aka business) and software (aka system) realms. Yet, despite a profusion of definitions for layers, tiers, levels, views, and other modeling perspectives, what should be a constitutive premise of system engineering remains largely ignored, namely: business and systems concerns are worlds apart, and bridging the gap is the main challenge of architects and analysts, whatever their preserve.

Alignment with Dummies (J. Baldessari)

The consequences of that neglect appear clearly when enterprise architects consider the alignment of systems architectures and capabilities on one hand, with enterprise organization and business processes on the other hand. Looking into the grey zone in between, some approaches will line up models according to their structure, assuming the same semantics on both sides of the divide; others will climb up the abstraction ladder until everything looks alike. Not surprisingly, with the core interrogation (i.e. “what is to be aligned?”) removed from the equation, models will be turned into dummies enabling alignment to be carried out by simple pattern matching.

Models & Views

The abundance of definitions for layers, tiers or levels often masks two different understandings of models:

  • When models are understood as symbolic descriptions of sets of instances, each layer targets a different context with a different concern. That’s the basis of the Model Driven Architecture (MDA) and its distinction between Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs).
  • When models are understood as symbolic descriptions built from different perspectives, all layers target the same context, each with a different concern. Along that understanding each view is associated with a specific aspect or level of abstraction: processes view, functional view, conceptual view, technical view, etc.

As it happens, many alignment schemes use, implicitly or explicitly, the second understanding without clarifying the underlying assumptions regarding the backbone of artifacts. That neglect is unfortunate because, to be of any significance, views will have to be aligned with regard to those artifacts.

What is to be aligned

Whatever the labels and understandings, alignment is meant to deal with two main problems: how business processes are supported by systems functionalities, and how those functionalities are to be implemented. Given that the latter can be fully dealt with at system level, the focus can be put on the alignment of business processes and functional architectures.

A naive solution could be to assume services on both processes and systems sides. Yet, the apparent symmetry covers a tautology: while aiming for services oriented architectures on the systems side would be legitimate, if not necessarily realistic, taking for granted that business processes also tally with services would presume some prior alignment, in other words that the problem has already been solved.

The pragmatic and logically correct approach is therefore to map business processes to system functionalities using whatever option is available, models (CIMs vs PIMs), or views (processes vs functions). And that is where the distinction between business and software semantics is critical: assuming the divide can be overlooked, some “shallow” alignment could be carried out directly providing the models can be translated into some generic language; but if the divide is acknowledged a “deep” alignment will have to be supported by a semantics bridge built across.

Shallow Alignment

Just like models are meant to describe sets of instances, meta-models are meant to describe instances of models independently of their respective semantics. Assuming a semantic continuity between business and systems models, meta-models like OMG’s KDM (Knowledge Discovery Meta-model) appear to provide a very practical solution to the alignment problem.

From a practical point of view, one may assume that no model of functional architecture is available because otherwise it would be aligned “by design” and there would be no problem. So something has to be “extracted” from existing software components:

  1. Software (aka design) models are translated into functional architectures.
  2. Models of business processes are made compatible with the generic language used for system models.
  3. Associations are made based on patterns identified on each side.

While the contents of the first and third steps are well defined and understood, that’s not the case for the second step, which takes for granted the availability of some agreed upon modeling semantics applicable to both functional architecture and business processes. Unfortunately that assumption is both factually and logically inconsistent:

  • Factually inconsistent: it is denied by the plethora of candidates claiming the role, often with partial, overlapping, ambiguous, or conflicting semantics.
  • Logically inconsistent: it simply dodges the question (what’s the meaning of alignment between business processes and supporting systems) either by lumping together the semantics of the respective contexts and concerns, or by climbing up the ladder of abstraction until all semantic discrepancies are smoothed out.

Alignments built on that basis are necessarily shallow as they deal with artifacts disregarding their contents, like dummies in test plans. As a matter of fact the outcome will add nothing to traceability, which may be enough for trivial or standalone processes and applications, but is meaningless when applied at architecture level.
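
The point can be made concrete with a deliberately caricatural sketch (hypothetical, not an actual tool): artifacts are paired on structural patterns alone, disregarding their contents, which is all a shallow alignment can do.

    # "Shallow" alignment in a nutshell: processes and functions are matched
    # on structural patterns only, like dummies in test plans.
    process_patterns = {"order handling": "request/response", "billing": "batch"}
    function_patterns = {"order service": "request/response", "billing job": "batch"}

    def shallow_align() -> list[tuple[str, str]]:
        """Pair processes and functions whose structural patterns coincide."""
        return [(p, f) for p, pp in process_patterns.items()
                for f, fp in function_patterns.items() if pp == fp]

    # nothing in the pairing depends on what the processes actually mean
    print(shallow_align())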

Deep Alignment

Compared to the shallow one, deep alignment, instead of assuming a wide but shallow common ground, tries to identify the minimal set of architectural concepts needed to describe alignment’s stakes. Moreover, and contrary to the meta-modelling approach, the objective is not to find some higher level of abstraction encompassing the whole of models, but more reasonably to isolate the core of architecture concepts and constructs with shared and unambiguous meanings, to be used by both business and system analysts.

That approach can be directly set along the MDA framework:

Deep alignment makes a distinction between what is at stake at architecture level (blue), from the specifics of process or domain (green), and design (brown).

  • Context descriptions (UML, DSL, BPM, etc.) are not meant to distinguish between architectural constructs and specific ones.
  • Computation independent models (CIMs) describe business objects and processes, combining core architectural constructs (using a generic language like UML) with specific business ones. The former can be mapped to functional architecture, the latter (e.g. rules) directly transformed into design artifacts.
  • Platform independent models (PIMs) describe functional architectures using core constructs and framework stereotypes, possibly enriched with specific artifacts managed separately.
  • Platform specific models (PSMs) can be obtained through transformation from PIMs, generated using specific languages, or refactored from legacy code.

Alignment can thus focus on enterprise and systems architectural stakes, leaving specific concerns to be dealt with separately, making the best of existing languages.

Alignment & Traceability

As mentioned above, comparing alignment with traceability may help to better understand its meaning and purpose.

  • Traceability is meant to deal with links between development artifacts from requirements to software components. Its main objective is to manage changes in software architecture and support decision-making with regard to maintenance and evolution.
  • Alignment is meant to deal with enterprise objectives and systems capabilities. Its main objective is to manage changes in enterprise architecture and support decision-making with regard to organization and systems architecture.

As a concluding remark, reducing alignment to traceability may counteract its very purpose and make it pointless as a tool for enterprise governance.

Further Readings

Events & Decision-making

September 9, 2014

Objective

Between the Internet-of-Things and ubiquitous social networks, enterprises’ environments are turning into unified open spaces, transforming the divide between operational and decision-making systems into a pitfall for corporate governance. That jeopardy can be better understood when one considers how the processing of events affects decision-making.

Making sense of events (J.W. Waterhouse)

Events & Information Processing

Enterprises’ success critically depends on their ability to track, understand, and exploit changes in their environment; hence the importance of a fast, accurate, and purpose-driven reading of events.

That is to be achieved by picking the relevant facts to be tracked, capturing the associated data, processing the data into meaningful information, and finally putting that information into use as knowledge.

From Facts to Knowledge and Back

Those tasks have to be carried out iteratively, dealing with both external and internal events:

  • External events are triggered by changes in the state of actual objects, activities, and expectations.
  • Internal events are triggered by the ensuing changes in the symbolic representations of objects and processes as managed by systems.

With events set at the root of the decision-making process, they will also define the time frames.

Events & Decisions Time Frames

As a working hypothesis, decision-making can be defined as real-time knowledge management:

  • To begin with, a real-time scale is created by new facts (t1) registered through the capture of events and associated data (t2).
  • A symbolic intermezzo is then introduced, during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4).
  • The real-time scale resumes with the decision’s enactment and the corresponding change in facts (t5).

Time scale of decision making (real time events are in red, symbolic ones in blue)
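
As an illustration (a hypothetical sketch; names and units are assumptions), the time scale can be coded as a simple structure checking the ordering of events and measuring the lags that decision timing is meant to minimize.

    # Minimal sketch of the t1..t5 time scale of decision-making.
    from dataclasses import dataclass

    @dataclass
    class DecisionCycle:
        t1: float  # new fact occurs (real time)
        t2: float  # event and associated data captured (real time)
        t3: float  # information updated (symbolic)
        t4: float  # decision taken (symbolic)
        t5: float  # decision enacted, facts changed (real time)

        def is_consistent(self) -> bool:
            """Events must occur in order along the time scale."""
            return self.t1 <= self.t2 <= self.t3 <= self.t4 <= self.t5

        def capture_lag(self) -> float:    # to be minimized
            return self.t2 - self.t1

        def analysis_lag(self) -> float:   # to be minimized
            return self.t3 - self.t2

        def commitment_lag(self) -> float: # governed by the last responsible moment
            return self.t4 - self.t3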

The next step is to bring together events and knowledge.

Events & Changes in Knowns & Unknowns

As Donald Rumsfeld once suggested, decision-making is all about the distinction between things we know that we know, things that we know we don’t know, and things we don’t know we don’t know. And that classification can be mapped to the nature of events and the processing of associated data:

  • Known knowns (KK) are traced through changes in already defined features of identified objects, activities or expectations. Corresponding external events are expected and the associated data can be immediately translated into information.
  • Known unknowns (KU) are traced through changes in still undefined features of identified objects, activities or expectations. Corresponding external events are unexpected and the associated data cannot be directly translated into information.
  • Unknown unknowns (UU) are traced through changes in still undefined objects, activities or expectations. Since the corresponding symbolic representations are still to be defined, both external and internal events are unexpected.

Knowns and Unknowns
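
As a minimal sketch (the schema and names are toy assumptions), the taxonomy boils down to checking events against the symbolic representations already managed by systems.

    # Classify an event by checking its object type and feature against
    # the symbolic representations (schema) already managed by the system.
    schema = {"customer": {"name", "address"}, "order": {"amount", "date"}}

    def classify(object_type: str, feature: str) -> str:
        if object_type not in schema:
            return "UU"  # unknown unknown: no symbolic representation yet
        if feature not in schema[object_type]:
            return "KU"  # known unknown: known object, still undefined feature
        return "KK"      # known known: data translates directly into information

    assert classify("customer", "address") == "KK"
    assert classify("customer", "solvency") == "KU"
    assert classify("supplier", "rating") == "UU"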

Given that decisions are by nature set in time-frames, they should be mapped to changes in environments, or more precisely to the information carried by the events taken into consideration.

Knowledge & Decision Making

Events bisect time-scales between before and after, past and future; as a corollary, the associated information (or lack thereof) about changes can be neatly allocated to the known and unknown of current and prospective states of affairs.

Changes in the current states of affairs are carried out by external events:

  • Known knowns (KK): when events are about already defined features of objects, activities or expectations, the associated data can be immediately used to update the states of their symbolic representation.
  • Known unknowns (KU): when events entail new features of already defined objects, activities or expectations, the associated data must be analyzed in order to adjust existing symbolic representations.
  • Unknown unknowns (UU): when events entail new objects, activities or expectations, the associated data must be analyzed in order to build new symbolic representations.

As changes in current states of affairs are shadowed by changes in their symbolic representation, they generate internal events which in turn may trigger changes in prospective states of affairs:

  • Known knowns (KK): updating the states of well-defined objects, activities or expectations may change the course of action but should not affect the set of possibilities.
  • Known unknowns (KU): changes in the set of features used to describe objects, activities or expectations may affect the set of tactical options, i.e. ones that can be set within individual production life-cycles.
  • Unknown unknowns (UU): introducing new types of objects, activities or expectations is bound to affect the set of strategic options, i.e. ones that encompass multiple production life-cycles.

Interestingly, those levels of knowledge appear to be congruent with the usual horizons of decision-making: operational, tactical, and strategic:

Decision-making and knowledge level

  • Operational: full information on actual states allows for immediate appraisal of prospective states.
  • Tactical: partially defined actual states allow for periodic appraisal of prospective states in synch with production cycles.
  • Strategic: undefined actual states don’t allow for periodic appraisal of prospective states in synch with production cycles; their definition may also be affected through feedback.

Given that those levels of appraisal are based on conjectural information (internal events) built from fragmentary or fuzzy data (external events), they have to be weighted by risks.

Weighting the Risks

Perfect information would guarantee a risk-free future and would render decision-making pointless. As a corollary, decisions based on unreliable information entail risks that must be traced back accordingly:

  • Operational: full and reliable information allows for risk-free decisions.
  • Tactical: when bounded by well-defined contexts with known likelihoods, partial or uncertain information allows for weighted costs/benefits analysis.
  • Strategic: set against undefined contexts or unknown likelihoods, decision-making cannot fully rely on weighted costs/benefits analysis and must encompass policy commitments, possibly with some transfer of risks, e.g. through insurance contracts.

That provides some kind of built-in traceability between the nature and likelihood of events, the reliability of information, and the risks associated with decisions.

Decision Timing

Considering decision-making as real-time knowledge management driven by external (aka actual) events and governed by internal (aka symbolic) ones, how would that help to define decision time frames?

To begin with, such time frames would ensure that:

  • All the relevant data is captured as soon as possible (minimal interval between t1 and t2).
  • All available data is analyzed as soon as possible (minimal interval between t2 and t3).
  • Once a decision has been made, nothing can change during the interval between commitment and action (respectively t4 and t5).

Given those constraints, the focus of timing is to be on the interval between change in prospective states (t3) and decision (t4): once all information regarding prospective states is available, how long to wait before committing to a decision?

Limits on decision timing

Assuming that decisions are to be taken at the “last responsible moment”, i.e. deferred until further delay would change the possible options, that interval will depend on the nature of decisions:

  • Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within production life-cycles.
  • Tactical decisions can only be enacted at the start of production cycles, using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be taken from one cycle to the next. Otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment (see the sketch after this list).
  • Strategic decisions are meant to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
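
The tactical rule can be captured by a one-line computation (a sketch with hypothetical figures).

    # Last responsible moment for a tactical decision: the start of the next
    # production cycle minus the time needed to enact the decision.
    def last_responsible_moment(next_cycle_start: float, enactment_time: float) -> float:
        """Latest time (t4) at which a decision can still be committed."""
        return next_cycle_start - enactment_time

    # e.g. with a cycle starting at day 30 and enactment taking 5 days,
    # commitment can be deferred until day 25 while information keeps accruing.
    assert last_responsible_moment(30, 5) == 25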

While those principles clearly call for more verification and refinement, their interest is to bring events processing, knowledge management, and decision-making within a common perspective.

Further Readings

Semantic Web: from Things to Memes

August 10, 2014

The new soup is the soup of human culture. We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme…

Richard Dawkins

The genetics of words

The word meme is the brainchild of Richard Dawkins in his book The Selfish Gene, published in 1976, well before the Web and its semantic soup. The emergence of the ill-named “internet-of-things” has brought a new perspective to Dawkins’ intuition: given the clear divide between actual worlds and their symbolic (aka web) counterparts, why not chart human culture with internet semantics?

Semantic Dissonance: Flowering Knives (Adel Abdessemed).

With interconnected digits pervading every nook and cranny of material and social environments, the internet may be seen as a way to a comprehensive and consistent alignment of language constructs with targeted realities: a name for everything, everything with its name. For that purpose it would suffice to use the web to allocate meanings and don things with symbolic clothes. Yet, as the world is not flat, the charting of meanings will be contingent on projections with dissonant semantics. Conversely, as meanings are not supposed to be set in stone, semantic configurations can be adjusted continuously.

Internet searches: words at work

Semantic searches (as opposed to form or pattern based ones) rely on textual inputs (keywords or phrases) aiming at a specific reality or at information about it:

  • Searches targeting reality are meant to return sets of instances (objects or phenomena) meeting users’ needs (locations, people, events, …).
  • Searches targeting information are meant to return documents meeting users’ interest for specific topics (geography, roles, markets, …).

Looking for information or instances.

Interestingly, the distinction between searches targeting reality and information is congruent with the rhetorical one between metonymy and metaphor, the former best suited for things, the latter for meanings.

Rhetoric: Metonymy & Metaphor

As noted above, searches can be guided by references to identified objects, by the form of digital objects (sound, visuals, or otherwise), or by associations between symbolic representations. Considering that finding referenced objects is basically a technical problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words. From that standpoint searching the web becomes a problem of rhetoric, namely: how to use language to get rapidly and effectively the most accurate outcome for a query. And for that purpose rhetoric provides two basic contraptions: metonymy and metaphor.

Both metonymy and metaphor are linguistic constructs used to substitute a word (or a phrase) by another without altering its meaning. When applied to searches, they are best understood in terms of extensions and intensions, extensions standing for the actual set of objects and behaviors, and intensions for the set of features that characterize these instances.

Metonymy uses contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each of those terms can stand for the others.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, the others being ignored. Hence, in contrast to metonymy, metaphor is based on intensions.

Metaphors use analogy (dashed line) to map terms whose intensions (dotted line) share a selected subset of features

As it happens, and not by chance, those rhetorical constructs can be mapped to categories of searches:

  • Metonymy will be used to navigate across instances of things and phenomena following structural, functional, or temporal associations.
  • Metaphors will be used to navigate across terms and concepts according to similarities, ontologies, and abstractions.

As a corollary, searches can be seen as scaffolds supporting the building of meanings.

The building of meanings, back and forth between metonymies and metaphors: selected metaphors are used to extract occurrences to be refined using metonymies.
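
As a toy sketch (the data are illustrative assumptions), the two moves can be contrasted as navigation along extensional links versus matching on shared intensions.

    # Metonymy navigates contiguity links between instances (extensions);
    # metaphor matches terms on shared subsets of features (intensions).
    contiguity = {  # extensional links between instances
        "US President": ["White House"],
        "White House": ["Washington DC"],
    }
    intensions = {  # features characterizing each term
        "lion": {"predator", "feline", "courage"},
        "hero": {"courage", "human"},
    }

    def metonymy(term: str) -> list[str]:
        """Substitute terms linked by physical or functional proximity."""
        return contiguity.get(term, [])

    def metaphor(term: str, min_shared: int = 1) -> list[str]:
        """Substitute terms whose intensions share enough features."""
        source = intensions.get(term, set())
        return [t for t, feats in intensions.items()
                if t != term and len(source & feats) >= min_shared]

    print(metonymy("US President"))  # ['White House']
    print(metaphor("hero"))          # ['lion'] (shared feature: courage)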


Memes & their making

Today general purpose search engines combine brains and brawn to match queries to references, the former taking advantage of language parsers and statistical inferences, the latter running heuristics on gigantic repositories of data and searches. Given the exponential growth of accumulated data and the benefits of hindsight, such engines can continuously improve the relevancy of their answers; moreover, their advances are not limited to accuracy and speed but also embrace a better understanding of queries. And that brings about a qualitative change: accruing improved understandings to knowledge bases provides search engines with learning capabilities.

Assuming that such learning is autonomous, self-sustained, and encompasses concepts and categories, the Web could be seen as a semantic incubator for the development of meanings. That would vindicate Dawkins’ intuition comparing the semantic evolution of memes to the Darwinian evolution of living organisms.

Further Readings

External Links

Governance, Regulations & Risks

July 16, 2014

Governance & Environment

Confronted with spreading and sundry regulations on one hand, and the blurring of enterprise boundaries on the other, corporate governance has to adapt information architectures to new requirements with regard to regulations and risks. Interestingly, those requirements seem to be driven by two different knowledge policies: what should be known with regard to compliance, and what should be looked for with regard to risk management.

Balancing Risks with Regulations


Compliance: The Need to Know

Enterprises are meant to conform to rules, some set at corporate level, others set by external entities. If one may assume that enterprise agents are mostly aware of the former, that’s not necessarily the case for the latter, which means that information and understanding are prerequisites for regulatory compliance:

  • Information: the relevant regulations must be identified, collected, and their changes monitored.
  • Understanding: the meanings of regulations must be analyzed and the consequences of compliance assessed.

With regard to information processing capabilities, it must be noted that, since regulations generally come as well structured information with formal meanings, the need for data processing will be limited, if at all.

With regard to governance, given the pervasive sources of external regulations and their potentially crippling consequences, the challenge will be to circumscribe the relevant sources and manage their consequences with regard to business logic and organization.

Regulatory Compliance vs Risk Management


Risks: The Will to Know

Assuming that the primary objective of risk management is to deal with the consequences (positive or negative) of unexpected events, its information priorities can be seen as the opposite of the ones of regulatory compliance:

  • Information: instead of dealing with well-defined information from trustworthy sources, risk management must process raw data from ill-defined or unreliable origins.
  • Understanding: instead of mapping information to existing organization and business logic, risk management will also have to explore possible associations with still potentially unidentified purposes or activities.

In terms of governance, risk management can therefore be seen as the symmetric of regulatory compliance: the former relies on processing data into information and expanding the scope of possible consequences, the latter on translating information into knowledge and reducing the scope of possible consequences.

With regard to regulations governance is about reduction, with regard to risks it’s about expansion

Not surprisingly, that understanding coincides with the traditional view of governance as a decision-making process balancing focus and anticipation.

Decision-making: Framing Risks and Regulations

As noted above, regulatory compliance and risk management rely on different knowledge policies, the former restrictive, the latter inclusive. That distinction also coincides with the type of factors involved and the type of decision-making:

  • Regulations are deontic constraints, i.e. ones whose assessment is not subject to enterprise decision-making. Compliance policies will therefore try to circumscribe the footprint of regulations on business activities.
  • Risks are alethic constraints, i.e. ones whose assessment is subject to enterprise decision-making. Risk management policies will therefore try to prepare for every contingency.

Yet, when set in a governance perspective, that picture can be misleading, because regulations are not always mandatory, and even mandatory ones may leave room for compliance adjustments. And when regulations are elective, compliance is less driven by sanctions or penalties than by the assessment of business or technical alternatives.

Decision patterns: Options vs Arbitrage

Conversely, risks do not necessarily arise from unidentified events and outcomes, but can also come from well-defined outcomes with unknown likelihood. Managing the latter will not be very different from dealing with elective regulations, except that decisions will be about weighted opportunity costs instead of business alternatives. Similarly, managing risks from unidentified events and outcomes can be compared to compliance with mandatory regulations, with insurance policies instead of compliance costs.

When to Decide: Last Responsible Moment

Finally, with regulations’ scope and weighted risks duly assessed, one has to consider the time-frames of decisions about compliance and commitments.

Regarding elective regulations and defined risks, the time-frame of decisions is set at enterprise level in so far as options can be directly linked to business strategies and policies. That’s not the case for compliance to mandatory regulations or commitments exposed to undefined risks since both are subject to external contingencies.

Whatever the source of the time-frame, the question is when to decide, and the answer is at the “last responsible moment”, i.e. not until further delay would change the possible options:

  • Whether elective or mandatory, the “last responsible moment” for compliance decision is static because the parameters are known.
  • Whether defined or not, the “last responsible moment” for commitments exposed to risks is dynamic because the parameters are to be reassessed periodically or continuously.

Compliance and risk taking: last responsible moments to decide

One step ahead along that path of reasoning, the ultimate challenge of regulatory compliance and risk management would be to use the former to steady the latter.

Further Readings

EA: Entropy Antidote

June 24, 2014

Cybernetics & Governance

When seen through cybernetics glasses, enterprises are social entities whose sustainability and capabilities hang on their ability to track changes in their environment and exploit opportunities before their competitors. As a corollary, corporate governance is to be contingent on fast, accurate and purpose-driven reading of environments on one hand, effective use of assets on the other hand.

Entropy is about energy and disorder, information and confusion (L.Delahaye)

And that will depend on enterprises’ capacity to capture data, process it into information, and translate information into knowledge supporting decision-making. Since that capacity is itself determined by architectures, a changing and competitive environment will require continuous adaptation of enterprises’ organization. That’s when disorder and confusion may increase: unless a robust and flexible organization can absorb and consolidate changes, variety will progressively clog the systems with specific information associated with local adjustments.

Governance & Information

Whatever its type, effective corporate governance depends on timely and accurate information about the actual state of assets and environments. Hence the need to assess such capabilities independently of the type of governance structure that has to be supported, and of any specific business context.

Effective governance is contingent on the distance between actual state of assets and environment on one hand, relevant information on the other hand.

That puts the focus on the processing of information flows supporting the governance of interactions between enterprises and their environment:

  • How to identify the relevant facts and monitor them as accurately and timely as required.
  • How to process external data from environment into information, and to consolidate the outcome with information related to enterprise objectives and internal states.
  • How to put the consolidated information to use as knowledge supporting decision-making.
  • How to monitor processes execution and deal with relevant feedback data.

What is behind enterprise ability to track changes in environment and exploit opportunities.

Enterprises being complex social constructs, those tasks can only be carried out through structured organization and communication mechanisms supporting the processing of information flows.

Architectures & Changes

Assuming that enterprise governance relies on accurate and timely information with regard to internal states and external environments, the first step would be to distinguish between the descriptions of actual contexts on one hand, and of symbolic representations on the other hand.

Enterprise architectures can be described along two dimensions: nature (actual or symbolic), and target (objects or activities).

Even for that simplified architecture, assessing variety and information processing capabilities in absolute terms would clearly be a challenge. But assessing variations should be both easier and more directly useful.

Change being by nature relative to time, the first thing is to classify changes with regard to time-frames:

  • Operational changes are occurring, and can be dealt with, within the time-frame of processes execution.
  • Structural changes affect contexts and assets, and cannot be dealt with at process level as they exceed the time-frame of processes execution.

On that basis the next step will be to examine the tie-ups between actual changes and symbolic representations:

  • From actual to symbolic: how changes in environments are taken into account; how processes execution and state of assets are monitored.
  • From symbolic to actual: how changes in business models and processes design are implemented.

What moves first: actual contexts and processes or enterprise abstractions.

The effects of those changes on overall governance capability will depend on their source (internal or external) and modality (planned or not).

Changes & Information Processing

As far as enterprise governance is considered, changes can be classified with regard to their source and modality.

With regard to source:

  • Changes within the enterprise are directly meaningful (data>information), purpose-driven (information>knowledge), and supposedly manageable.
  • Changes in environment are not under control, they may need interpretation (data<?>information), and their consequences or use are to be explored (information<?>knowledge).

With regard to modality:

  • Data associated with planned changes are directly meaningful (data>information) whatever their source (internal or external); internal changes can also be directly associated with purpose (information>knowledge);
  • Data associated with unplanned internal changes can be directly interpreted (data>information) but their consequences have to be analyzed (information<?>knowledge); data associated with unplanned external changes must be interpreted (data<?>information).

Changes can be classified with regard to their source (enterprise or environment) and modality (planned or not).
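
That taxonomy can be summed up in a small decision function (a sketch; labels are assumptions) returning the processing each kind of change calls for.

    # Map the source/modality of a change to its processing requirements.
    def processing_needs(internal: bool, planned: bool) -> list[str]:
        needs = []
        if not (planned or internal):
            needs.append("interpret data (data <?> information)")
        if not internal:
            needs.append("explore consequences (information <?> knowledge)")
        elif not planned:
            needs.append("analyze consequences (information <?> knowledge)")
        return needs or ["directly meaningful and purpose-driven"]

    assert processing_needs(internal=True, planned=True) == ["directly meaningful and purpose-driven"]
    assert len(processing_needs(internal=False, planned=False)) == 2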

Assuming with Stafford Beer that viable systems must continuously adapt their capabilities to their environment, this taxonomy has direct consequences for their governance:

  • Changes occurring within planned configurations are meant to be dealt with, directly (when stemming from within enterprise), or through enterprise adjustments (when set in its environment).
  • That assumption cannot be made for changes occurring outside planned configurations because the associated data will have to be interpreted and consequences identified prior to any decision.

Enterprise governance will therefore depend on the way those changes are taken into account, and in particular on the capability of enterprise architectures to process the flows of associated data into information, and to use it to deal with variety.

EA & Models

Originally defined by thermodynamics as a measure of heat dissipation, the concept of entropy has been taken over by cybernetics as a measure of the (supposedly negative) variation in the value of information supporting corporate governance.

As noted above, the key challenge is to manage the relevancy and timely interpretation and use of data, in particular when new data cannot be mapped to a predefined semantic frame, as may happen with unplanned changes in contexts. How that can be achieved will depend on the processing of data and its consolidation into information, as carried out at enterprise level or by business and technical units.

Given that data is captured at the periphery of systems, one may assume that the monitoring of operations performed by business and technical units is not to be significantly affected by architectures. The same assumption can be made for market research, meant to be carried out at enterprise level.

Architecture Layers and Information Processing

Within that working assumption, the focus is to be put on enterprise architecture capability to “read” environments (from data to information), as well as to “update” itself (putting information to use as knowledge).

With regard to “reading” capabilities the primary factor will be traceability:

  • At technical level traceability between components and applications is required if changes in business operations are to be mapped to IT architecture.
  • At organizational level, the critical factor for governance will be the ability to adapt the functionalities of supporting systems to changes in business processes. And that will be significantly enhanced if both can be mapped to shared functional concepts.

Once the “readings” of external changes are properly interpreted with regard to internal assets and objectives, enterprise governance will have to decide if changes can be dealt with by the current architecture or if it has to be modified. Assuming that change management is an intrinsic part of enterprise governance, “updating” capabilities will rely on a continuous, comprehensive and consistent management of information, best achieved through models, as epitomized by the Model Driven Architecture (MDA) framework.

Models as bridges between data and knowledge

Based on requirements capture and analysis, respective business, functional, and technical information is consolidated into models:

  • At technical level platform specific models (PSMs) provide for applications and components traceability. They support maintenance and configuration management and can be combined with design patterns to build modular software architecture from reusable components.
  • At organizational level, platform independent models (PIMs) are used to align business processes with systems functionalities. Combined with functional patterns the objective is to use service oriented architectures as a level of indirection between organization and information technology.
  • At enterprise level, computation independent models (CIMs) are meant to bring together corporate tangible and intangible assets. That’s where corporate culture will drive architectural changes from systems legacy, environment challenges, and planned designs.

EA & Entropy

Faced with continuous changes in their business environment and competition, enterprises have to navigate between the rocks of rigidity and the whirlpools of variety, the former policies trying to carry on with existing architectures, the latter adding as many variants as seem to appear to business objects, processes, or channels. Meeting environments’ challenges while warding off growing complexity will depend on the plasticity and versatility of architectures, more precisely on their ability to “digest” the variety of data and transform it into corporate knowledge. Along that perspective enterprise architecture can be seen as a natural antidote to entropy, like a corporate cousin of Maxwell’s demon, standing at enterprise gates and allowing changes in a way that would decrease internal complexity relative to the external one.

Further Readings

The Finger & the Moon: Fiddling with Definitions

June 5, 2014

Objective

Given the glut of redundant, overlapping, circular, or conflicting definitions, it may help to remember that “define” literally means putting limits upon. Definitions and their targets are two different things, the former being language constructs (intensions), the latter sets of instances (extensions). As a Chinese patriarch once said, the finger is not to be confused with the moon.

Fiddling with words: to look at the moon, it is necessary to gaze beyond the finger.

In order to gauge and cut down the distance between words and world, definitions can be assessed and improved at functional and semantic levels.

Functional Assessment

Since definitions can be seen as a special case of non-exhaustive classifications, they can be assessed through a straightforward two-step routine:

  1. Effectiveness: applying candidate definition to targeted instances must provide clear and unambiguous answers (or mutually exclusive subsets).
  2. Usefulness: the resulting answers (or subsets) must directly support well-defined purposes.

Such a routine meets Occam’s razor parsimony principle by producing outcomes that are consistent (“internal” truth, i.e. no ambiguity), sufficient (they meet their purpose), and simple (mutually exclusive classifications are simpler than overlapping ones).
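
The routine lends itself to a direct sketch (a hypothetical example with a deliberately crude candidate definition): the candidate is applied as a predicate and the answers are checked for ambiguity.

    # Step 1 (effectiveness): every instance must get a clear yes/no,
    # producing mutually exclusive subsets; step 2 (usefulness) is then
    # a matter of checking the subsets against the purpose at hand.
    def assess_definition(predicate, instances):
        split = {True: [], False: []}
        for inst in instances:
            answer = predicate(inst)
            assert isinstance(answer, bool), f"ambiguous answer for {inst!r}"
            split[answer].append(inst)
        return split

    # e.g. a (crude) candidate definition of "functional requirement"
    requirements = ["the system shall log every access", "response time under 2s"]
    subsets = assess_definition(lambda r: "shall" in r, requirements)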

Functional assessment should also take feedback into account, as instances can be refined and purposes reconsidered, with the effect of improving initially disappointing outcomes. For instance, a good requirements taxonomy is supposed to be used to allocate responsibilities with regard to acceptance, and carrying out the classification may be accompanied by an improvement of requirements capture.

Semantic Improvement

Once functionally checked, candidate definitions can be assessed for semantics, and adjusted so as to maximize the scope and consistency of their footprint. While different routines can be used, all rely on tweaking words with neighboring meanings.

Here is a routine posted by Alexander Samarin: find a sentence with a term, substitute the term by its definition, and check if the sentence still makes sense (usually a few sentences are used for such a check).

For example, taking three separate definitions:
1. Discipline is a system of governing rules <for some work, conduct or activity>
2. A business process is an explicitly defined coordination for guiding the enactment of business activity flows
3. Business Process Management (BPM) is a discipline for the use of any combination of modeling, automation, execution, control, measurement and optimization of business processes

And a combined (or stand-alone) definition

Business process management (BPM) is [a system of governing rules] for the use of any combination of modeling, automation, execution, control, measurement and optimization of [explicitly defined coordination for guiding the enactment of business activity flows]
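
The check is mechanical enough to be sketched (definitions are trimmed of their leading articles so that literal substitution reads smoothly).

    # Samarin's routine: substitute a term by its bracketed definition
    # and read the resulting sentence back for sense.
    definitions = {
        "discipline": "system of governing rules",
        "business process": ("explicitly defined coordination for guiding "
                             "the enactment of business activity flows"),
    }

    def substitute(sentence: str, term: str) -> str:
        """Replace a term by its bracketed definition for a manual sense check."""
        return sentence.replace(term, "[" + definitions[term] + "]")

    sentence = "BPM is a discipline for the optimization of each business process"
    print(substitute(substitute(sentence, "discipline"), "business process"))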

Further Readings

EA Documentation: Taking Words for Systems

May 18, 2014

In so many words

Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises and architectures?

Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications, whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performances. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.

Documents’ purposes and users

Further Readings

Agile between Space & Time

May 2, 2014

Continuity of delivery and locus

While the scope of agile principles extends far beyond the eponymous methods, some of them are more specific and their applicability contingent on contexts and objectives; two of them are especially important as they entail very specific assumptions with regard to time and space:

  • Continuous delivery of valuable software.
  • Direct collaboration between software users and developers all along the development process.

Continuity and collaboration (H. Zamora)

Time Capsules vs Time Scales

If time is seen as a discrete gauge introduced to sequence events, it ensues that continuity means no external events are to be taken into account. In other words, once its objectives have been set, software development must be carried out at its own pace, whatever happens in business context and enterprise organization. That’s a pretty strong assumption, one that should be explicitly endorsed by stakeholders and users.

Interestingly, this assumption can be set against fixed requirements and upfront design associated with waterfall approaches, as if development policies were to be set between two alternative options:

  1. Time capsules: projects deal with changing requirements subject to frozen business context and organization.
  2. Time scales: projects deal with frozen requirements subject to planned changes in business context and organization.

A pragmatic approach should take the best of each option: put limited and self-contained objectives into capsules and organize capsules along scales.

Collaborative vs Procedural Spaces

The reason for processes is that tasks cannot be carried out simultaneously, and the reason for human contribution is that some tasks involve decision-making based on circumstances and provisos that cannot be determined upfront.

Those decisions may concern different domains of concern, each with its respective objectives and roles: business requirements, systems functionalities, or technical implementations. Agile and phased approaches clearly disagree about how those decisions are to be taken and conveyed along processes and across organizational spaces, the former ruling that all the parties should deal with them jointly, the latter opting for a separation of concerns. As a counterpart to time continuity, the agile recommendation implies some continuity across organizational spaces that cannot be taken for granted. Whether that proviso can be met will determine the choice of a development policy:

  • Collaborative: problems are solved and decisions taken through collaboration between parties within a single organizational space.
  • Procedural: parties deal separately with problems and decision-making within their respective organizational spaces, and communication between those spaces is carried out through prearranged channels.

Those options clearly depend on organizational and technical environments, hence the benefits of charting projects with regard to constraints on ownership and delivery.

Charting projects: Ownership and Delivery

Given that choosing the right development model is a primary factor of success, decision-making should rely on simple and robust criteria. First, if decisions are to be taken and implemented collectively, shared governance has to be secured. Second, if deliveries are to be carried out continuously, projects shouldn’t be dependent on external events.

When charted with regard to ownership and delivery, projects can be regrouped into four basic categories:

  • Business requirements with distinct stakeholders and objectives directly associated with business value (a).
  • Business requirements with objectives and business value involving separate stakeholders (b).
  • Functional requirements targeting features used by different applications (c).
  • Non functional requirements targeting features used by different applications across different systems (d).

Charting projects with regard to ownership and delivery constraints
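
Those two criteria can be summed up in a small decision rule (a sketch; the mapping to policies is an assumption based on the chart above).

    # Choose a development policy from the two robust criteria: shared
    # governance (ownership) and self-contained, continuous delivery.
    def development_policy(shared_governance: bool, continuous_delivery: bool) -> str:
        if shared_governance and continuous_delivery:
            return "agile: collaborative decisions, continuous delivery"
        if shared_governance:
            return "collaborative decisions, milestone-based delivery"
        if continuous_delivery:
            return "procedural decisions, iterative delivery"
        return "phased: procedural decisions, milestone-based delivery"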

Yet, it must be remembered that those constraints are not necessarily set once and for all, and organizations can be adapted to projects, or even to development policies set globally.

Further Readings

