Archive for the ‘Decision-making’ Category

Ontologies as Productive Assets

January 22, 2018


An often overlooked benefit of artificial intelligence has been a renewed interest in seminal philosophical and cognitive topics, with ontologies coming top of the list.

Ontological Questioning (The Thinker Monkey, Breviary of Mary of Savoy)

Yet that interest has often been led astray by misguided perspectives, in particular:

  • Universality: one-size-fits-all approaches are pointless if not self-defeating, considering that ontologies are meant to target specific domains of concern.
  • Implementation: the focus is usually put on representation schemes (e.g the Resource Description Framework, or RDF), instead of the nature of the targeted knowledge and the associated cognitive capabilities.

Those misconceptions, often combined, may explain the limited practical inroads of ontologies. Conversely, they also point to ontologies’ wherewithal for enterprises immersed in boundless and fluctuating knowledge-driven business environments.

Ontologies as Assets

Whatever the name of the matter (data, information, or knowledge), there isn’t much argument about its primacy for business competitiveness; insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable as financial ones if not more so, and should be managed accordingly. Pushing the comparison further, data would be likened to liquidity, information to fixed income investment, and knowledge to capital ventures. To summarize, assets, whatever their nature, lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:

  • Digitized business flows accelerate data obsolescence and make it continuous.
  • Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.

But assessing the business value of knowledge has always been a matter of intuition rather than accounting, even when it can be patented; and most knowledge takes shape well beyond regulatory reach. Nonetheless, knowledge is not manna from heaven but the outcome of information processing, so assessing the capabilities of such processes could help.

Admittedly, traditional modeling methods are too stringent for that purpose, and looser schemes are needed to accommodate the open range of business contexts and concerns; as already expounded, that’s precisely what ontologies are meant to do, e.g:

  • Systems modeling, with a focus on integration, e.g Zachman Framework.
  • Classifications, with a focus on range, e.g Dewey Decimal System.
  • Conceptual models, with a focus on understanding, e.g legislation.
  • Knowledge management, with a focus on reasoning, e.g semantic web.

And ontologies can do more than bringing under a single roof the whole of enterprise knowledge representations: they can also be used to nurture and crossbreed symbolic assets and develop innovative ones.

Ontologies’ Benefits

Knowledge is best understood as information put to use; accounting rules may be disputed but there is no argument about the benefits of a canny combination of information, circumstances, and purpose. Nonetheless, assessing knowledge returns is hampered by the lack of traceability: if a part of knowledge is explicit and subject to symbolic representation, another is implicit and manifests itself only through actual behaviors. At philosophical level it’s the line drawn by Wittgenstein: “The limits of my language mean the limits of my world”; at technical level it’s AI’s two-lane approach: symbolic rule-based engines vs non-symbolic neural networks; at corporate level implicit knowledge is seen as some unaccounted for aspect of intangible assets, when not simply blended into corporate culture. With knowledge becoming a primary success factor, a more reasoned approach to its processing is clearly needed.

To begin with, symbolic knowledge can be plied by logic, which, quoting Wittgenstein again, “takes care of itself; all we have to do is to look and see how it does it.” That would be true on two conditions:

  • Domains are to be well circumscribed. 
  • A watertight partition must be secured between the logic of representations and the semantics of domains.

That could be achieved with modular and specific ontologies built on a clear distinction between common representation syntax and specific domain semantics.
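The distinction can be made concrete with a few lines of Python; this is a minimal sketch under stated assumptions (the Ontology class, the vocabularies, and the facts are all illustrative, not a standard API): the triple-based syntax is shared by every module, while each domain’s semantics stay behind its own watertight vocabulary.

```python
from collections import namedtuple

# Common representation syntax: plain triples, domain-agnostic.
Triple = namedtuple("Triple", ["subject", "predicate", "object"])

class Ontology:
    """A modular ontology: shared syntax, domain-specific vocabulary."""
    def __init__(self, domain, vocabulary):
        self.domain = domain
        self.vocabulary = set(vocabulary)  # domain semantics live here
        self.triples = set()

    def assert_fact(self, subject, predicate, obj):
        # The logic of representations is checked independently of
        # what the terms mean within any particular domain.
        if predicate not in self.vocabulary:
            raise ValueError(f"'{predicate}' is not in the {self.domain} vocabulary")
        self.triples.add(Triple(subject, predicate, obj))

    def facts(self, predicate):
        return [t for t in self.triples if t.predicate == predicate]

# Two circumscribed domains, kept watertight from one another.
insurance = Ontology("insurance", {"covers", "excludes"})
medical = Ontology("medical", {"treats", "contraindicates"})

insurance.assert_fact("PlanA", "covers", "surgery")
medical.assert_fact("aspirin", "treats", "headache")
# insurance.assert_fact("PlanA", "treats", "headache")  # -> ValueError
```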

As for non-symbolic knowledge, its processing has long been overshadowed by the preeminence of symbolic rule-based schemes, that is until neural networks got the edge and deep learning overturned the playground. In a few years’ time, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, enterprises would need to understand, assess, and improve the refining machinery. That could be done with ontological frames.



Open Ontologies: From Silos to Architectures

January 1, 2018

To be of any use for enterprises, ontologies have to embrace a wide range of contexts and concerns, often ill-defined for environments, though rather well expounded for systems.

Circumscribed Contexts & Crossed Concerns (Robert Goben)

And now that enterprises have to compete in open, digitized, and networked environments, business and systems ontologies have to be combined into modular knowledge architectures.

Ontologies & Contexts

If open-ended business contexts and concerns are to be taken into account, the first step should be to characterize ontologies with regard to their source, justification, and the stability of their categories, e.g:

  • Social: No authority, volatile, continuous and informal changes.
  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, periodic, changes subject to internal decision-making.
  • Personal: Customary, defined by named individuals (e.g research paper).

Assuming such an external taxonomy, the next step would be to see how internal (i.e enterprise architecture) ontologies could be fitted into it, as is the case for the Zachman framework.

The Zachman taxonomy is built on well established concepts (Who, What, How, Where, When) applied across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies). These layers can be generalized and applied uniformly across external contexts, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).
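As a schematic rendering of that uniformity, here is a minimal Python sketch; the cell entries are purely illustrative, and the point is only that internal layers and external contexts share the same motif:

```python
# The same interrogative motif applied uniformly across architecture
# layers and external contexts; all entries are illustrative.
MOTIF = ("Who", "What", "How", "Where", "When")

def frame(scope):
    """One empty frame of the grid for a given scope."""
    return {question: set() for question in MOTIF}

# Internal (enterprise architecture) layers...
grid = {layer: frame(layer) for layer in ("enterprise", "systems", "platforms")}
grid["enterprise"]["Who"].add("organizational units and roles")
grid["systems"]["What"].add("symbolic descriptions of business objects")
grid["platforms"]["Where"].add("deployment sites and network nodes")

# ...and external contexts, well-defined or fuzzy, get the same motif,
# which is what opens the door to seamless integration across semantics.
grid["regulations"] = frame("regulations")
grid["regulations"]["When"].add("enforcement dates")
```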

That “divide and conquer” strategy is to serve two purposes:

  • By bridging the gap between internal and external taxonomies it significantly enhances the transparency of governance and decision-making.
  • By applying the same motif (Who, What, How, Where, When) across the semantics of contexts, it opens the door to a seamless integration of all kinds of knowledge: enterprise, professional, institutional, scientific, etc.

As can be illustrated using Zachman concepts, the benefits are straightforward at enterprise architecture level (e.g procurement), due to the clarity of supporting ontologies; not so for external ones, which are by nature open and overlapping and often come with blurred semantics.

Ontologies & Concerns

A broad survey of RDF-based ontologies demonstrates how semantic overlaps and folds can be sorted out using a built-in differentiation between domains’ semantics on one hand, and the structure and processing of symbolic representations on the other hand. But such schemes are proprietary, and evidence shows their lines seldom tally, with dire consequences for interoperability: even without taking into account relationships and integrity constraints, weaving together ontologies from different sources is bound to be cumbersome, the costs substantial, and the outcome often reduced to a muddy maze of ambiguous semantics.

The challenge would be to generalize the principles so as to set a basis for open ontologies.

Assuming that a clear line can be drawn between representation and contents semantics, with standard constructs (e.g predicate logic) used for the former, the objective would be to classify ontologies with regard to their purpose, independently of their representation.

The governance-driven taxonomy introduced above deals with contexts and consequently with coarse-grained modularity. It should be complemented by a fine-grained one driven by concerns, more precisely by the epistemic nature of the individual instances to be denoted. As it happens, that could also tally with the Zachman taxonomy:

  • Thesaurus: ontologies covering terms and concepts.
  • Documents: ontologies covering documents with regard to topics.
  • Business: ontologies of relevant enterprise organization and business objects and activities.
  • Engineering: symbolic representation of organization and business objects and activities.

Ontologies: Purposes & Targets

Enterprises could then pick and combine templates according to domains of concern and governance, as illustrated by the sketch after this list. Taking an on-line insurance business for example, enterprise knowledge architecture would have to include:

  • Medical thesaurus and consolidated regulations (Knowledge).
  • Principles and resources associated with the web-platform (Engineering).
  • Description of products (e.g vehicles) and services (e.g insurance plans) from partners (Business).
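A minimal sketch of that pick-and-combine scheme, in Python with hypothetical module names, crosses the governance-driven axis (contexts) with the concern-driven one:

```python
from dataclasses import dataclass, field

# Coarse-grained axis: governance of contexts (who owns the categories).
CONTEXTS = {"social", "institutional", "professional", "corporate", "personal"}
# Fine-grained axis: epistemic nature of the concerns.
CONCERNS = {"thesaurus", "documents", "business", "engineering"}

@dataclass
class OntologyModule:
    name: str
    context: str  # governs change: authority, stability, procedures
    concern: str  # governs use: nature of the denoted instances
    terms: set = field(default_factory=set)

    def __post_init__(self):
        assert self.context in CONTEXTS and self.concern in CONCERNS

# An on-line insurance business picks and combines templates:
architecture = [
    OntologyModule("medical-thesaurus", "professional", "thesaurus"),
    OntologyModule("consolidated-regulations", "institutional", "documents"),
    OntologyModule("web-platform", "corporate", "engineering"),
    OntologyModule("partner-products", "professional", "business"),
]

# Modularity: externally governed modules can be swapped on regulatory
# change without touching the ones set through internal decision-making.
external = [m for m in architecture if m.context in {"social", "institutional"}]
```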

Such designs of ontologies according to the governance of contexts and the nature of concerns would significantly reduce blanket overlaps and improve the modularity and transparency of ontologies.

On a broader perspective, that policy will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, strategic (e.g platform) or tactical (e.g partnerships).

Open Ontologies’ Benefits

Benefits from open and formatted ontologies built along an explicit distinction between the semantics of representation (aka ontology syntax) and the semantics of context can be directly identified for:

  • Modularity: the knowledge basis of enterprise architectures could be continuously tailored to changes in markets and corporate structures without impairing enterprise performance.
  • Integration: the design of ontologies with regard to the nature of targets and the stability of categories could enable built-in alignment mechanisms between knowledge architectures and contexts.
  • Interoperability: limited overlaps and finer granularity are to greatly reduce frictions when ontologies bearing out business processes are to be combined or extended.
  • Reliability: formatted ontologies can be compared to typed programming languages with regard to transparency, internal consistency, and external validity.

Last but not least, such reasoned design of ontologies may open new perspectives for the collaboration between cognitive humans and pretending ones.


2018: Clones vs Octopuses

December 4, 2017

In the footsteps of robots replacing workmen, deep learning bots look to boot out knowledge workers overwhelmed by muddy data.

Cloning Knowledge (Tadeusz Kantor, from “The Dead Class”)

Faced with that prospect, should humans try to learn deeper and faster than clones, or should they learn from octopuses and their smart hands?

Machine Learning & The Economics of Clones

As illustrated by scan-reading AI machines, the spreading of learning AI technology into every nook and cranny introduces something like an exponential multiplier: compared to the power-loom of the Industrial Revolution, which substituted machines for workers, deep learning is substituting replicators for machines; and contrary to power looms, there is no physical limitation on the number of smart clones that can be deployed. So, however fast and deep humans can learn, clones are much too prolific: it’s a no-win situation. To get out of that conundrum humans have to lay their hands on a competitive edge, e.g some kind of knowledge that cannot be cloned.

Knowledge & Competition

Appraising humans’ learning sway over machines, one can draw on Spinoza’s categories of knowledge with regard to sources:

  1. Senses (sights, sounds, smells, touches) or beliefs (as nurtured by the supposed common “sense”). Artificial sensors can compete with human ones, and smart machines are much better if prejudiced beliefs are put into the equation.
  2. Reasoning, i.e the mental processing of symbolic representations. As demonstrated by AlphaGo, machines are bound to rapidly extend their competitive edge.
  3. Philosophy which is by essence meant to bring together perceptions, intuitions, and symbolic representations. That’s where human intelligence could beat its artificial cousin which is clueless when purposes are needed.

That assessment is borne out by evolution: the absolute dominance established by humans over other animal species comes from their use of knowledge, which can be summarized as:

  1. Use of symbolic representations.
  2. Ability to formulate and exchange representations of contexts, concerns, and policies.
  3. Ability to agree on stakes and cooperate on policies.

On that basis, the third dimension, i.e the use of symbolic knowledge to cooperate on non-zero-sum endeavors, can be used to draw the demarcation line between human and artificial intelligence:

  • Paths and paces of pursuits are part and parcel of the knowledge itself. The fact that both are mostly obviated by search engines gives humans some edge.
  • Operational knowledge is best understood as information put to use, and must include concerns and decision-making. But smart bots’ ubiquity and capabilities often sap information traceability and decision transparency, which makes room for humans to prevail.

So humans can find a clear competitive edge in this knowledge dimension because it relies on a combination of experience and thinking and is therefore hard to clone. Organizations should make sure that’s where smart systems step back and humans step up.

Organization & Innovation

Innovation being at the root of competitive edge, understanding the role played by smart systems is a key success factor; that role is to be defined by organization.

As epitomized by Henry Ford, industrial-era thinking associated innovation with top-down management and the specialization of execution:

  • At execution level manual tasks were to be fragmented and specialized.
  • At management level analysis and decision-making were to be centralized and abstracted.

That organizational paradigm puts a double restraint on innovation:

  • On execution side the fragmentation of manual tasks prevents workers from effectively assessing and improving their performances.
  • On management side knowledge is kept in conceptual boxes and bereft of feedback from actual uses.

That railing between smart brains and dumb hands may have worked well enough for manufacturing processes limited to material flows and subject to circumscribed and predictable technological changes. It didn’t last.

First, as such hierarchies necessarily grow with process complexity, overheads and rigidity force repeated pruning. Then, flat hierarchies are of limited use when information flows are to be combined with material ones, so enterprises have had to resort to matrix organizations. Finally, with the seamless integration of digital and material flows, perpetuating the traditional line between management and execution is bound to hamstring innovation:

  • Smart tools may be able to perform a wide range of physical tasks without human supervision, but the core of innovation, as well as its front lines, is where humans and machines collaborate in processing a mix of material and information flows, both learning from the experience.
  • Hierarchies and centralized decision-making are being cut off from their feeds when set in networked business environments colonized by smart bots on both sides of corporate boundaries.

Not surprisingly, these innovation trends seem to tally with the social dimension of knowledge.

Learning from the Octopus

The AI revolution has already broken all historical records of footprint (everything is affected) and speed (a matter of years). Given the length of human education cycles, appraising the consequences comes with some urgency, beginning with the disposal of entrenched beliefs; the new paradigm can then be appraised at two levels:

At individual level the new paradigm could be compared to the nervous system of octopuses: each arm gets its brain and neurons, and so its own touch of knowledge and taste of decision-making.

On a broader (i.e enterprise) perspective, knowledge should be supported by two organizational layers, one direct and innovation-driven between trusted co-workers, the other networked and knowledge-driven between remote workers, trusted or otherwise.


Beans must be Counted, one way And the other

May 2, 2017


Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality: unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play it safe and keep their respective assessments on the opposite banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics use prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance, so a tally of software beans should not be an option.


New Year: 2016 is the One to Learn

December 15, 2016

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.


Deep Mind Learning (J. Bosch)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) and machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

AlphaGo’s astonishing performances have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.

From consumers behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.


Business Agility vs Systems Entropy

November 28, 2016


As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.


Agility & Orientation (László Moholy-Nagy)

Of particular interest for today’s business operational decision-making is the orientation step, i.e the actual positioning of actors and the associated cognitive representations; the point being to use AI deep learning capabilities to surmise opponents’ plans and misdirect their anticipations. That new dimension and its focus on information bring back cybernetics as a tool for enterprise governance.

In the Loop: OODA & Information Processing

Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.


OODA loop and its actual (red) and symbolic (blue) contexts.

One step further, contexts may be readily matched with systems descriptions:

  • Business contexts (territories) for observations.
  • Models of business objects (maps) for orientation.
  • Business logic (objectives) for decisions.
  • Business processes (supporting systems) for action.

The OODA loop and System Perspectives

That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering; a minimal sketch of the loop follows.
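The sketch below, assuming toy stand-ins for territories, maps, objectives, and processes, shows how the four steps line up with their systems counterparts:

```python
# Each step is matched with its systems counterpart from the list above;
# all names and rules are illustrative stand-ins.

def observe(territory):
    # Business contexts (territories) -> raw data
    return territory["events"]

def orient(data, business_objects):
    # Models of business objects (maps) -> information
    return [business_objects.get(event, "unknown") for event in data]

def decide(information, business_logic):
    # Business logic (objectives) -> decision
    return business_logic(information)

def act(decision, processes):
    # Business processes (supporting systems) -> action
    processes.append(decision)

territory = {"events": ["order", "payment", "return"]}
business_objects = {"order": "sale", "payment": "cash-in", "return": "refund"}
business_logic = lambda info: "restock" if "refund" in info else "ship"
processes = []

for _ in range(3):  # agility is the tempo of the loop, not any single step
    data = observe(territory)
    act(decide(orient(data, business_objects), business_logic), processes)
```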

Architectures & Business Agility

Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.

Architecture consistency (left) is supposed to be achieved by systems engineering outside the OODA loop:

  • Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
  • Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.

Functional architecture (right) is to bridge the gap between technical and software architectures and provide for operational coupling.


Business Agility: systems architectures and business operations

Operational coupling depends on functional architecture and is carried out within the OODA loop. The challenge is to change tack on-the-fly with minimum frictions between actual and symbolic contexts, i.e:

  • Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
  • Departures between business logic (objectives and decisions) and business processes (systems and actions).

When positive, operational coupling associates business agility with its architecture counterpart, namely plasticity and versatility; when negative, it suffers from frictions, or what cybernetics calls entropy.

Systems & Entropy

Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.

With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).

So long as architectures and operations were set along different time-frames (e.g strategic and tactical), cybernetics was of limited relevancy. But the seamless integration of data analytics, operational decision-making, and IT supporting systems puts a new light on the role of entropy, as illustrated by Boyd’s OODA and its orientation component.
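One way to make the notion concrete, assuming discrete distributions over illustrative market segments, is to measure the disorder of the environment (Shannon entropy) and the information lost when a symbolic surrogate is used in place of actual observations (Kullback-Leibler divergence); this is a sketch of the idea, not the cybernetic definition itself:

```python
from math import log2

def entropy(dist):
    """Shannon entropy of a discrete distribution (bits)."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def divergence(actual, surrogate):
    """Kullback-Leibler divergence: the information lost when the
    symbolic surrogate stands in for actual observations."""
    return sum(p * log2(p / surrogate[k]) for k, p in actual.items() if p > 0)

# Observed market segments vs the enterprise's model of them.
observed = {"segment_a": 0.5, "segment_b": 0.3, "segment_c": 0.2}
modeled = {"segment_a": 0.6, "segment_b": 0.3, "segment_c": 0.1}

print(entropy(observed))              # disorder in the environment
print(divergence(observed, modeled))  # friction due to misleading maps
```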

Orientation & Agility

While much has been written about how data analytics and operational decision-making can be neatly and easily fitted into the OODA paradigm, particular attention is to be paid to orientation.

As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: the positioning of an agent with regard to external (e.g spatial) coordinates, possibly qualified with the agent’s abilities to observe, move, or act.
  • Symbolic: the positioning of an agent with regard to its own internal (e.g beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.

That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.

Agility vs Entropy

Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on an enterprise’s ability to export entropy, that is to decrease confusion and disorder inside, and increase them outside.

Working on that assumption, one should first characterize the flows of information to be considered:

  • Territories and observations: identification of business objects and events, collection and analysis of associated data.
  • Maps and orientations: structured and consistent description of business domains.
  • Objectives and decisions: structured and consistent description of business activities and rules.
  • Systems and actions: business processes and capabilities of supporting systems.

Static assessment of technical and software architectures, for observation and decision respectively.

Then, a static assessment of information flows would start with the standing of technical and software architecture with regard to competition:

  • Technical architecture: how the alignment of operations and resources facilitate actions and observations.
  • Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.

A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:

  • How the mapping of territories’ identities and features helps observation and orientation.
  • How decision-making and the realization of business objectives are supported by processes’ designs.

Dynamic assessment of decision-making and the realization of business objectives as supported by processes’ designs.

Assuming a corporate cousin of Maxwell’s demon with deep learning capabilities standing at the gates in its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e enterprise representations) and increase external complexity (i.e competitors’ representations).

That is to be achieved with the integration of operational analytics, business intelligence, and decision-making.


Seamless integration of operational analytics, business intelligence, and decision-making.


Business Agility & the OODA Loop

November 21, 2016


The OODA (Observation, Orientation, Decision, Action) loop is a real-time decision-making paradigm developed in the sixties by Colonel John Boyd from his experience as fighter pilot and military strategist.


How to get inside the opponent’s loop (László Moholy-Nagy)

The relevancy of OODA for today’s operational decision-making comes from the seamless integration of IT systems with business operations and the resulting merits of agile development processes.

Business: End of Discrete Time-Frames

Business governance used to be phased: analyze the market, select opportunities, build capabilities, launch operations. No more. With the melting of the fences between actual and symbolic realms, periodic transitional events have lost most of their relevancy. Deprived of discrete and robust time-frames, the weaving of observed facts with business plans has to be managed on the fly. Success now comes from continuous readiness, quicker tempo, and the ability to operate inside adversaries’ time-scales, for defense (force competitors out of favorable positions) as well as offense (get a competitive edge). Hence the reference to dogfights.

Dogfights & Agile Primacy

John Boyd’s train of thought started with the observation that, despite the apparent superiority of the Soviet MiG-15 over the US F-86 during the Korean war, US fighters stood their ground. From that factual observation it took Boyd’s comprehensive engineering work to demonstrate that, as far as dogfights were concerned, fast transients between maneuvers (aka agility) were more important than technical capabilities. Pushed up the Pentagon’s reluctant ladders by Boyd’s sturdy determination, that conclusion has had wide-ranging consequences in the design of USAF fighters and the training of pilots for the following generations. Its influence also spread to management, even if theories’ turnover is much faster there, and shelf-life much shorter.

Nowadays, with the accelerated integration of business processes with IT systems, agility is making a comeback from the software engineering corner. Reflecting business and IT convergence, principles like iterative development, just-in-time delivery, and lean processes, all epitomized by the agile software development model, are progressively mingling into business practices with strong resemblances to dogfights; and the resemblances are not only symbolic.

IT Systems & Business Competition

While some similarities between dogfights and business competition may seem metaphorical, one critical aspect is all too real, namely the increasing importance of supporting machines, IT systems or fighter jets.

Basically, IT systems, like fighters’ electronics, are tasked to observe environments, analyse changes in relation to position and objectives, and support decision-making. But today’s systems go further with two qualitative leaps:

  • The seamless integration of physical and symbolic flows let systems manage some overlapping between supporting decisions and carrying out actions.
  • Due to their artificial intelligence capabilities, systems can learn on-the-job and improve their performances in real-time feedback loops.

When combined, these two trends have a drastic impact on the way machines can support human activities in real-time competitive situations. More to the point, they bring new light on business agility.

Business Agility

As illustrated by the radical transformation of fighter cockpits, the merging of analog and digital flows leaves little room for human mediation: data must be processed into information and presented instantly along two critical dimensions, one for decision-making, the other for information life-cycle:

  • Man/Machine interfaces have to materialize the merging of actual and symbolic realms so as to support just-in-time decision-making.
  • The replacement of phased selected updates of environment data by continuous changes in raw and massive data means that the status of information has to be incorporated with the information itself, yet without impairing decision-making.

Beyond obvious differences between dogfights and business competition, that double exigency is to characterize business agility:

  1. Instant understanding of changes in business opportunities (Observation).
  2. Simultaneous assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations (Orientation).
  3. Weighting of options with regard to enterprise capabilities and broader objectives (Decision).
  4. Carrying out of decisions within the relevant time-span (Action).

That understanding of business agility is to be compared with its development and architecture cousins. Yet it doesn’t seem to add much to data analytics and operational decision-making. That is until the concept of orientation is reassessed.

Agility & Orientation: Task vs Tack

To begin with basics, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: a position with regard to external (e.g spatial) coordinates, possibly qualified with abilities to observe, move, or act.
  • Symbolic: a position with regard to internal (e.g beliefs or aims) references, possibly mixed with known or presumed orientation of other agents, opponents or associates.

When business is considered, data analytics is supposed to deal comprehensively and accurately with markets’ actual orientations. But the symbolic facet is left largely unexplored.

Boyd’s contribution is to bring together both aspects and combine them into actual practice, namely how to foretell the tack of your opponents from their actual tracks as well as their surmised plans, while fooling them about your own moves, actual or planned.

Such ambitions, once out of reach, can now be fulfilled thanks to the combination of big data, artificial intelligence, and the exponential growth in computing power.



Models as Parachutes

August 31, 2016


The recent paralysis of British Airways’ world operations (due to a power failure, if officials are to be believed), following the crash of Delta Airlines’ reservation system and a number of similar incidents, once again raises the question of the reliability of large and critical IT systems.


Models as Parachutes (László Moholy-Nagy)

Particularly at risk are airline or banking systems, whose seasoned infrastructures, at the cutting edge when introduced half a century ago, have been strained to their limits by waves of extensive networked new functionalities. Confronted with the magnitude and complexity of overall modernization, most enterprises have preferred piecemeal updates to architectural leaps. Such policies may bring some respite, but they may also turn into aggravating factors, increasing stakes and urgency as well as shortening odds.

Assuming some consensus about stakes, hazards, and options, the priority should be to overcome jumping fears by charting a reassuring perspective in continuity with the current situation. For that purpose models may provide heartening parachutes.

Models: Intents & Doubts

Models can serve two kinds of purposes:

  • Describe business contexts according to enterprise objectives, foretell evolution, and simulate policies.
  • Prescribe the architecture of supporting systems and the design of software components.

Business analysts figure maps from territories; software architects create territories from maps.

Models Purposes: Describe contexts & concerns, Design supporting systems

Frameworks were supposed to combine the two perspectives, providing a comprehensive and robust basis to systems governance. But if prescriptive models do play a significant role in engineering processes, in particular for code generation, they are seldom fed by their descriptive counterparts.

Broadly speaking, the noncommittal attitudes toward descriptive models come from a rooted mistrust of non-executable models: as far as business analysts and software engineers are concerned, such models can only serve as documentary evidence. And since prescriptive models are by nature grounded in systems’ inner making, there is no secure conceptual apparatus linking systemic changes with their technical consequences. Hence the jumping frights.

Overcoming those frights could be achieved by showing the benefits of secure and soft landings.

Models for Secure Landings

Like any tools, models must be assessed with regard to their purpose: prescriptive ones with regard to the feasibility and reliability of architectures and design, descriptive ones with regard to correctness and consistency. As already noted, compared to what has been achieved for the former, nothing much has been done about the validity of the latter.

Yet, and contrary to customary beliefs, the rigorous verification of descriptive (aka extensional) models is not a dead-end. Of course these models can never be proven true because there is no finite scope against which they could be checked; but it doesn’t mean that nothing can be done to improve their reliability:


How to check for secure landings

  • Correctness: How to verify that all the relevant individuals and features are taken into account. That can only be achieved empirically by building models open to falsification.
  • Consistency: How to verify that the symbolic descriptions (categories and connectors) are complete, coherent, and non-redundant across models and abstraction levels. That can be formally verified, as sketched after this list.
  • Alignment: How to verify that current and required business processes are to be seamlessly and effectively supported by systems architectures. That can be managed by introducing a level of indirection, as illustrated by MDA with platform independent models (PIMs) set between computation independent (CIMs) and platform specific (PSMs) ones.
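Here is a minimal sketch, with a deliberately toy model, of what the formal part of such checks could look like; the correctness criterion, being empirical, stays out of reach of code:

```python
# Categories and connectors as a bare-bones descriptive model; the checks
# mirror the consistency criterion: complete, coherent, non-redundant.

model = {
    "categories": {"Customer", "Policy", "Claim"},
    "connectors": [("Customer", "holds", "Policy"),
                   ("Policy", "covers", "Claim"),
                   ("Policy", "covers", "Claim")],  # redundant on purpose
}

def check_consistency(model):
    issues = []
    categories = model["categories"]
    # Completeness: every connector end must denote a known category.
    for source, label, target in model["connectors"]:
        for end in (source, target):
            if end not in categories:
                issues.append(f"dangling reference: {end}")
    # Non-redundancy: no duplicate connectors.
    seen = set()
    for connector in model["connectors"]:
        if connector in seen:
            issues.append(f"redundant connector: {connector}")
        seen.add(connector)
    return issues

print(check_consistency(model))  # flags the duplicated 'covers' connector
```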

Once established on secure grounds, models can be used to ensure soft landings.

Models for Soft Landings

Set within model based system engineering frameworks, models will help to replace piecemeal application updates with seamless architecture modernization:

  • Systems: using models shifts the focus of change from hardware to software.
  • Enterprise: models help to factor out the role of organization and regulations.
  • Project management: models provide the necessary hinge between agile and phased projects, the former for business driven applications, the latter for architecture oriented ones. Combining both approaches will ensure that lean and just-in-time processes will not be sacrificed to systems modernization.

Seamless architectures modernization (a) vs Piecemeal applications updates (b).

More generally, and more importantly, models are the option of choice (if not the only one) for enterprise knowledge management (a minimal traceability sketch follows this list):

  • Business: Computation independent models (CIMs), employed to trace, justify and rationalize business strategies and processes portfolios.
  • Systems: Platform specific models (PSMs), employed to trace, justify and rationalize technical alternatives and decisions.
  • Decision-making and learning: Platform independent models (PIMs), employed to align business and systems and support enterprise architecture governance.
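The sketch below, with hypothetical element names, shows how a technical artifact could be traced back to its business justification across the three kinds of models:

```python
from dataclasses import dataclass

@dataclass
class ModelElement:
    name: str
    layer: str      # "CIM", "PIM", or "PSM"
    rationale: str  # the knowledge to be traced, justified, rationalized
    refines: "ModelElement | None" = None  # link to the layer above

# A business obligation traced down to a platform-specific decision.
process = ModelElement("claim-handling", "CIM", "regulatory obligation")
service = ModelElement("ClaimService", "PIM", "aligns process with systems", process)
module = ModelElement("claim_service.py", "PSM", "fits the current platform", service)

def trace(element):
    """Walk back from a technical artifact to its business justification."""
    while element:
        print(element.layer, element.name, "--", element.rationale)
        element = element.refines

trace(module)
```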

And knowledge management is arguably the primary factor for successful comprehensive modernization.

Strategic Decision-making: Cash or Crash

Governance is all about risks and decision-making, but investing in truly fail-safe systems for airlines or air traffic control can be likened to a short bet on Armageddon, and that cannot be easily framed in a neat cost-benefit analysis. But that may be the very nature of strategic decision-making: not amenable to ROI but aiming at risk assessment and the development of the policies apt to contain and manage those risks. That would be impossible without models.


Agile Collaboration & Social Creativity

February 22, 2016

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.

Brains need some breathing space

Open-plan offices, collaboration, and knowledge workers’ creativity

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use, exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not, they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking example from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles that individuals of a living species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …

Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” with “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing (a minimal sketch follows this list):

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
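The sketch below renders such a cycle, with toy descriptions and a stand-in for the knowledge worker’s decision:

```python
# Cycle invariants: the symbolic descriptions of the state of affairs.
descriptions = {"contexts": {"market: stable"},
                "concerns": {"inventory costs"},
                "means": {"on-line sales data"}}

def iteration(descriptions, batch):
    """One iteration: process data into information, monitor changes."""
    information = [d for d in batch if d not in descriptions["contexts"]]
    return information, bool(information)

def knowledge_cycle(descriptions, data_stream, decide):
    for batch in data_stream:
        information, changed = iteration(descriptions, batch)
        # Exit condition: a decision commits a change in the state of
        # affairs, which in turn adjusts the symbolic descriptions.
        if changed and decide(information):
            descriptions["contexts"].update(information)
            return descriptions

decide = lambda info: True  # stand-in for the knowledge worker
stream = [["market: stable"], ["market: shifting"]]
print(knowledge_cycle(descriptions, stream, decide))
```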

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in driving seats (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility of inserting knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on-the-fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is the means to an end, for the latter it’s the end in itself, with collaboration as means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short, and intelligent collaboration schemes are to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers (a minimal mediation sketch follows this list), e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers personal and social circles.
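The sketch assumes a toy priority scheme: solicitations from the social tier are filtered and timed, and only presented on the worker’s own initiative:

```python
import heapq

class PersonalTier:
    """First tier: the worker keeps control of content and tempo."""
    def __init__(self):
        self.do_not_disturb = True
        self.inbox = []  # deferred solicitations, ordered by priority

    def receive(self, priority, message):
        if self.do_not_disturb:
            heapq.heappush(self.inbox, (priority, message))  # filter and time
        else:
            print("now:", message)

    def take_a_break(self):
        # Prompts from the social tier are presented only when the
        # worker's thinking stream allows it.
        self.do_not_disturb = False
        while self.inbox:
            _, message = heapq.heappop(self.inbox)
            print("deferred:", message)

worker = PersonalTier()
worker.receive(2, "comment on shared model")  # social-tier solicitations
worker.receive(1, "review proposition from peer")
worker.take_a_break()
```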

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration spaces, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used indifferently by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organisation.


Operational Intelligence & Decision Making

January 18, 2016


According to a leading tools provider operational intelligence (OI) is the ability to “discover and analyze relationships between business events and corresponding IT events”.


Operational Decision-making (Sigmar Polke)

From a marketing perspective, the moniker suggests some kind of cross-breeding between operational research, artificial intelligence, and real-time analytics. Yet, behind vendor dressing, problems and policies remain the ones traditionally dealt with by decision-making and knowledge management, and as far as marketing is concerned, pitches will hardly affect the assessment of field professionals.

Nevertheless, functional pitches may have a deeper influence if they try to outline the aims of operational intelligence to the people directly involved, affecting the way problems are understood and dealt with. That may be the case if business and system events are seen as being on a par: overlooking the directed dependency between actual events and their systems counterparts can critically hamper the very capabilities of systems decision-making.

Facts, Data, & Information

The new connected world of human brains and smart things has scaled down space and time by orders of magnitude, up to the point that events seem to come out as soon as they happen, wherever that may be. Facts and updates that once came in as discrete and manageable batches of information are now bursting continuously and massively as seamless streams of data that have to be processed on-the-fly into information, lest they be cannibalized by ambient noise. That new configuration blurs the distinction between operational data (pushed, shallow, transient) and underlying information (pulled, deep, persistent), making it unworkable, if not meaningless altogether.

Taking inventory decisions as an example, traditional schemes rely on periodic readings of actual inventories and sales, crossed with market foresight. Now, with on-line sales and the internet of things, real-time data can be used to build on-the-fly indicators whose biases and inaccuracies can be dynamically readjusted on the basis of information built on hindsight. At any given time (t), decision-makers will be presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).
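A minimal sketch of that revision scheme may help; the bias, weights, and windows below are assumptions, chosen only to make the mechanics concrete:

  # Toy revision scheme for on-the-fly inventory indicators. At time t the
  # decision-maker sees the actual reading a, first-pass estimates b of
  # recent periods, and hindsight-revised estimates c of older periods.

  def initial_estimate(raw_reading: float, bias: float = 0.1) -> float:
      """First-pass (b) estimate: built on the fly, knowingly biased."""
      return raw_reading * (1 + bias)

  def revised_estimate(raw_reading: float, later_readings: list) -> float:
      """Hindsight (c) estimate: readjusted once later data is available."""
      return (raw_reading + sum(later_readings)) / (1 + len(later_readings))

  readings = [100.0, 104.0, 98.0, 101.0]               # periods t-3 .. t
  a = readings[-1]                                     # actual observation at t
  b = [initial_estimate(r) for r in readings[-3:-1]]   # b1, b2: recent, first-pass
  c = revised_estimate(readings[0], readings[1:])      # c: oldest, revised
  print(a, b, c)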



Set along this framework, the debate about big data can be misleading as it puts the focus on the quantity of data feeding the processes, overlooking the process itself and the distinction between data, information, and knowledge.

Information, Knowledge, & Decision-making

Generally speaking, the distinction between data and information can be set with reference to time and context, data being instant and standalone, and information associated with a shelf life and domain. With regard to decision-making, it would mean that data can be directly used within the context of current activities and circumstances; e.g whereas on-line sales data may (or may not) be used directly (i.e despite inaccuracies and biases) to allocate inventories across depots, it has to be “mined” into consolidated information before being used in the broader perspective of inventory planning.
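The two paths can be sketched as follows; the functions and rules (allocate, consolidate) are hypothetical, standing for the direct use of data on one hand, its consolidation into time-framed information on the other hand:

  # Two hypothetical paths for the same on-line sales data: direct use for
  # operational allocation (data), vs consolidation over a time-frame for
  # planning (information).

  from statistics import mean

  sales = [("depotA", 12), ("depotB", 30), ("depotA", 7)]   # instant, standalone

  def allocate(sales_events: list) -> dict:
      """Operational path: use raw data directly, inaccuracies included."""
      alloc = {}
      for depot, qty in sales_events:
          alloc[depot] = alloc.get(depot, 0) + qty
      return alloc

  def consolidate(sales_events: list, period: str) -> dict:
      """Planning path: 'mine' data into information tied to a time-frame."""
      per_depot = {}
      for depot, qty in sales_events:
          per_depot.setdefault(depot, []).append(qty)
      return {d: {"period": period, "avg": mean(q)} for d, q in per_depot.items()}

  print(allocate(sales))
  print(consolidate(sales, "2016-W03"))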

Compared to the transition between data and information, which is carried out by adding time and context, the one between information and knowledge is best understood in terms of decision-making.

Information is obtained by anchoring data to time-frames and contexts, knowledge is acquired by putting information to use.

Decisions are best defined as commitments set against some unknown circumstances: somebody, somewhere, or sometime. First, it ensues that decision-making calls for specific and timed information that has to be maintained up-to-date until decisions are taken. Then, taking decisions introduces some irreversible change in the state of affairs or expectations, potentially making all relevant information obsolete. So it may be argued that decisions are what transform information into knowledge.
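That life-cycle can be made concrete with a short sketch; class and attribute names are assumptions, illustrating only how a commitment renders its supporting information obsolete:

  # Sketch of the decision/information life-cycle: information is kept
  # up-to-date until the decision is taken, then flagged obsolete.

  import time

  class TimedInformation:
      def __init__(self, content: str, shelf_life_s: float):
          self.content = content
          self.timestamp = time.time()
          self.shelf_life_s = shelf_life_s
          self.obsolete = False

      def is_usable(self) -> bool:
          return not self.obsolete and (time.time() - self.timestamp) < self.shelf_life_s

  def decide(commitment: str, info: TimedInformation) -> str:
      if not info.is_usable():
          raise ValueError("stale information: refresh before committing")
      info.obsolete = True          # the commitment changes the state of affairs
      return f"committed: {commitment} (based on {info.content})"

  stock = TimedInformation("depotA inventory = 55", shelf_life_s=3600)
  print(decide("ship 40 units from depotA", stock))
  print(stock.is_usable())          # False: the decision made it obsolete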

Operational Intelligence: Objectives & Tools

Assuming decisions mark the nexus between information and knowledge, operational intelligence could be defined as the ability to put information to use, that ability being supported by the analysis of the relationships between business events and corresponding IT events.

Far from being academic, that distinction is essentially pragmatic as it marks the boundary between OI objectives and tools capabilities:

  • The aim of OI is to make sense (and profit) from the dynamic relationship between business (aka external) events on one hand, business objectives and enterprise capabilities on the other hand.
  • The role of supporting tools is to define and manage IT (aka internal) events used to reflect external ones and analyze them.

Whereas business events (red) represent change in the state of affairs, IT events (blue) only represent changes in associated information.

Since IT events are artifacts built on purpose, there isn’t much to discover or analyze about them; not to mention that confusing business events with their IT shadows is bound to undermine the whole decision-making process. So what is at stake for OI is how to design IT events so as to trail the relevant business events in a timely and accurate fashion.
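The directed dependency could be sketched as follows; the types and the lag check are hypothetical, serving only to show IT events as designed shadows of business ones:

  # IT events are designed artifacts that trail business events,
  # never the other way round. Names and thresholds are assumptions.

  import time
  from dataclasses import dataclass

  @dataclass
  class BusinessEvent:              # change in the actual state of affairs
      kind: str
      occurred_at: float

  @dataclass
  class ITEvent:                    # change in the associated information only
      source: BusinessEvent
      recorded_at: float

  def trail(event: BusinessEvent, max_lag_s: float = 5.0) -> ITEvent:
      """Record an IT shadow of a business event, checking timeliness."""
      now = time.time()
      if now - event.occurred_at > max_lag_s:
          print("warning: IT event lags its business event; decisions may misfire")
      return ITEvent(source=event, recorded_at=now)

  sale = BusinessEvent("sale", occurred_at=time.time())
  print(trail(sale))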

Operational Intelligence & Actual Knowledge

As already noted, operational intelligence (OI) is about decision-making, which entails changing the state of objects, processes, or expectations. Compared to knowledge management (KM) which may or may not be time-related, OI is inherently bound to the actual state of affairs: on one hand it relies on specific and timed information, on the other hand it renders that information obsolete when it triggers decisions.

At the risk of oversimplification, operational intelligence can first be understood as a combination of traditional disciplines:

  • Data mining filters facts and events, captures data, and analyzes it into information.
  • Knowledge management charts information with regard to business objectives and enterprise capabilities.
  • Decision-making manages time-stamps and plans commitments subject to accuracy and likelihood.

But the specificity of operational intelligence is to be found in the way these functions are intertwined and cross-fed by operational concerns.

To begin with, data mining can be dynamically adjusted depending on what is needed for decision-making, and when. As a corollary, with data thus “cooked” in advance, some decisions can be taken directly, bypassing the mediation (and delays) of information processing. From a cognitive point of view that would be the equivalent of non-symbolic (aka implicit) knowledge processed by neural networks.
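The two lanes could be sketched as follows; function names and thresholds are illustrative assumptions:

  # A fast lane where pre-adjusted data triggers decisions directly, and a
  # slow lane going through explicit information processing.

  from typing import Optional

  def fast_lane(signal: float) -> Optional[str]:
      """Implicit, 'pre-cooked' rule: decide directly from the data."""
      return "reorder now" if signal > 0.9 else None

  def slow_lane(signal: float, context: dict) -> str:
      """Explicit path: turn data into information, then decide."""
      information = {"signal": signal, "period": context["period"]}
      return f"plan review of {information}"

  def operational_decision(signal: float, context: dict) -> str:
      return fast_lane(signal) or slow_lane(signal, context)

  print(operational_decision(0.95, {"period": "2016-W03"}))  # fast, automated
  print(operational_decision(0.40, {"period": "2016-W03"}))  # slow, mediated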

Parceling out OI objectives

Decision-making and differentiated knowledge management

Conversely, information processing could benefit from operational feedback so that knowledge management would be driven by business value, and the supporting information weighted by timing and shelf-life considerations. Whereas part of it could be done through implicit connections, it would be more comprehensively and explicitly achieved through symbolic representations.

Operational Intelligence: Signals vs Symbols

Assuming that intelligence is the ability to figure out situations and solve problems, one may conclude that it is inherently operational. Along the same reasoning, if knowledge is information put to use, it may be implicit as well as explicit.

Nonetheless, the merit of operational intelligence is to bring symbolic and non-symbolic knowledge under a single functional roof: the former explicit, mediated by semantic constructs, used to weight information and support managed decisions; the latter implicit, relying on direct associations between actual objects or phenomena, and supporting automated decisions.
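That single roof could be sketched as follows; the rule table and weights are hypothetical, the scorer standing in for a neural network:

  # Symbolic lane: explicit rules, auditable, supporting managed decisions.
  # Non-symbolic lane: learned associations, opaque, supporting automated ones.

  RULES = {("inventory_low", "demand_high"): "managed: escalate to planner"}

  WEIGHTS = {"inventory_low": 0.6, "demand_high": 0.5}   # learned associations

  def symbolic(facts: tuple):
      """Explicit: mediated by semantic constructs."""
      return RULES.get(facts)

  def non_symbolic(signals: dict) -> str:
      """Implicit: direct association between signals, no explanation."""
      score = sum(WEIGHTS.get(k, 0.0) * v for k, v in signals.items())
      return "automated: reorder" if score > 0.8 else "automated: wait"

  print(symbolic(("inventory_low", "demand_high")))
  print(non_symbolic({"inventory_low": 1.0, "demand_high": 0.9}))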


