Archive for the ‘Enterprise governance’ Category

Enterprise Architects Basic Tenets

July 19, 2018

Preamble

Humans often expect concepts to come with innate if vague meanings, only to be drawn into endless and futile controversies about definitions. Going the other way would be a better option: start with differences, weed out the irrelevant ones, and use what remains to advance.


How to keep business rolling (Tamar Ettun)

Concerning enterprises, that would mean starting with the difference between business and architecture, and proceeding with the wholeness of data, information, and knowledge.

Business Architecture is an Oxymoron

Business being about time and competition, success is not to be found in recipes but depends on particularities with regard to objectives, use of resources, and timing. These drives are clearly at odds with the rationale of architectures, namely shared, persistent, and efficient structures and mechanisms. As a matter of fact, dealing with the conflicting nature of business and architecture concerns can be seen as a key success factor for enterprise architects, with information standing at the nexus.

Data as Flow, Information as Asset, Knowledge as Service

Paradoxically, the need for a seamless integration of data, information, and knowledge means that the distinction between them can no longer be overlooked.

  1. Data is captured through continuous and heterogeneous flows from a wide range of sources.
  2. Information is built by adding identity, structure, and semantics to data. Given its shared and persistent nature, it is best understood as an asset.
  3. Knowledge is information put to use through decision-making. As such it is best understood as a service.

Ensuring the distinction as well as the integration must be a primary concern of enterprise architects.
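
The three-way distinction can be illustrated with a minimal Python sketch; all names, values, and the threshold are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

# Data: anonymous readings captured from a continuous, heterogeneous flow.
raw_feed = [("sensor-42", "2018-07-19T10:00", 17.3),
            ("sensor-42", "2018-07-19T10:05", 21.8)]

# Information: data given identity, structure, and semantics; a shared,
# persistent asset.
@dataclass
class Reading:
    device_id: str        # identity
    timestamp: datetime   # structure
    temperature_c: float  # semantics (unit made explicit)

readings = [Reading(d, datetime.fromisoformat(t), v) for d, t, v in raw_feed]

# Knowledge: information put to use through decision-making; a service.
def overheating_alerts(items, threshold=20.0):
    """Decision service built on top of the information asset."""
    return [r for r in items if r.temperature_c > threshold]

print(overheating_alerts(readings))  # -> the 21.8 reading only
```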

Sustainable Success Depends on a Balancing Act

Success in business is an unfolding affair, challenged on one hand by circumstances and competition, consolidated on the other by experience and lessons learnt. Meeting challenges while warding off growing complexity will depend on business agility and on the versatility and plasticity of organizations and systems. That should be the primary objective of enterprise architects.


Squaring Software To Value Chains

July 4, 2018

Preamble

Digital environments and the ubiquity of software in business processes introduce a new perspective on value chains and the assessment of supporting applications.


Business & Supporting Processes (Michel Blazy)

At the same time, as software designs cannot be detached from architecture capabilities, the central question remains how to allocate costs and benefits between primary and support activities.

Value Chains & Activities

The concept of value chain introduced by Porter in 1985 is meant to encompass the set of activities contributing to the delivery of a valuable product or service for the market.


Porter’s generic model

Taking from Porter's generic model, various value chains have been refined with business-specific categories for primary and support activities.

Whatever their merits, these approaches are essentially static and fall short when the objective is to trace changes induced by business developments; and that flaw may become critical with the generalization of digital business environments:

  • Given the role and ubiquity of software components (not to mention smart ones), predefined categories are of little use for impact analysis.
  • When changes in value chains are considered, the shift of corporate governance towards enterprise architecture puts the focus on the contribution of assets, cutting down the relevance of activities.

Hence the need to take into account changes, software development, and enterprise architecture capabilities.

Value Added & Software Development

While the growing interest in value chains in software engineering is bound up with agile approaches and business-driven developments, the issue can be put in the broader perspective of project planning.

With regard to assessment, stakeholders start with business opportunities and look at supporting systems from a black-box perspective; in return, software providers are to analyze requirements from a white-box perspective, and estimate the corresponding development effort and delivery time.

Assuming transparency and good faith, both parties are meant to eventually align expectations and commitments with regard to features, prices, and delivery.

With regard to policies, stakeholders put the focus on returns on investment (ROI), obtained from total cost of ownership, quality of service, and timely delivery. Providers for their part try to minimize development costs while taking into account effective use of resources and opportunity costs. As it happens, those objectives may be pursued as non-zero-sum games:

  • Business stakeholders forecast the actualized (discounted) returns (a) to be expected from the functionalities under consideration (b).
  • Providers consider the solutions (b’) and estimate the corresponding actualized costs (a’).
  • Stakeholders and providers agree on functionalities, prices and deliveries (c).

Developing the value chain

Assuming that business and engineering environments are set within different time-frames, there should be room for non-zero-sum games winding up in win-win adjustments on features, delivery, and prices.
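
A toy calculation may make the non-zero-sum reasoning concrete; all figures are hypothetical:

```python
# Hypothetical figures: the functionalities under consideration (b) are worth
# 500 in actualized returns (a) to stakeholders; the provider's solution (b')
# comes with actualized costs (a') of 300.
returns_a, costs_a_prime = 500, 300

# Any agreed price (c) between costs and returns leaves both parties better
# off: the game is non-zero-sum, with a joint surplus of 200 to be shared.
for price_c in (350, 400, 450):
    stakeholder_surplus = returns_a - price_c
    provider_surplus = price_c - costs_a_prime
    print(price_c, stakeholder_surplus, provider_surplus)
# -> (350, 150, 50), (400, 100, 100), (450, 50, 150): all win-win outcomes.
```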

Continuous vs Phased Alignments

Notwithstanding the constraints of strategic planning, business processes are by nature opportunistic, and their ability to be adjusted to circumstances is becoming all the more critical with the generalization of digital business environments.

Broadly speaking, the squaring of supporting applications to business value can be done continuously or by phases:

  • Phased alignments start with some written agreements with regard to features, delivery, and prices before proceeding with development phases.
  • Continuous alignment relies on direct collaboration and iterative development to shape applications according to business needs.

Phased vs Continuous Adjustments

Beyond sectarian controversies, each approach has its use:

  • Continuous schemes are clearly better at harnessing value chains, provided that project teams are given full project ownership, with decision-making free of external dependencies or delivery constraints.
  • Phased schemes are necessary when value chains cannot be uniquely sourced, as they take root in different organizational units, or when deliveries are contingent on technical constraints.

In any case, it’s not a black-and-white alternative as work units and projects’ granularity can be aligned with differentiated expectations and commitments.

Work Units & Architecture Capabilities

While continuous and phased approaches are often opposed under the guise of Agile vs Waterfall, that understanding is misguided as it extends the former to a motley of self-appointed agile schemes and reduces the latter to an ill-famed archetype.

Instead, a reasoned selection of development models should be contingent on the problems at hand; that can best be achieved by defining work units bottom-up with regard to the capabilities targeted by requirements:


Work units should be defined with regard to the capabilities targeted by requirements

Development patterns could then be defined with regard to architecture layers (organization and business, systems functionalities, platforms implementations) and capability footprints:

  • Phased: work units are aligned with architecture capabilities, e.g business objects (a), business logic (b), business processes (c), user interfaces (d).
  • Iterative: work units are set across capabilities and defined dynamically according to development problems.

A common development framework for phased and iterative projects

That would provide a development framework supporting the assessment of iterative as well as phased projects, paving the way for comprehensive and integrated impact analysis.
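
A minimal sketch of that bottom-up approach, with hypothetical work units and capability names, could look like this in Python:

```python
# Hypothetical work units tagged with the capabilities targeted by their
# requirements (a-d in the list above).
CAPABILITIES = ("business objects", "business logic",
                "business processes", "user interfaces")

def development_pattern(work_units):
    """Phased if every work unit is aligned with a single capability,
    iterative if work units are set across capabilities."""
    footprints = work_units.values()
    return "phased" if all(len(f) == 1 for f in footprints) else "iterative"

project = {
    "wuCustomer": {"business objects"},
    "wuRating": {"business logic", "business processes"},  # cross-capability
}
print(development_pattern(project))  # -> iterative
```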

Value Chains & Architecture Capabilities

As far as software engineering is concerned, the issue is less the value chain itself than its change, namely how value is to be added along the chain.

Given the generalization of digitized and networked business environments, that can best be achieved by harnessing value chains to enterprise architecture capabilities, as can be demonstrated using the Caminao layout of the Zachman framework.

To summarize, the transition to this layout is carried out in two steps:

  1. Conceptually, Zachman’s original “Why” column is translated into a line running across column capabilities.
  2. Graphically, the five remaining columns are replaced by embedded pentagons, one for each architecture layer, with the new “Why” line set as an outer layer linking business value to architecture capabilities:

Mapping value chains to enterprise architectures capabilities

That apparently humdrum transformation entails a significant shift in focus and practicality:

  • The focus is put on organizational and business objectives, masking those associated with systems and platforms layers.
  • It makes room for differentiated granularity in the analysis of value, some items being anchored to specific capabilities, others involving cross dependencies.

Value chains can then be charted from business processes to supporting architectures, with software applications in between.

Impact Analysis

As pointed out above, the crumbling of traditional fences and the integration of enterprise architectures into digital environments undermine the traditional distinction between primary and support activities.

To be sure, business drive is more than ever the defining factor for primary activities; and computing more than ever the archetype of supporting ones. But in between the once clear-cut distinctions are being blurred by a maze of digital exchanges.

In order to avoid a spaghetti heap of undistinguished connections, value chains are to be “colored” according to the nature of links:

  • Between architecture capabilities: business and organization (enterprise), systems functionalities, or platforms and technologies.
  • Between architecture layers: engineering processes.

Tracing value chains across architectures layers

When set within that framework, value chains could be navigated in both directions:

  • For the assessment of applications developed iteratively: business value could be compared to development costs and architecture assets’ depreciation.
  • For the assessment of features (functional or non-functional) to be shared across business applications: value chains will provide a principled basis for standard accounting schemes.

Combined with model-based systems engineering, that could significantly enhance the integration of enterprise architecture into corporate governance.
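
As an illustration, here is a hedged Python sketch of such “colored” value chains, with hypothetical activity and asset names, keeping the two directions of navigation apart:

```python
from collections import defaultdict

# Hypothetical links, each "colored" by its nature: between architecture
# capabilities, or between architecture layers (engineering processes).
links = [
    ("rateCustomerCredit", "creditScoringService", "systems functionalities"),
    ("creditScoringService", "scoringCluster", "platforms and technologies"),
    ("rateCustomerCredit", "ucRateCustomerCredit", "engineering processes"),
]

downstream, upstream = defaultdict(list), defaultdict(list)
for source, target, color in links:
    downstream[source].append((target, color))
    upstream[target].append((source, color))

# Downstream: from a business activity to the assets and developments
# supporting it (costs, depreciation).
print(downstream["rateCustomerCredit"])
# Upstream: from a shared asset back to the business activities it serves
# (principled basis for accounting schemes).
print(upstream["scoringCluster"])
```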

Model Driven Architecture & Value Chains

Model driven architecture (MDA) can be seen as the main (if not the only) documented example of model based systems engineering. Its taxonomy organizes models within three layers:

  • Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
  • Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
  • Platform specific models (PSMs) describe systems components depending on platforms and technologies.

Engineering processes can then be phased along architecture layers (a), or carried out iteratively for each application (b).

When set across activities, value chains could be engraved in CIMs and refined with PIMs and PSMs (a). Otherwise, i.e with business value neatly rooted in single business units, value chains could remain implicit along software development (b).
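
A minimal Python sketch (all names hypothetical) may illustrate how traceability links across the three MDA layers would let value chains engraved in CIMs be refined through PIMs and PSMs:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of MDA's three model layers with traceability links.
@dataclass
class PSM:                 # platform specific: components bound to technologies
    component: str
    platform: str

@dataclass
class PIM:                 # platform independent: system functionalities
    functionality: str
    refined_by: List[PSM] = field(default_factory=list)

@dataclass
class CIM:                 # computation independent: organization and processes
    process: str
    supported_by: List[PIM] = field(default_factory=list)

# A value chain engraved at CIM level can be followed down to its refinements:
billing = CIM("invoiceCustomer",
              supported_by=[PIM("computeInvoice",
                                refined_by=[PSM("InvoiceService", "JEE")])])
```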


GDPR Ontological Primer

May 10, 2018

Preamble

European Union’s General Data Protection Regulation (GDPR), to come into effect this month, is a seminal and momentous milestone for data privacy.

Nothing Personal (Arthur Szyk)

Yet, as reported by Reuters correspondents, European enterprises and regulators are not ready; more worryingly, few (except consultants) are confident about GDPR direction.

Misgivings and uncertainties should come as no surprise considering GDPR’s two innate challenges:

  • Regulating privacy rights represents a very ambitious leap into a digital space now at the core of corporate business strategies.
  • Compliance will not be put under a single authority but be overseen by an assortment of national and regional authorities across the European Union.

On that account, ontologies appear as the best (if not the only) conceptual approach able to bring contexts (EU nations), concerns (business vs privacy), and enterprises (organization and systems) into a shared framework.

A workbench built with the Caminao ontological kernel is meant to explore the scope and benefits of that approach, with a beta version (Protégé/OWL 2) available for comments on the Stanford/Protégé portal using the link: Caminao Ontological Kernel (CaKe_GDPR).

Enterprise Architectures & Regulations

Compared to domain-specific regulations, GDPR is a governance-oriented regulation set across business concerns and enterprise organization; but unlike similarly oriented ones like accounting, GDPR aims at the nexus of business competition, namely the processing of data into information and knowledge. With such a strategic stake, compliance is bound to become a game-changer cutting across business intelligence, production systems, and decision-making. Hence the need for an integrated, comprehensive, and consistent approach to the different dimensions involved:

  • Concepts upholding businesses, organizations, and regulations.
  • Documentation with regard to contexts and statutory basis.
  • Regulatory options and compliance assessments.
  • Enterprise systems architecture and operations.

Moreover, as for most projects affecting enterprise architectures, carrying through GDPR compliance is to involve continuous, deep, and wide ranging changes that will have to be brought off without affecting overall enterprise performances.

Ontologies arguably provide a conclusive solution to the problem, if only because there is no other way to bring code, models, documents, and concepts under a single roof. That could be achieved by using ontology profiles to frame GDPR categories along enterprise architecture models and components.


Basic GDPR categories and concepts (black color) as framed by the Caminao Kernel

Compliance implementation could then be carried out iteratively across four perspectives:

  • Personal data and managed information.
  • Lawfulness of activities.
  • Time and events.
  • Actors and organization.

Data & Information

To begin with, GDPR defines ‘personal data’ as “any information relating to an identified or identifiable natural person (‘data subject’)”. Insofar as logic is concerned, that definition implies an equivalence between ‘data’ and ‘information’, an assumption clearly challenged by the onslaught of big data: if proofs were needed, the Cambridge Analytica episode demonstrates how easily raw data can become a personal affair. Hence the need to keep an ontological level of indirection between regulatory intents and the actual semantics of data as managed by information systems.


Managing the ontological gap between regulatory understandings and compliance footprints

Lexical ambiguities set apart, the question is not so much about databases of well-identified records as about the flows of data continuously processed: if identities and ownership are usually set upfront by business processes, attributions may have to be credited to enterprise know-how if and when carried out through data analytics.

Given that the distinctions are neither uniform, exclusive, nor final, ontologies will be needed to keep tabs on moves and motives. OWL 2 constructs (cf annex) could also help, first to map GDPR categories to relevant information managed by systems, second to sort out natural data from nurtured knowledge.

Activities & Purposes

Given footprints of personal data, the objective is to ensure the transparency and traceability of the processing activities subject to compliance.

Setting apart specific add-ons for notification and personal access (see below for events), charting compliance footprints is to be a complex endeavor: as there is no reason to assume some innate alignment of intended (regulation) and actual (enterprise) definitions, deciding where and when compliance should apply potentially calls for a review of all processing activities.

After taking into account the nature of activities, their lawfulness is to be determined by contexts (‘purpose limitation’ and ‘data minimization’) and time-frames (‘accuracy’ and ‘storage limitation’). And since lawfulness is meant to be transitive, a comprehensive map of the GDPR footprint is to rely on the logical traceability and transparency of the whole information systems, independently of GDPR.

That is arguably a challenging long-term endeavor, all the more so given that some kind of Chinese Wall has to be maintained around enterprise strategies, know-how, and operations. It ensues that an ontological level of indirection is again necessary between regulatory intents and effective processing activities.

Along that line of reasoning, compliance categories, defined on their own, are first mapped to categories of functionalities (e.g authorization) or models (e.g use cases).


Compliance categories are associated upfront to categories of functionalities (e.g authorization) or models (e.g use cases).

Then, actual activities (e.g “rateCustomerCredit”) can be progressively brought into the compliance fold, either with direct associations with regulations or indirectly through associated models (e.g “ucRateCustomerCredit” use case).


Compliance as carried out through Use Case

The compliance backbone can be fleshed out using OWL 2 mechanisms (see annex) in order to:

  • Clarify the logical or functional dependencies between processing activities subject to compliance.
  • Qualify their lawfulness.
  • Draw equivalence, logical, or functional links between compliance alternatives.

That is to deal with the functional compliance of processing activities; but the most far-reaching impact of the regulation may come from the way time and events are taken into account.

Time & Events

As noted above, time is what makes the difference between data and information, and setting rules for notification makes that difference lawful. Moreover, by adding time constraints to the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones. That apparently incidental departure echoes the immersion of systems into digitized business environments, making all time-scales equal whatever their nature. Such flattening is to induce crucial consequences for enterprise architectures.

That shift together with the regulatory intent are best taken into account by modeling events as changes in expectations, physical objects, processes execution, and symbolic objects, with personal data change belonging to the latter.


Mapping internal (symbolic) and external (actual) events is a critical element of GDPR compliance

Putting apart events specific to GDPR (e.g data breaches), compliance with regard to accuracy and storage limitation regulations will require that all events affecting personal data:

  • Are set in time-frames, possibly overlapping.
  • Have notification constraints properly documented.
  • Have likelihood and costs of potential risks assessed.

As with data and activities, OWL 2 constructs are to be used to qualify compliance requirements.
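
As an illustration, a minimal Python sketch of such an event record (all names and figures hypothetical; the 72-hour limit for notifying the supervisory authority of a breach is set by the regulation) could combine the three requirements:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional, Tuple

@dataclass
class PersonalDataEvent:
    kind: str                                  # e.g. "update", "breach"
    time_frame: Tuple[datetime, datetime]      # possibly overlapping others
    notification_deadline: Optional[datetime]  # documented constraint, if any
    risk_likelihood: float                     # assessed probability of harm
    risk_cost: float                           # assessed potential cost

detected = datetime(2018, 5, 25, 9, 30)
breach = PersonalDataEvent(
    kind="breach",
    time_frame=(datetime(2018, 5, 25, 9, 0), detected),
    # notification to the supervisory authority within 72 hours:
    notification_deadline=detected + timedelta(hours=72),
    risk_likelihood=0.2,
    risk_cost=50_000.0,
)
```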

Actors & Organization

GDPR introduces two specific categories of actors (aka roles): one (data subject) for natural persons, and one for actors set by organizations, either specifically for GDPR assignment, or by delegation to already defined actors.


GDPR roles can be set specifically or delegated

OWL 2 can then be used to detail how regulatory roles can be delegated to existing ones, enabling a smooth transition and a dynamic adjustment of enterprise organization with regulatory compliance.

It must be stressed that the semantic distinction between identified agents (e.g natural persons) and the roles (aka UML actors) they play in processes is of particular importance for GDPR compliance because who (or even what) is behind an actor interacting with a system is to remain unknown to the system until the actor can be authentically identified. If that ontological lapse is overlooked there is no way to define and deal with security, confidentiality or privacy regulations.
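
A bare-bones Python sketch (hypothetical names throughout, with the identity provider left abstract) can make the distinction explicit:

```python
# Hypothetical sketch: the role (actor) is known upfront, the agent behind
# it only after authentication; identity_provider is left abstract.
class Session:
    def __init__(self, role):
        self.role = role       # e.g. "dataSubject", "dataProtectionOfficer"
        self.agent = None      # who (or what) is behind the actor: unknown

    def authenticate(self, credentials, identity_provider):
        # identity is bound to the session only once authentically established
        self.agent = identity_provider.verify(credentials)

    def may_access_personal_data(self):
        # privacy rules cannot be enforced until the agent is identified
        return self.agent is not None and self.role == "dataSubject"
```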

Conclusion

The use of ontologies brings clear benefits for regulators, enterprise governance, and systems architects.

Without shared conceptual guidelines chances are for the European regulatory orchestra to get lost in squabbles about minutiae before sliding into cacophony.

With regard to governance, bringing systems and regulations into a common conceptual framework is to enable clear and consistent compliance strategies and policies, as well as smooth learning curves.

With regard to architects, ontology-based compliance is to bring cross benefits and externalities, e.g from improved traceability and transparency of systems and applications.

Annex A: Mapping Regulations to Models (sample)

To begin with, OWL 2 can be used to map GDPR categories to relevant resources as managed by information systems:

  • Equivalence: GDPR and enterprise definitions coincide.
  • Logical intersection, union, complement: GDPR categories defined by, respectively, an intersection, union, or difference of enterprise definitions.
  • Qualified association between GDPR and enterprise categories.

Assuming the categories properly identified, the language can then be employed to define the sets of regulated instances:

  • Logical property restrictions, using existential and universal quantification.
  • Functional property restrictions, using joins on attribute values.

Other constructs, e.g cardinality or enumerations, could also be used for specific regulatory constraints.

Finally, some OWL 2 built-in mechanisms can significantly improve the assessment of alternative compliance policies by expounding regulations with regard to:

  • Equivalence, overlap, or complementarity.
  • Symmetry or asymmetry.
  • Transitivity.
  • etc.
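
As an illustration of the constructs listed above, here is a hedged sketch using the owlready2 Python library; all class and property names are hypothetical, and the fragment is meant as a sample, not as part of the CaKe_GDPR kernel:

```python
from owlready2 import get_ontology, Thing, ObjectProperty, TransitiveProperty

onto = get_ontology("http://example.org/gdpr_sample.owl")

with onto:
    class CustomerProfile(Thing): pass          # enterprise categories
    class BrowsingTrace(Thing): pass
    class DataSubject(Thing): pass

    class relatesTo(ObjectProperty):            # qualified association
        range = [DataSubject]

    # GDPR category defined as a logical union of enterprise definitions:
    class PersonalData(Thing):
        equivalent_to = [CustomerProfile | BrowsingTrace]

    # Regulated instances set by an existential property restriction:
    class RegulatedRecord(Thing):
        equivalent_to = [PersonalData & relatesTo.some(DataSubject)]

    # Built-in characteristics, e.g transitivity for lawfulness dependencies:
    class derivesFrom(ObjectProperty, TransitiveProperty): pass
```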

Annex B: Mapping Regulations to Capabilities

GDPR can be mapped to systems capabilities using the well-established Zachman taxonomy, set by crossing architecture capabilities (Who, What, How, Where, When) with layers: enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).


Regulatory Compliance vs Architectures Capabilities

These layers can be extended so as to apply uniformly across external ontologies, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:

Ontologies, capabilities (Who,What,How, Where, When), and architectures (enterprise, systems, platforms).

Such mapping is to significantly enhance the transparency of regulatory policies.


Focus: Individuals in Models

May 2, 2018

Preamble

Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.


Identity Matters (Terracotta Soldiers of Emperor Qin Shi Huang)

Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g if modeling concerns mix business analysis with software engineering, meta-models are introduced as jack-of-all-trades to deal with mixed semantics.

But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.

Partitions & Abstraction

Since models are meant to classify instances with regard to concerns, mixing concerns is to entail mixed classifications, horizontally across domains (e.g business and accounting), or vertically along engineering cycles (e.g business and engineering).

That can be achieved with power-types, meta-classes, or ontologies.

Delegation & Power-types

Given that categories (or classes, or types) represent sets of instances (given, designed, or simulated), they may by themselves be regarded as symbolic instances used to manage features commonly valued by their own members.


Power-types are simultaneously categories and instances.

This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.
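
A minimal Python sketch (hypothetical names and figures) can illustrate the dual standing of power-types:

```python
# CarModel is a power-type: a category whose instances classify cars, and at
# the same time an instance managed as a record in its own right.
class CarModel:
    def __init__(self, name, maintenance_interval_km):
        self.name = name
        # feature commonly valued by all member cars of the model:
        self.maintenance_interval_km = maintenance_interval_km

class Car:
    def __init__(self, vin, model):
        self.vin = vin        # identity of the physical instance
        self.model = model    # membership in the power-type instance

    def due_for_service(self, odometer_km):
        # the shared feature is managed at power-type level
        return odometer_km >= self.model.maintenance_interval_km

clio = CarModel("Clio", 15_000)           # instance at category level
rental = Car("VF1XXXXXXXXXXXXXX", clio)   # instance at business level
print(rental.due_for_service(16_000))     # -> True
```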

Abstraction & Sub-classes

Sub-classes often appear as a way to overcome the difficulty, as illustrated by the Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.


While, in that case, the model fulfills the substitution principle for external consistency, sub-types would fall short if engineering concerns were to be taken into account, because the set of individual car models could then differ depending on the perspective.

Meta-classes & Stereotypes

The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former which is by nature a free pass to abstraction; less so for the latter which is supposed to go the other way toward the specialization of meta-classes according to specific profiles. It ensues that stereotypes should never be used on their own or be extended by another stereotype.

As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs; e.g:

  • Abstract stereotype (a).
  • Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
  • Weak inheritance (aka aspect) between stereotypes (c, f).
  • Meta-constraint used to map Vehicle to methodology stereotype (d).
  • Domain specific connector to stereotype (e).


Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:

  • Individual concepts (car, horse, boat, …) have no clear mooring.
  • Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
  • There is no strong inheritance tying individuals (car models and rental cars) to an identification mechanism.

Assuming that detachment is not an option, the basis of models must be reset.

Ontological Stereotypes

Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.


As it happens, sorting things out makes the whole paraphernalia of meta-classes and stereotypes no longer needed because the semantics of inheritance and associations is set by the nature of individuals.

With individuals solidly rooted in targeted domains, models can then fully serve their purposes.

What’s the Point

It’s worth reminding that models have to be built on purpose, which cannot be achieved without a clear understanding of context and targets. Here are some examples:

Quality arguably comes first:

  • External consistency: to be checked by mapping individuals in analysis categories to big data.
  • Internal consistency: to be checked by mapping individuals in design classes to observed or simulated run-time components.
  • Acceptance: automated test generation.

Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.

Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.

Business intelligence looks the other way, as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.

Finally, maturity assessment and optimization of enterprise processes fully depend on the reliability of their basis.


Collaborative Systems Engineering: From Models to Ontologies

April 9, 2018

Given the digitization of enterprise environments, engineering processes have to be entwined with business ones while being kept in sync with enterprise architectures. That calls for new threads of collaboration taking into account the integration of business and engineering processes as well as the extension to business environments.


Collaboration can be personal and direct, or collective and mediated (Wang Qingsong)

Whereas models are meant to support communication, traditional approaches are already straining when used beyond software generation, that is, for collaboration between humans and CASE tools. Ontologies, which can be seen as a higher form of models, could enable a qualitative leap for systems collaborative engineering at enterprise level.

Systems Engineering: Contexts & Concerns

To begin with contents, collaborations should be defined along three axes:

  1. Requirements: business objectives, enterprise organization, and processes, with regard to systems functionalities.
  2. Feasibility: business requirements with regard to architectures capabilities.
  3. Architectures: supporting functionalities with regard to architecture capabilities.

Engineering Collaborations at Enterprise Level

Since these axes are usually governed by different organizational structures and set along different time-frames, collaborations must be supported by documentation, especially models.

Shared Models

In order to support collaborations across organizational units and time-frames, models have to bring together perspectives which are by nature orthogonal:

  • Contexts, concerns, and languages: business vs engineering.
  • Time-frames and life-cycle: business opportunities vs architecture stability.

Harnessing MBSE to EA

That could be done if engineering models were harnessed to enterprise ones for contexts and concerns, which is to be achieved through the integration of processes.

Processes Integration

As already noted, the integration of business and engineering processes is becoming a key success factor.

For that purpose collaborations would have to take into account the different time-frames governing changes in business processes (driven by business value) and engineering ones (governed by assets life-cycles):

  • Business requirements engineering is synchronic: changes must be kept in line with architectures capabilities (full line).
  • Software engineering is diachronic: developments can be carried out along their own time-frame (dashed line).

Synchronic (full) vs diachronic (dashed) processes.

Application-driven projects usually focus on users’ value and just-in-time delivery; that can best be achieved with personal collaboration within teams. Architecture-driven projects usually affect assets and non-functional features, and therefore require collaboration between organizational units.

Collaboration: Direct or Mediated

Collaboration can be achieved directly or through some mediation, the former being a default option for applications, the latter a necessary one for architectures.


Both can be defined according to basic cognitive and organizational mechanisms and supported by a mix of physical and virtual spaces to be dynamically redefined depending on activities, projects, locations, and organisation.

Direct collaborations are carried out between individuals with or without documentation:

  • Immediate and personal: direct collaboration between 5 and 15 participants with shared objectives and responsibilities. That would correspond to agile project teams (a).
  • Delayed and personal: direct collaboration across teams with shared knowledge but different objectives and responsibilities. That would tally with social network circles (c).

Collaborations

Mediated collaborations are carried out between organizational units through unspecified individual members, hence the need for documentation, models or otherwise:

  • Direct code generation from platform- or domain-specific models (b).
  • Model transformation across architecture layers and business domains (d).

Depending on scope and mediation, three basic types of collaboration can be defined for applications, architecture, and business intelligence projects.


Projects & Collaborations

As it happens, collaboration archetypes can be associated with these profiles.

Collaboration Mechanisms

The agile development model (under various guises) is the option of choice whenever shared ownership and continuous delivery are possible. Application projects can thus be carried out autonomously, with collaborations circumscribed to team members and relying on the backlog mechanism.

The OODA (Observation, Orientation, Decision, Action) loop (and avatars) can epitomize projects combining operations, data analytics, and decision-making.


Collaboration archetypes

Projects set across enterprise architectures cannot be carried out without taking into account phasing constraints. While ill-fated Waterfall methods have demonstrated the pitfalls of procedural solutions, phasing constraints can be dealt with through a roundabout mechanism combining iterative and declarative schemes.

Engineering vs Business Driven Collaborations

With collaborative engineering upgraded at enterprise level, the main challenge is to iron out frictions between application and architecture projects and ensure the continuity, consistency and effectiveness of enterprise activities. That can be achieved with roundabouts used as a collaboration mechanism between projects, whatever their nature:

  • Shared models are managed at roundabout level.
  • Phasing dependencies are set in terms of assertions on shared models.
  • Depending on constraints, projects are carried out directly (1, 3) or enter roundabouts (2), with exits conditioned by the availability of models.

Engineering driven collaboration: roundabout and backlogs

Moreover, with engineering embedded in business processes, collaborations must also bring together operational analytics, decision-making, and business intelligence. Here again, shared models are to play a critical role:

  • Enterprise descriptive and prescriptive models for information maps and objectives.
  • Environment predictive models for data and business understanding.

Business driven collaboration: operations and business intelligence

Whereas both engineering- and business-driven collaborations depend on sharing information and knowledge, the latter have to deal with open and heterogeneous semantics. As a consequence, collaborations must be supported by shared representations and proficient communication languages.

Ontologies & Representations

Ontologies are best understood as models’ backbones, to be fleshed out or detailed according to context and objectives, e.g:

  • Thesaurus, with a focus on terms and documents.
  • Systems modeling, with a focus on integration, e.g Zachman Framework.
  • Classifications, with a focus on range, e.g Dewey Decimal System.
  • Meta-models, with a focus on model based engineering, e.g models transformation.
  • Conceptual models, with a focus on understanding, e.g legislation.
  • Knowledge management, with a focus on reasoning, e.g semantic web.

As such they can provide the pillars supporting the representation of the whole range of enterprise concerns:


Taking a leaf from Zachman’s matrix, ontologies can also be used to differentiate concerns with regard to architecture layers: enterprise, systems, platforms.

Last but not least, ontologies can be profiled with regard to the nature of external contexts, e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Cross profiles: capabilities, enterprise architectures, and contexts.

Ontologies & Communication

If collaborations have to cover engineering as well as business descriptions, communication channels and interfaces will have to combine the homogeneous and well-defined syntax and semantics of the former with the heterogeneous and ambiguous ones of the latter.

With ontologies represented as RDF (Resource Description Framework) graphs, the first step would be to sort out truth-preserving syntax (applied independently of domains) from domain specific semantics.


RDF graphs (top) support formal (bottom left) and domain specific (bottom right) semantics.

On that basis it would be possible to separate representation syntax from contents semantics, and to design communication channels and interfaces accordingly.

That would greatly facilitate collaborations across externally defined ontologies as well as their mapping to enterprise architecture models.
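
As a minimal illustration with the rdflib Python library (prefixes and vocabularies hypothetical), the same truth-preserving operations, merging graphs and querying patterns, apply regardless of domain semantics:

```python
from rdflib import Graph

# Two fragments with different domain semantics (hypothetical vocabularies):
business = """
@prefix biz: <http://example.org/biz#> .
biz:rateCustomerCredit biz:supportedBy biz:creditScoringService .
"""
regulation = """
@prefix reg: <http://example.org/reg#> .
reg:creditScoring reg:constrainedBy reg:purposeLimitation .
"""

g = Graph()
g.parse(data=business, format="turtle")
g.parse(data=regulation, format="turtle")  # merging is truth-preserving

# A generic query processed at syntax level, regardless of domain semantics:
for s, p, o in g.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o }"):
    print(s, p, o)
```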

Conclusion

To summarize, the benefits of ontological frames for collaborative engineering can be articulated around four points:

  1. A clear-cut distinction between representation semantics and truth-preserving syntax.
  2. A common functional architecture for all user interfaces, human or otherwise.
  3. Modular functionalities for specific semantics on one hand, generic truth-preserving and cognitive operations on the other hand.
  4. Profiled ontologies according to concerns and contexts.

Clear-cut distinction (1), unified interfaces architecture (2), functional alignment (3), crossed profiles (4).

A critical fifth benefit could be added with regard to business intelligence: combined with deep learning capabilities, ontologies would extend the scope of collaboration to explicit as well as implicit knowledge, the former already framed by languages, the latter still open to interpretation and discovery.


Focus: Requirements Reuse

February 22, 2018

Preamble

Requirements are what feed engineering processes. As such they come in a wide range of forms, and nothing should be assumed upfront about their form or semantics.

What is to be reused: Sketches or Models? (John Devlin)

Answering the question of reuse therefore depends on what is to be reused, and for what purpose.

Documentation vs Reuse

Until some analysis can be carried out, requirements are best seen as documents; whether such documents are to be ephemeral or managed would be decided depending on method (agile or phased), contents (business, supporting systems, implementation, or quality of services), or purpose (e.g governance, regulations, etc).

What is to be reused.

Setting apart external conditions, requirements documentation could be justified by:

  • Traceability of decision-making linking initial requests with actual implementation.
  • Acceptance.
  • Maintenance of deliverables during their life-cycle.

Depending on development approaches, documentation could be limited to archives (agile development models) or managed as intermediate products (phased development models). In the latter case reuse would entail some formatting of requirements.

The Cases for Requirements Reuse

Assuming that requirements have been properly formatted, e.g as analysis models (with technical ones managed internally at system level), reuse could be justified by changes in business, functional, or quality of services requirements:

  • Business processes are meant to change with opportunities. With requirements available as analysis models, changes would be more easily managed (a) if they could be fine-grained. Business rules are a clear example, but that could also be the case for new features added to business objects.
  • Functional requirements may change even without change of business ones, e.g if new channels and users are introduced addressing existing business functions. In that case reusable business requirements (b) would dispense with a repeat of business analysis.
  • Finally, quality of service could be affected by operational changes like localization, number of users, volumes, or frequency. Adjusting architecture capabilities would be much easier with functional (c) and business (d) requirements properly documented as analysis models.

Cases for Reuse

Along that perspective, requirements reuse appears to revolve around two pivots, documents and analysis models. Ontologies could be used to bind them.

Requirements & Ontologies

Reusing artifacts means using them in contexts or for purposes different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on. In any case, reuse policies have to overcome a twofold difficulty:

  • Visibility: business and functional analysts must be made aware of potential reuse without having to spend too much time on research.
  • Overheads: ensuring transparency, traceability, and consistency checks on requirements (documents or analysis models) cannot be achieved without costs.

Ontologies could help to achieve greater visibility with acceptable overheads by framing requirements with regard to nature (documents or models) and context:

With regard to nature, the critical distinction is between document management and model-based engineering systems. When framed as ontologies, the former are to be implemented as thesauruses targeting terms and documents, the latter as ontologies targeting categories specific to organizations and business domains.

Documents, models, and capabilities should be managed separately

With regard to context the objective should be to manage reusable requirements depending on the kind of jurisdiction and stability of categories, e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Combining contexts of reuse with architectures layers (enterprise, systems, platforms) and capabilities (Who,What,How, Where, When).

Combined with artificial intelligence, ontology archetypes could crucially extend the benefits of requirements reuse, notably through the impact of deep learning for visibility.

From a broader perspective, requirements should be seen as a source of knowledge, and their reuse managed accordingly.


Healthcare: Tracks & Stakes

February 8, 2018

Preamble

Healthcare represents at least a tenth of developed countries’ GDP, with demography pushing to higher levels year after year. In principle technology could drive costs in both directions; in practice it has worked like a ratchet: upside, innovations are extending the scope of expensive treatments; downside, institutional and regulatory constraints have hamstrung the necessary mutations of organizations and processes.

Health Care Personal Assistant (Kerry James Marshall)

As a result, attempts to spread technology benefits across healthcare activities have dwindled or melted away, even when buttressed by major players like Google or Microsoft.

But built-up pressures on budgets combined with social transformations have undermined bureaucratic barriers and incumbents’ estates, springing up initiatives from all corners: pharmaceutical giants, technology startups, healthcare providers, insurers, and of course major IT companies.

Yet the wide range of players’ fields and starting lines may be misleading: incumbents and newcomers are well aware of what the race is about; whatever the number of initial track lanes, they are to fade away after a few laps, spurring the front-runners to cover the whole track, alone or through partnerships. As a consequence, winning strategies would have to be supported by a comprehensive and coherent understanding of all healthcare aspects and issues, which can best be achieved with ontologies.

Ontologies vs Models

Ontologies are symbolic constructs (epitomized by conceptual graphs made of nodes and connectors) whose purpose is to make sense of a domain of discourse:

  1. Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

That makes ontologies a special case of uncommitted models: like models they are set on contexts and concerns; but contrary to models ontologies’ concerns are detached from actual purposes. That is precisely what is expected from a healthcare conceptual framework.

Contexts & Business Domains

Healthcare issues are set across too many domains to be effectively fathomed, not to mention followed as they change. Notwithstanding, global players must anchor their strategies to different institutional contexts, and frame their policies so as to make them transparent and attractive to other players. Such all-inclusive frameworks could be built from ontologies profiled with regard to the governance and stability of contexts:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Ontologies set along that taxonomy of contexts could then be refined so as to target enterprise architecture layers: enterprise, systems, platforms, e.g:

A sample of Healthcare profiled ontologies

Depending on the scope and nature of partnerships, ontologies could be further detailed so as to encompass architecture capabilities: Who, What, How, Where, When.

Concerns & Architectures Capabilities

As pointed out above, a key success factor for major players would be their ability to federate initiatives and undertakings of both incumbents and newcomers, within or without partnerships. That can best be achieved with enterprise architectures aligned with an all-inclusive yet open framework, and for that purpose the Zachman taxonomy would be the option of choice. The corresponding enterprise architecture capabilities (Who, What, How, Where, When) could then be uniformly applied to contexts and concerns:

  • Internally across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).
  • Externally across context-based ontologies as proposed above.

The nexus between environments (contexts) and enterprises (concerns) ontologies could then be organised according to the epistemic nature of items: terms, documents, symbolic representations (aka surrogates), or business objects and phenomena.

Mapping knowledge to architectures capabilities

That would outline four basic ontological archetypes that may or may not be combined:

  • Thesaurus: ontologies covering terms, concepts.
  • Document Management: thesaurus and documents.
  • Organization and Business: ontologies pertaining to enterprise organization and business processes.
  • Engineering: ontologies pertaining to the symbolic representation (aka surrogates) of organizations, businesses, and systems.

Global healthcare players could then build federating frameworks by combining domain and architecture driven ontologies, e.g:

Building federating frameworks with modular ontologies designed on purpose.

As a concluding remark, it must be stressed that the objective is to federate the activities and systems of healthcare players without interfering with the design of their business processes or supporting systems. Hence the importance of the distinction between ontologies and models introduced above, which acts as a guarantee that concerns are not mixed up insofar as ontologies remain uncommitted models.


Ontologies as Productive Assets

January 22, 2018

Preamble

An often overlooked benefit of artificial intelligence has been a renewed interest in seminal philosophical and cognitive topics; ontologies coming top of the list.

Ontological Questioning (The Thinker Monkey, Breviary of Mary of Savoy)

Yet that interest has often been led astray by misguided perspectives, in particular:

  • Universality: one-fits-all approaches are pointless if not self-defeating considering that ontologies are meant to target specific domains of concerns.
  • Implementation: the focus is usually put on representation schemes (commonly known as Resource Description Frameworks, or RDFs), instead of the nature of targeted knowledge and the associated cognitive capabilities.

Those misconceptions, often combined, may explain the limited practical inroads of ontologies. Conversely, they also point to ontologies’ wherewithal for enterprises immersed into boundless and fluctuating knowledge-driven business environments.

Ontologies as Assets

Whatever the name of the matter (data, information, or knowledge), there isn’t much argument about its primacy for business competitiveness; insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable if not more so than financial ones, and should be managed accordingly. Pushing the comparison further, data would be likened to liquidity, information to fixed-income investment, and knowledge to capital ventures. To summarize, assets, whatever their nature, lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:

  • Digitized business flows accelerate data obsolescence and make it continuous.
  • Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.

But assessing the business value of knowledge has always been a matter of intuition rather than accounting, even when it can be patented; and most knowledge takes shape well beyond regulatory reach. Nonetheless, knowledge is not manna from heaven but the outcome of information processing, so assessing the capabilities of such processes could help.

Admittedly, traditional modeling methods are too stringent for that purpose, and looser schemes are needed to accommodate the open range of business contexts and concerns; as already expounded, that’s precisely what ontologies are meant to do, e.g:

  • Systems modeling,  with a focus on integration, e.g Zachman Framework.
  • Classifications, with a focus on range, e.g Dewey Decimal System.
  • Conceptual models, with a focus on understanding, e.g legislation.
  • Knowledge management, with a focus on reasoning, e.g semantic web.

And ontologies can do more than bringing under a single roof the whole of enterprise knowledge representations: they can also be used to nurture and crossbreed symbolic assets and develop innovative ones.

Ontologies Benefits

Knowledge is best understood as information put to use; accounting rules may be disputed but there is no argument about the benefits of a canny combination of information, circumstances, and purpose. Nonetheless, assessing knowledge returns is hampered by the lack of traceability: if a part of knowledge is explicit and subject to symbolic representation, another is implicit and manifests itself only through actual behaviors. At philosophical level it’s the line drawn by Wittgenstein: “The limits of my language mean the limits of my world”;  at technical level it’s AI’s two-lanes approach: symbolic rule-based engines vs non symbolic neural networks; at corporate level implicit knowledge is seen as some unaccounted for aspect of intangible assets when not simply blended into corporate culture. With knowledge becoming a primary success factor, a more reasoned approach of its processing is clearly needed.

To begin with, symbolic knowledge can be plied by logic, which, quoting Wittgenstein again, “takes care of itself; all we have to do is to look and see how it does it.” That would be true on two conditions:

  • Domains are to be well circumscribed. 
  • A water-tight partition must be secured between the logic of representations and the semantics of domains.

That could be achieved with modular and specific ontologies built on a clear distinction between common representation syntax and specific domains semantics.

As for non-symbolic knowledge, its processing has long been overshadowed by the preeminence of symbolic rule-based schemes, that is until neural networks got the edge and deep learning overturned the playground. In a few years’ time, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, enterprises would need to understand, assess, and improve the refining machinery. That could be done with ontological frames.

A Proof of Concept

Compared to tangible assets, knowledge may appear very elusive; yet, contrary to intangible ones, knowledge is best understood as the outcome of processes that can be properly designed, assessed, and improved. And that can be achieved with profiled ontologies.

As a Proof of Concept, an ontological kernel has been developed along two principles:

  • A clear-cut distinction between truth-preserving representation and domain specific semantics.
  • Profiled ontologies designed according to the nature of contents (concepts, documents, or artifacts), layers (environment, enterprise, systems, platforms), and contexts (institutional, professional, corporate, social).

That provides for a seamless integration of information processing, from data mining to knowledge management and decision making:

  • Data is first captured through aspects.
  • Categories are used to process data into information on one hand, design production systems on the other hand.
  • Concepts serve as bridges to knowledgeable information.


A beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).


Open Ontologies: From Silos to Architectures

January 1, 2018

To be of any use for enterprises, ontologies have to embrace a wide range of contexts and concerns, often ill-defined for environments, rather well expounded for systems.

Circumscribed Contexts & Crossed Concerns (Robert Goben)

And now that enterprises have to compete in open, digitized, and networked environments, business and systems ontologies have to be combined into modular knowledge architectures.

Ontologies & Contexts

If open-ended business contexts and concerns are to be taken into account, the first step should be to characterize ontologies with regard to their source, justification, and the stability of their categories (a sketch follows the list), e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accords.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
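
Such a taxonomy lends itself to a simple data structure; here is a minimal Python sketch with hypothetical names, where setting the stability of personal ontologies to “managed” is an assumption:

from dataclasses import dataclass

@dataclass(frozen=True)
class OntologySource:
    kind: str        # institutional, professional, corporate, social, personal
    authority: str   # who may change the categories
    stability: str   # steady, managed, or volatile

EXTERNAL_TAXONOMY = (
    OntologySource("institutional", "regulatory authority", "steady"),
    OntologySource("professional", "agreement between parties", "steady"),
    OntologySource("corporate", "enterprise decision-making", "managed"),
    OntologySource("social", "usage", "volatile"),
    OntologySource("personal", "named individuals", "managed"),  # assumption
)

def sources_with(stability):
    # List the kinds of ontologies sharing a given stability of categories.
    return [s.kind for s in EXTERNAL_TAXONOMY if s.stability == stability]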

Assuming such an external taxonomy, the next step would be to see what kind of internal (i.e enterprise architecture) ontologies it can be fitted into, as is the case for the Zachman framework.

Zachman’s taxonomy is built on well-established concepts (Who, What, How, Where, When) applied across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies). These layers can be generalized and applied uniformly across external contexts, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

That “divide and conquer” strategy is to serve two purposes:

  • By bridging the gap between internal and external taxonomies it significantly enhances the transparency of governance and decision-making.
  • By applying the same motif (Who, What, How, Where, When) across the semantics of contexts, it opens the door to a seamless integration of all kinds of knowledge: enterprise, professional, institutional, scientific, etc.

As can be illustrated using Zachman concepts, the benefits are straightforward at enterprise architecture level (e.g procurement), due to the clarity of supporting ontologies; not so for external ones, which are by nature open and overlapping and often come with blurred semantics.

Ontologies & Concerns

A broad survey of RDF-based ontologies demonstrates how semantic overlaps and folds can be sorted out using built-in differentiation between domains’ semantics on the one hand, and the structure and processing of symbolic representations on the other. But such schemes are proprietary, and evidence shows their lines seldom tally, with dire consequences for interoperability: even without taking into account relationships and integrity constraints, weaving together ontologies from different sources is bound to be cumbersome, the costs substantial, and the outcome often reduced to a muddy maze of ambiguous semantics.

The challenge would be to generalize the principles so as to set a basis for open ontologies.

Assuming that a clear line can be drawn between representation and contents semantics, with standard constructs (e.g predicate logic) used for the former, the objective would be to classify ontologies with regard to their purpose, independently of their representation.

The governance-driven taxonomy introduced above deals with contexts and consequently with coarse-grained modularity. It should be complemented by a fine-grained one driven by concerns, more precisely by the epistemic nature of the individual instances to be denoted. As it happens, that could also tally with Zachman’s taxonomy:

  • Thesaurus: ontologies covering terms and concepts.
  • Documents: ontologies covering documents with regard to topics.
  • Business: ontologies of relevant enterprise organization and business objects and activities.
  • Engineering: symbolic representation of organization and business objects and activities.

KM_OntosCapabs

Ontologies: Purposes & Targets

Enterprises could then pick and combine templates according to domains of concern and governance. Taking an online insurance business as an example, enterprise knowledge architecture would have to include (see the sketch after the list):

  • Medical thesaurus and consolidated regulations (Knowledge).
  • Principles and resources associated with the web platform (Engineering).
  • Description of products (e.g vehicles) and services (e.g insurance plans) from partners (Business).
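
A minimal Python sketch of that pick-and-combine policy, with hypothetical keys crossing the governance of contexts with the nature of concerns (the assignment of each item to a context is an assumption):

knowledge_architecture = {
    ("institutional", "thesaurus"): "medical terms and consolidated regulations",
    ("corporate", "engineering"): "principles and resources of the web platform",
    ("professional", "business"): "partners' products (vehicles) and services (insurance plans)",
}

def templates_for(context):
    # Select every template governed by a given context.
    return {concern: t for (ctx, concern), t in knowledge_architecture.items() if ctx == context}

print(templates_for("institutional"))  # -> {'thesaurus': 'medical terms and consolidated regulations'}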

Such designs of ontologies according to the governance of contexts and the nature of concerns would significantly reduce blanket overlaps and improve the modularity and transparency of ontologies.

On a broader perspective, that policy will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, whether strategic (e.g platform) or tactical (e.g partnerships).

Open Ontologies’ Benefits

Benefits from open and formatted ontologies built along an explicit distinction between the semantics of representation (aka ontology syntax) and the semantics of context can be directly identified for:

Modularity: the knowledge basis of enterprise architectures could be continuously tailored to changes in markets and corporate structures without impairing enterprise performance.

Integration: the design of ontologies with regard to the nature of targets and stability of categories could enable built-in alignment mechanisms between knowledge architectures and contexts.

Interoperability: limited overlaps and finer granularity are to greatly reduce friction when ontologies bearing out business processes are to be combined or extended.

Reliability: formatted ontologies can be compared to typed programming languages with regard to transparency, internal consistency, and external validity.

Last but not least, such reasoned design of ontologies may open new perspectives for the collaboration between cognitive humans and pretending ones.

Further Reading

External Links

2018: Clones vs Octopuses

December 4, 2017

In the footsteps of robots replacing workmen, deep learning bots look to boot out knowledge workers overwhelmed by muddy data.

Cloning Knowledge (Tadeusz Cantor, from “The Dead Class”)

Faced with that prospect, should humans try to learn deeper and faster than clones, or should they learn from octopuses and their smart hands?

Machine Learning & The Economics of Clones

As illustrated by scan-reading AI machines, the spreading of learning AI technology into every nook and cranny introduces something like an exponential multiplier: compared to the power-loom of the Industrial Revolution, which substituted machines for workers, deep learning is substituting replicators for machines; and contrary to power looms, there is no physical limitation on the number of smart clones that can be deployed. So, however fast and deep humans can learn, clones are much too prolific: it’s a no-win situation. To get out of that conundrum humans have to put their hands on a competitive edge, e.g some kind of knowledge that cannot be cloned.

Knowledge & Competition

Appraising humans’ learning sway over machines, one can take from Spinoza’s categories of knowledge with regard to sources:

  1. Senses (sights, sounds, smells, touches) or beliefs (as nurtured by the supposed common “sense”). Artificial sensors can compete with human ones, and smart machines are much better once prejudiced beliefs are put into the equation.
  2. Reasoning, i.e the mental processing of symbolic representations. As demonstrated by AlphaGo, machines are bound to rapidly extend their competitive edge.
  3. Philosophy, which is in essence meant to bring together perceptions, intuitions, and symbolic representations. That’s where human intelligence could beat its artificial cousin, which is clueless when purposes are needed.

That assessment is borne out by evolution: the absolute dominance established by humans over other animal species comes from their use of knowledge, which can be summarized as:

  1. Use of symbolic representations.
  2. Ability to formulate and exchange representations of contexts, concerns, and policies.
  3. Ability to agree on stakes and cooperate on policies.

On that basis, the third dimension, i.e the use of symbolic knowledge to cooperate on non-zero-sum endeavors, can be used to draw the demarcation line between human and artificial intelligence:

  • Paths and paces of pursuits are part and parcel of the knowledge itself. The fact that both are mostly obviated by search engines gives humans some edge.
  • Operational knowledge is best understood as information put to use, and must include concerns and decision-making. But smart bots’ ubiquity and capabilities often sap information traceability and decision transparency, which makes room for humans to prevail.

So humans can find a clear competitive edge in this knowledge dimension because it relies on a combination of experience and thinking and is therefore hard to clone. Organizations should make sure that’s where smart systems step back and humans take over.

Organization & Innovation

Innovation being at the root of competitive edge, understanding the role played by smart systems is a key success factor; that role is to be defined by organization.

As epitomized by Henry Ford, industrial-era thinking associated innovation with top-down management and the specialization of execution:

  • At execution level manual tasks were to be fragmented and specialized.
  • At management level analysis and decision-making were to be centralized and abstracted.

That organizational paradigm puts a double restraint on innovation:

  • On execution side the fragmentation of manual tasks prevents workers from effectively assessing and improving their performances.
  • On management side knowledge is kept in conceptual boxes and bereft of feedback from actual uses.

That railing between smart brains and dumb hands may have worked well enough for manufacturing processes limited to material flows and subject to circumscribed and predictable technological changes. It didn’t last.

First, as such hierarchies necessarily grow with process complexity, overheads and rigidity force repeated pruning. Then, flat hierarchies are of limited use when information flows are to be combined with material ones, so enterprises have to resort to matrix organization. Finally, with the seamless integration of digital and material flows, perpetuating the traditional line between management and execution is bound to hamstring innovation:

  • Smart tools may be able to perform a wide range of physical tasks without human supervision, but the core of innovation as well as its front lines are where humans and machines collaborate in processing a mix of material and information flows, both learning from the experience.
  • Hierarchies and centralized decision-making are cut off from their feeders when set in networked business environments colonized by smart bots on both sides of corporate boundaries.

Not surprisingly, these innovation trends seem to tally with the social dimension of knowledge.

Learning from the Octopus

The AI revolution has already broken all historical records of footprint (everything is affected) and speed (a matter of years). Given the length of human education cycles, appraising the consequences comes with some urgency, beginning with the disposal of two entrenched beliefs:

At individual level the new paradigm could be compared to the nervous system of octopuses: each arm gets its brain and neurons, and so its own touch of knowledge and taste of decision-making.

On a broader (i.e enterprise) perspective, knowledge should be supported by two organizational layers, one direct and innovation-driven between trusted co-workers, the other networked and knowledge-driven between remote workers, trusted or otherwise.

Further Reading

External Links