Things Speaking in Tongues

January 25, 2017

Preamble

Speaking in tongues (aka glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such experience is best (or not ?) understood as the actual speaking of a gutted language whose grammatical ghosts are inhabited by meaningless signals.

Do You Hear What I Say ? (Herbert List)

Usually set in religious contexts or circumstances, speaking in tongues looks like souls having their own private conversations. Yet, contrary to extraterrestrial languages, the phenomenon is not fictional, and could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has put a new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In return, computers have greatly improved the means to experiment with and implement new approaches.

In recent years advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation, the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Networks

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former into smart conceptual tutors for the latter's otherwise unfathomable schemes.

In contrast to their long and successful track record in computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has forged ahead on the back of neural networks fueled by increasing computing power. But the rift between processing and understanding natural languages is now being bridged by deep learning technologies. And with the leverage of rule engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.

From Communication to Cognition

From a functional point of view, natural languages can be likened to money: first as a medium of exchange, then as a unit of account, finally as a store of value. Along that understanding, natural languages would be used respectively for communication, information processing, and knowledge representation. And like the economics of money, these capabilities are to be associated with phased cognitive developments:

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts, to be stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.

As it happens, advances in technologies seem to follow these cognitive distinctions, with the internet of things (IoT) for data communications, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet paces differ significantly: with regard to language processing (communication and information), deep learning is bringing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performances still lag about a third below: for computer knowledge to be properly scaled, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted with a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what's the language ?
  • Is that speech, and if so, who's speaking ?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back-burner and focus on technological issues. Nonetheless, even the language-first approach has been challenging, as illustrated by the difference in achievements between processing and understanding language technologies.

Recognizing a language has long been the job of parsers looking for the corresponding syntax structures, the hitch being that a parser has to know beforehand what it's looking for. Parsers' parsers using meta-languages have been effective with programming languages, but are quite useless with natural ones short of some universal grammar rules to sort out Babel's conversations. But the "burden of proof" can now be reversed: contrary to rules engines, neural networks with deep learning capabilities don't have to start with any knowledge. As illustrated by Google's Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations, without prior specific grammatical knowledge.
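
To make the reversed "burden of proof" concrete, here is a toy sketch (in Python, with made-up samples) of rule-free language recognition: profiles are learned from raw character statistics, with no grammar declared beforehand. It is a world away from the scale of Google's system, but the principle is the same.

from collections import Counter

def trigram_profile(text, k=50):
    # Learn a profile from raw character statistics; no grammar involved.
    text = " " + text.lower() + " "
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    return {g for g, _ in grams.most_common(k)}

def closeness(a, b):
    # Jaccard overlap between two trigram profiles.
    return len(a & b) / len(a | b)

# Made-up training samples standing in for large samples of conversations.
samples = {
    "english": "the cat sat on the mat and the dog barked at the moon",
    "french": "le chat dort sur le tapis et le chien aboie sous la lune",
}
profiles = {lang: trigram_profile(text) for lang, text in samples.items()}

def identify(utterance):
    # Pick the closest learned profile.
    target = trigram_profile(utterance)
    return max(profiles, key=lambda lang: closeness(profiles[lang], target))

print(identify("the moon and the cat"))  # english
print(identify("le chien et la lune"))   # french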

To conclude, the label "Translation System" may even be selling such systems short, as it implies language-to-language mappings when in principle they could be fed with raw sounds and still sort the wheat of meaning from the chaff of noise. And, who knows, eventually be able to decrypt the language of tongues.

Further Reading

External Links

NIEM & Information Exchanges

January 24, 2017

Preamble

The objective of the National Information Exchange Model (NIEM) is to provide a “dictionary of agreed-upon terms, definitions, relationships, and formats that are independent of how information is stored in individual systems.”

NIEM’s model makes no difference between data and information (Alfred Jensen)

For that purpose NIEM's model combines commonly agreed core elements with community-specific ones. Weighed against the benefits of simplicity, this architecture overlooks critical distinctions:

  • Inputs: Data vs Information
  • Dictionary: Lexicon and Thesaurus
  • Meanings: Lexical Items and Semantics
  • Usage: Roots and Aspects

That shallow understanding of information significantly hinders the exchange of information between business or institutional entities across overlapping domains.

Inputs: Data vs Information

Data is made of unprocessed observations, information makes sense of data, and knowledge makes use of information. Given that NIEM is meant to be an exchange between business or institutional users, it should have no concern with data mining or knowledge management.

The problem is that, as conveyed by “core of data elements that are commonly understood and defined across domains, such as person, activity, document, location”, NIEM’s model makes no explicit distinction between data and information.

As a corollary, it implies that data may not only be meaningful, but universally so, which leads to a critical trap: as substantiated by data analytics, data is not supposed to mean anything before being processed into information; to keep with the examples, even if the definition of persons and locations may not be domain-specific, the semantics of the associated information is nonetheless set by domains, institutional, regulatory, contractual, or otherwise.

Data is meaningless; the meaning of information is set by semantic domains.

Not surprisingly, that medley of data and information is mirrored by NIEM’s dictionary.

Dictionary: Lexicon & Thesaurus

As far as languages are concerned, words (e.g "word", "ξ∏¥", "01100") remain data items until associated with some meaning. For that reason dictionaries are built on different levels, first among them lexical and semantic ones:

  • Lexicons take items at their word and give each of them a self-contained meaning.
  • Thesauruses position meanings within overlapping galaxies of understandings held together by the semantic equivalent of gravitational forces; the meaning of words can then be weighted by the combined semantic gravity of neighbors.

In line with its shallow understanding of information, NIEM's dictionary only caters for a lexicon of core standalone items associated with type descriptions to be directly implemented by information systems. But due to the absence of a thesaurus, the dictionary cannot tackle the semantics of overlapping domains: if lexicons alone can deal with one-to-one mappings of items to meanings (a), thesauruses are necessary for shared (b) or alternative (c) mappings.

Shared or alternative meanings cannot be managed with lexicons
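
The distinction can be rendered with two bare data structures; a hypothetical Python sketch, not NIEM artifacts:

# Lexicon: one-to-one mapping of items to self-contained meanings (a).
lexicon = {
    "PersonBirthDate": "date a person was born",
    "Location": "a geographic place",
}

# Thesaurus: meanings positioned within a weighted graph of neighbors,
# so that a word's sense can be weighed by its semantic "gravity".
thesaurus = {
    "qualification": {"diploma": 0.8, "course": 0.5, "skill": 0.7},
    "diploma": {"qualification": 0.8, "university": 0.6},
}

def semantic_weight(term, neighbor):
    # How strongly a neighbor pulls on a term's meaning.
    return thesaurus.get(term, {}).get(neighbor, 0.0)

print(lexicon["Location"])                          # self-contained meaning
print(semantic_weight("qualification", "diploma"))  # 0.8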

With regard to shared mappings (b), distinct lexical items (e.g qualification) have to be mapped to the same entity (e.g person). Whereas some shared features (e.g person's birth date) can be unequivocally understood across domains, most are set through shared (professional qualification), institutional (university diploma), or specific (enterprise course) domains.

Conversely, alternative mappings (c) arise when the same lexical items (e.g “mole”) can be interpreted differently depending on context (e.g plastic surgeon, farmer, or secret service).
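
Alternative mappings call for context-dependent resolution, something a lexicon's one-to-one scheme cannot express; a minimal sketch, with illustrative meanings:

# One lexical item, several domain-specific meanings (mapping c).
meanings = {
    ("mole", "surgery"): "a pigmented spot on the skin",
    ("mole", "farming"): "a small burrowing mammal",
    ("mole", "secret service"): "an undercover agent",
}

def interpret(item, domain):
    # Resolve a lexical item against the semantics of a domain.
    try:
        return meanings[(item, domain)]
    except KeyError:
        raise ValueError(f"no meaning for {item!r} in domain {domain!r}")

print(interpret("mole", "farming"))         # a small burrowing mammal
print(interpret("mole", "secret service"))  # an undercover agent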

Whereas lexicons may be sufficient for the use of lexical items across domains (namespaces in NIEM parlance), thesauruses are necessary if meanings (as opposed to uses) are to be set across domains. But thesauruses, being just tools, are not sufficient by themselves to deal with overlapping semantics. That can only be achieved through a conceptual distinction between lexical and semantic envelopes.

Meanings: Lexical Items & Semantics

NIEM's dictionary organizes names depending on namespaces and relationships:

  • Namespaces: core (e.g Person) or specific (e.g Subject/Justice).
  • Relationships: types (Counselor/Person) or properties (e.g PersonBirthDate).

NIEM’s Lexicon: Core (a) and specific (b) and associated core (c) and specific (d) properties

But since lexicons know only names, the organization is not orthogonal, with lexical items mapped indifferently to types and properties. The result is that, deprived of reasoned guidelines, lexical items are chartered arbitrarily, e.g:

Based on core PersonType, the Justice namespace uses three different schemes to define similar lexical items:

  • “Counselor” is described with core PersonType.
  • “Subject” and “Suspect” are both described with specific SubjectType, itself a sub-type of PersonType.
  • “Arrestee” is described with specific ArresteeType, itself a sub-type of SubjectType.

Based on core EntityType:

  • The Human Services namespace bypasses core’s namesake and introduces instead its own specific EmployerType.
  • The Biometrics namespace bypasses possibly overlapping core Measurer and BinaryCaptured and directly uses core EntityType.

Lexical items are chartered arbitrarily
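
The sub-typing chain described above can be rendered as a bare class hierarchy (a sketch for illustration; the actual NIEM schemas are XML-based):

class PersonType:                 # core namespace
    pass

class SubjectType(PersonType):    # Justice namespace
    pass

class ArresteeType(SubjectType):  # Justice namespace
    pass

# Three different schemes for similar lexical items:
counselor = PersonType()    # described directly with core PersonType
subject = SubjectType()     # described with a specific sub-type
arrestee = ArresteeType()   # described with a sub-type of a sub-type

# All are persons, but nothing in the lexicon says why the schemes differ.
print(isinstance(arrestee, PersonType))  # True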

Lest expanding lexical items clutter up dictionary semantics, some rules have to be introduced; yet, as noted above, these rules should be limited to information exchange and stop short of knowledge management.

Usage: Roots and Aspects

As far as information exchange is concerned, dictionaries have to deal with lexical and semantic meanings without encroaching on ontologies or knowledge representation. In practice that can be best achieved with dictionaries organized around roots and aspects:

  • Roots and structures (regular, black triangles) are used to anchor information units to business environments, source or destination.
  • Aspects (italics, white triangles) are used to describe how information units are understood and used within business environments.

Information exchanges are best supported by dictionaries organized around roots and aspects

As it happens that distinction can be neatly mapped to core concepts of software engineering.
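
What such an organization could look like can be sketched with a single information unit (a hypothetical Python rendering; names and domains are illustrative):

# Root: anchors the information unit to business environments.
# Aspects: describe how the unit is understood and used within them.
person = {
    "root": {"id": "P123", "anchor": "Person"},  # identity, kept stable
    "aspects": {
        "justice": {"role": "witness"},          # usage in one domain
        "services": {"employer": "ACME"},        # usage in another
    },
}

def aspect(unit, domain):
    # Read a unit's domain-specific understanding without touching its root.
    return unit["aspects"].get(domain, {})

print(person["root"]["id"])       # stable across exchanges
print(aspect(person, "justice"))  # {'role': 'witness'}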

Further Reading

External Links

Focus: Analysis vs Design

January 4, 2017

Preamble

Definitions should never turn into wars of words, as they should only be judged on their purpose and utility, with such assessment best achieved by comparing and adjusting the meanings of neighboring concepts with regard to the tasks at hand.

Analysis & Design as Duet (Giorgio de Chirico)

That approach can be applied to the terms “analysis” and “design” as used in systems engineering.

What: Logic & Engineering

Whatever the idiosyncrasies and fuzziness of business concerns and contexts, at the end of the day business and functional requirements of supporting systems will have to be coerced into the uncompromising logic of computers. Assuming that analysis and design are set along that path, they could be characterized accordingly.

As a matter of fact, one all too often ignored, a formal basis can be used to distinguish between analysis and design models, the former for the consolidation of requirements across business domains and enterprise organization, the latter for systems and software designs:

  • Business analysis models are descriptive (aka extensional); they try to put actual objects, events, and processes into categories.
  • System engineering models are prescriptive (aka intensional); they define what is expected of systems components and how to develop them.

Squaring Logic with Engineering

As a confirmation of its validity, that classification along the logic basis of models can be neatly crossed with engineering concerns:

  • Applications: engineering deals with the realization of business needs expressed as use cases or users’ stories. Engineering units are self-contained with specific life-spans, and may consequently be developed on a continuous basis.
  • Architectures: engineering deals with supporting assets at enterprise level. Engineering units are associated with shared functionalities without specific life-spans, with their development subject to some phasing constraints.

That taxonomy can be used to square the understanding of analysis, designs, and architectures.

Where: Business unit or Corporate

Reversing the perspective from content to context, the formal basis of analysis and design can also be crossed with their organizational framework:

  • Analysis is to be carried out locally within business units.
  • Designs are to be set both locally for applications, and at enterprise level for architectures.

Organizational dependencies will determine the roles, responsibilities, and time-frames associated with analysis and design.

Who: Analysts, Architects, Engineers

Contents and contexts are to determine the skills and responsibilities of stakeholders, architects, analysts, and engineers. On that account:

  • Analysis should be the shared responsibility of business and system analysts.
  • Designs would be solely under the authority of architects and engineers.

The possibility for agents to collaborate and share responsibility will determine the time-frames of analysis and design.

When: Continuous or Discrete

As far as project management is concerned, time is the crux of the matter: paraphrasing Einstein, the only reason for processes [time] is so that everything doesn’t happen at once. Hence the importance of characterizing analysis and design according to the nature of their time-scale:

  • At application level analysis and design can be carried out iteratively along a continuous time-scale.
  • At enterprise level the analysis of business objectives and the design of architectures will require milestones set along discrete time-scales.

The combination of organizational and timing constraints will determine analysis and design modus operandi.

How: Agile or Phased

Finally, the distinction between analysis and design will depend on the software engineering MO, as epitomized by the agile vs phased debate:

  • The agile development model combines analysis, design, and development into a single activity carried out iteratively. It is arguably the option of choice, provided the two conditions of shared ownership and continuous delivery can be met.
  • Phased development models may rely on different arrangements but most will include a distinction between requirements analysis and software design.

That makes for an obvious conclusion: whether analysis and design are phased or carried out collaboratively, understanding their purpose and nature is a key success factor for systems and software engineering.

PS: Darwin vs Turing

As pointed out by Daniel Dennett ("From Bacteria to Bach and Back"), the meaning of analysis and design can be neatly rooted in the theoretical foundations of evolution and computer science.

Darwin built his model of Natural Selection bottom-up from the analysis of actual living beings. Roundly refuting the hypothesis of some "intelligent designer", Darwin's work epitomizes how ontologies built from observations (aka analysis models) can account for the origin, structure, and behaviors of individuals.

Conversely, Turing’s thesis of computation is built top-down from established formal principles to software artifacts. In that case prior logical ontologies are applied to design models meant to realize intended capabilities.

Further Reading

New Year: 2016 is the One to Learn

December 15, 2016

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.

Deep Mind Learning (J. Bosch)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non-symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) than with machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

Google AlphaGo's astonishing performances have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo's maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo's decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.

From consumer behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.

iStar modeling puts the focus on communication (N. Rockwell)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to break the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting apart frictions between specific methods, the benefits of agile principles and practices are now well recognized, contingent on the limits of agile's scope. Summarily, agile development is at its best when requirements capture and analysis can be woven into development and tests. The question remains of what happens when requirements are to be dealt with separately.

The iStar answer shares with agile a focus on collaboration, and doesn't take sides between business (e.g users' stories) and systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors' goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges: e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks (e.g appraise damages), qualifications (soft goals, e.g fair appraisal), and resources (e.g premium payment).

Actors & Dependencies
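
The car-repair interaction can be rendered as a minimal sketch (Python dataclasses; the assignment of dependers and dependees is illustrative, not taken from an iStar reference model):

from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    primary: bool = False

@dataclass
class Dependency:
    depender: Actor  # who needs something
    dependee: Actor  # who is relied upon
    kind: str        # "goal", "softgoal", "task", or "resource"
    element: str     # what is at stake

owner = Actor("car owner", primary=True)
body_shop = Actor("body shop")
appraiser = Actor("appraiser")
insurer = Actor("insurance company")

dependencies = [
    Dependency(owner, body_shop, "goal", "car repaired"),
    Dependency(body_shop, appraiser, "task", "appraise damages"),
    Dependency(owner, appraiser, "softgoal", "fair appraisal"),
    Dependency(insurer, owner, "resource", "premium payment"),
]

for d in dependencies:
    print(f"{d.depender.name} -[{d.kind}]-> {d.dependee.name}: {d.element}")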

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope:

  • Goal (strong) dependency: assertions about actual states of affairs: objects, activities, or expectations.
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, with strong dependencies associated with deontic rules and soft dependencies with alethic ones.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML use cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.

iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly for the definition of primary and secondary roles (aka actors), intents (business) and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if the logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor the nature of capabilities, e.g strategic and operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared intent on conceptual modeling; as a corollary these flaws mark also the limits of conceptual modeling when it is detached from the symbolic description of supporting systems functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links

Business Agility vs Systems Entropy

November 28, 2016

Synopsis

As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.

Agility & Orientation (Lazlo Moholy-Nagy)

Of particular interest for today's business operational decision-making is the orientation step, i.e the actual positioning of actors and the associated cognitive representations; the point being to use AI deep learning capabilities to surmise opponents' plans and misdirect their anticipations. That new dimension, with its focus on information, brings back cybernetics as a tool for enterprise governance.

In the Loop: OODA & Information Processing

Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.

OODA loop and its actual (red) and symbolic (blue) contexts.

One step further, contexts may be readily matched with systems description:

  • Business contexts (territories) for observations.
  • Models of business objects (maps) for orientation.
  • Business logic (objectives) for decisions.
  • Business processes (supporting systems) for action.

The OODA loop and System Perspectives

That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering.
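
A bare-bones rendering of one pass through the loop, with each step reduced to a function over plain data (a sketch under simplistic assumptions, not a decision engine):

def observe(territory):
    # Business context (territory) -> raw observations (data).
    return {"demand": territory["orders"] - territory["capacity"]}

def orient(maps, observation):
    # Observations -> updated symbolic maps (orientation).
    maps = dict(maps)
    maps["pressure"] = "high" if observation["demand"] > 0 else "low"
    return maps

def decide(objectives, maps):
    # Symbolic maps + business logic (objectives) -> decision.
    return "expand" if maps["pressure"] == "high" and objectives["grow"] else "hold"

def act(decision):
    # Decision -> action carried out by supporting processes.
    return f"process: {decision}"

territory = {"orders": 120, "capacity": 100}
maps, objectives = {}, {"grow": True}
print(act(decide(objectives, orient(maps, observe(territory)))))  # process: expand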

Architectures & Business Agility

Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.

Architecture consistency (left) is supposed to be achieved by systems engineering outside the OODA loop:

  • Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
  • Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.

Functional architecture (right) is to bridge the gap between technical and software architectures and provide for operational coupling.

Business Agility: systems architectures and business operations

Operational coupling depends on functional architecture and is carried out within the OODA loop. The challenge is to change tack on the fly with minimum friction between actual and symbolic contexts, i.e:

  • Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
  • Departures between business logic (objectives and decisions) and business processes (systems and actions).

When positive, operational coupling associates business agility with its architecture counterpart, namely plasticity and versatility; when negative, it suffers from frictions, or what cybernetics calls entropy.

Systems & Entropy

Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.
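
Shannon's formula gives the narrow probabilistic core of that notion: entropy as the residual uncertainty of a distribution, in bits. Cybernetics' usage here is broader, but the direction is the same; a quick Python rendering:

from math import log2

def entropy(probabilities):
    # Shannon entropy: expected information (in bits) of a distribution.
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A sharp picture of the environment vs a blurred one:
print(entropy([0.9, 0.05, 0.05]))  # ~0.57 bits: little residual uncertainty
print(entropy([0.4, 0.3, 0.3]))    # ~1.57 bits: much more disorder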

With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).

So long as architectures and operations were set along different time-frames (e.g strategic and tactical), cybernetics was of limited relevancy. But the seamless integration of data analytics, operational decision-making, and IT supporting systems puts a new light on the role of entropy, as illustrated by Boyd's OODA loop and its orientation component.

Orientation & Agility

While much has been written about how data analytics and operational decision-making can be neatly and easily fitted into the OODA paradigm, particular attention is to be paid to orientation.

As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: the positioning of an agent with regard to external (e.g spatial) coordinates, possibly qualified with the agent's abilities to observe, move, or act.
  • Symbolic: the positioning of an agent with regard to his own internal (e.g beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.

That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.

Agility vs Entropy

Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on enterprise ability to export entropy, that is to decrease confusion and disorder inside, and increase it outside.

Working on that assumption, one should first characterize the flows of information to be considered:

  • Territories and observations: identification of business objects and events, collection and analysis of associated data.
  • Maps and orientations: structured and consistent description of business domains.
  • Objectives and decisions: structured and consistent description of business activities and rules.
  • Systems and actions: business processes and capabilities of supporting systems.

Static assessment of technical and software architectures for respectively observation and decision

Then, a static assessment of information flows would start with the standing of technical and software architecture with regard to competition:

  • Technical architecture: how the alignment of operations and resources facilitates actions and observations.
  • Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.

A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:

  • How the mapping of territories’ identities and features help observation and orientation.
  • How decision-making and the realization of business objectives are supported by processes’ designs.

Dynamic assessment of decision-making and the realization of business objectives as supported by processes' designs.

Assuming a corporate cousin of Maxwell's demon with deep learning capabilities standing at the gates of its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e enterprise representations) and increase the external one (i.e competitors' representations).

Further Readings

Business Agility & the OODA Loop

November 21, 2016

Preamble

The OODA (Observation, Orientation, Decision, Action) loop is a real-time decision-making paradigm developed in the sixties by Colonel John Boyd from his experience as a fighter pilot and military strategist.

How to get inside opponent’s loop (Lazlo Moholy-Nagy)

The relevancy of OODA for today’s operational decision-making comes from the seamless integration of IT systems with business operations and the resulting merits of agile development processes.

Business: End of Discrete Time-Frames

Business governance used to be phased: analyze the market, select opportunities, build capabilities, launch operations. No more. With the melting of the fences between actual and symbolic realms, periodic transitional events have lost most of their relevancy. Deprived of discrete and robust time-frames, the weaving of observed facts with business plans has to be managed on the fly. Success now comes from continuous readiness, quicker tempo, and the ability to operate inside adversaries' time-scales, for defense (forcing competitors out of favorable positions) as well as offense (getting a competitive edge). Hence the reference to dogfights.

Dogfights & Agile Primacy

John Boyd's train of thought started with the observation that, despite the apparent superiority of the Soviet MiG-15 over the US F-86 during the Korean War, US fighters stood their ground. From that factual observation it took Boyd's comprehensive engineering work to demonstrate that, as far as dogfights were concerned, fast transients between maneuvers (aka agility) were more important than technical capabilities. Pushed up the Pentagon's reluctant ladders by Boyd's sturdy determination, that conclusion has had wide-ranging consequences in the design of USAF fighters and the training of their pilots for the following generations. Its influence also spread to management, even if theories' turnover is much faster there, and their shelf-life much shorter.

Nowadays, with the accelerated integration of business processes with IT systems, agility is making a comeback from the software engineering corner. Reflecting business and IT convergence, principles like iterative development, just-in-time delivery, and lean processes, all epitomized by the agile software development model, are progressively mingling into business practices with strong resemblances to dogfights; and the resemblances are not only symbolic.

IT Systems & Business Competition

While some similarities between dogfights and business competition may seem metaphorical, one critical aspect is all too real, namely the increasing importance of supporting machines, IT systems or fighter jets.

Basically, IT systems, like fighters’ electronics, are tasked to observe environments, analyse changes in relation to position and objectives, and support decision-making. But today’s systems go further with two qualitative leaps:

  • The seamless integration of physical and symbolic flows lets systems manage some overlapping between supporting decisions and carrying out actions.
  • Due to their artificial intelligence capabilities, systems can learn on-the-job and improve their performances in real-time feedback loops.

When combined, these two trends have drastic impact on the way machines can support human activities in real-time competitive situations. More to the point, they bring new light on business agility.

Business Agility

As illustrated by the radical transformation of fighter cockpits, the merging of analog and digital flows leaves little room for human mediation: data must be processed into information and presented instantly along two critical dimensions, one for decision-making, the other for information life-cycle:

  • Man/machine interfaces have to materialize the merging of actual and symbolic realms so as to support just-in-time decision-making.
  • The replacement of phased selected updates of environment data by continuous changes in raw and massive data means that the status of information has to be incorporated with the information itself, yet without impairing decision-making.

Beyond obvious differences between dogfights and business competition, that double exigency is what characterizes business agility:

  1. Instant understanding of changes in business opportunities (Observation).
  2. Simultaneous assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations (Orientation).
  3. Weighting of options with regard to enterprise capabilities and broader objectives (Decision).
  4. Carrying out of decisions within the relevant time-span (Action).

That understanding of business agility is to be compared with its development and architecture cousins. Yet it doesn’t seem to add much to data analytics and operational decision-making. That is until the concept of orientation is reassessed.

Agility & Orientation: Task vs Tack

To begin with basics, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: a position with regard to external (e.g spatial) coordinates, possibly qualified with abilities to observe, move, or act.
  • Symbolic: a position with regard to internal (e.g beliefs or aims) references, possibly mixed with known or presumed orientation of other agents, opponents or associates.

When business is considered, data analytics is supposed to deal comprehensively and accurately with markets’ actual orientations. But the symbolic facet is left largely unexplored.

Boyd’s contribution is to bring together both aspects and combine them into actual practice, namely how to foretell the tack of your opponents from their actual tracks as well as their surmised plans, while fooling them about your own moves, actual or planned.

Such ambitions, once out of reach, can now be fulfilled through the combination of big data, artificial intelligence, and the exponential growth of computing power.

Further Readings

 

Focus: MDA & UML

November 9, 2016

Preamble

UML (Unified Modeling Language) and MDA (Model Driven Architecture) epitomize the lack of focus and consistency of the OMG's strategy; yet, as it's safe to assume that there can be no architectures without models, MDA and UML arguably bring sensible (if not perfect) schemes without significant competition.

Unified language for Business and System Modeling (Marcel Broodthaers)

 

Unfortunately, not much has been done to play on their obvious complementarity and to exploit their synergies.

MDA & the Nature of Models

Model driven architecture (MDA) can be seen as the main (only ?) documented example of model-based systems engineering. Its taxonomy organizes models within three layers:

  • Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
  • Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
  • Platform specific models (PSMs) describe systems components depending on implementation platforms.

Engineering can then be managed along architecture layers (a), or carried out as a whole for each application (b).

mapsterrits_landingschar

Managing changes at architecture (a) or application (b) level.

It’s important to note that the MDA framework is completely neutral with regard to methods: engineering processes can be organized as phased activities (procedural), iterations (agile), or artifacts transformation (declarative).
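
A hypothetical sketch of the taxonomy put to use, tagging artifacts by layer and tracing the downward propagation of changes (names are illustrative):

# Tagging artifacts by MDA layer; the layer is about the nature of
# contents, not the engineering process used to produce them.
artifacts = [
    {"name": "claim handling process", "layer": "CIM"},    # business, no system role
    {"name": "claim validation service", "layer": "PIM"},  # functionality, no platform
    {"name": "ClaimValidator (Java/EJB)", "layer": "PSM"},  # platform-specific
]

def impact_of_change(layer):
    # Changes propagate downward: CIM -> PIM -> PSM.
    order = ["CIM", "PIM", "PSM"]
    return order[order.index(layer):]

print(impact_of_change("PIM"))  # ['PIM', 'PSM']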

Logic & The Matter of Models

Whatever the idiosyncrasies and fuzziness of business concerns and contexts, at the end of the day requirements will have to be coerced into the strict logic of computer systems. That may be a challenging task to be carried out directly, but less so if upheld by models.

As it happens, a fact all too often ignored, models come with sound logical foundations that can be used to formalize the mapping of requirements into specifications; schematically, models are to be set in two formal categories:

  • Descriptive (aka extensional) ones try to classify actual objects, events, and processes into categories.
  • Prescriptive (aka intensional) ones specify what is expected of systems components and how to develop them.

The logical basis of models

Interestingly, that distinction provides a formal justification to the one between analysis and design models, the former for the consolidation of requirements across business domains and enterprise organization, the latter for systems and software designs. Such logical foundations could help to manage the mapping of business processes and systems architectures.

UML & the Anatomy of Models

Except for scientific computation, there is no reason to assume a priori congruence between the description of business objects and processes and the specification of software components. As a corollary, their respective structures and features are best dealt with separately.

But that's not the case at architecture level, where domains and identities have to be managed continuously and consistently on the two sides of the business/system divide. At this level (aka enterprise architecture), responsibilities as well as identification and communication mechanisms must be defined uniformly.

Compared to MDA set at architecture level, UML describes the corresponding artifacts for business, systems, and platform layers. Regardless of the confusing terminology (layers or levels), that puts MDA and UML along orthogonal dimensions: the former (layers) deals with the nature of contents, the latter (levels) with their structures and features.

MDA is only concerned with architectures; UML describes the structure of architecture components.

Using the same unified modeling language across business, systems, and platform layers is to clearly and directly enhance transparency and traceability; but the full extent of MDA/UML cross-benefits is to appear when models' logic is taken into account.

Models & Systems Evolution

As illustrated by the increasing number of systemic crashes, systems obsolescence is no longer a matter of long-term planning but of operational continuity: change has become the rule and as far as complex and perennial systems are concerned, architectures are to evolve while supporting their functional duties seamlessly. If that is to be achieved, modularity and a degree of consistency are necessary between the nature of changes and their engineering. That’s where MDA is to help.

As pointed to above, modularity is best achieved with regard to levels (architecture, element) and models' contents (business, systems, platforms).

At architecture level, changes in domains, identification, and categories must be aligned between descriptive (enterprise) and prescriptive (systems) models. That will be best achieved with UML models across all MDA layers.

Using UML and MDA helps to align descriptive and prescriptive models at architecture level.

The constraints of continuity and consistency can be somewhat eased at element level: if descriptive (business) and prescriptive (systems) models of structures and features are to be consistent, they are not necessarily congruent. On the component (prescriptive/design) side, UML and object-oriented design (OOD) are to keep them encapsulated. As for the business (descriptive/analysis) side, since structures and features can be modeled separately (and OOD is not necessarily the best option), any language (UML, BPMN, DSL, etc.) can be used. In between, the structure (aka signature) of messages passed at architecture level is to be specified depending on the communication framework.

Considering the new challenges brought about by the comprehensive interoperability of heterogeneous systems, the OMG should reassess the full range of latent possibilities to be found in its engineering portfolio.

Further Reading

Things Behavior & Social Responsibility

October 27, 2016

Contrary to security breaches and information robberies that can be kept from public eyes, crashes of business applications or internet access are painfully plain for whoever is concerned, which means everybody. And as illustrated by the latest episode of massive distributed denial of service (DDoS), they often come as confirmations of hazards long calling for attention.

Device & Social Identity (Wayne Miller)

Things Don’t Think

To be clear, orchestrated attacks through hijacked (if unaware) computers have been a primary concern for internet security firms for quite some time, bringing about comprehensive and continuous reinforcement of software shields consolidated by systematic updates.

But while the right governing hand was struggling to make a safer net, the other hand thoughtlessly brought in connected objects to a supposedly new brand of internet. As if adding things with software brains cut to the bone could have made networks smarter.

And that’s the catch because the internet of things (IoT) is all about making room for dumb ancillary objects; unfortunately, idiots may have their use for literary puppeteers with canny agendas.

Think Again, or Not …

For old-timers with some memory of fingering through library cardboard, googling topics may have looked like dreams: knowledge at one’s fingertips, immediately and comprehensively. But that vision has never been more than a fleeting glimpse in a symbolic world; in actuality, even at its semantic best, the web was to remain a trove of information to be sifted by knowledge workers safely seated in their gated symbolic world. Crooks of course could sneak in as knowledge workers, armed with fountain pens, but without guns covered by the second amendment.

So, from its inception, the IoT has been a paradoxical endeavor: trying to merge actual and symbolic realms in a way that would bypass thinking processes and obliterate any distinction between the two. For sure, that conundrum was supposed to be dealt with by artificial intelligence (AI), with neural networks and deep learning weaving semantic threads between human minds and networks' brains.

Not surprisingly, brainy hackers have caught sight of that new wealth of chinks in internet armour and swiftly added brute force to their paraphernalia.

But in addition to the technical aspect of internet security, the recent Dyn DDoS attack puts the light on its social perspective.

Things Behavior & Social Responsibility

As far as it remained intrinsically symbolic, the internet has been able to carry on with its utopian principles despite bumpy business environments. But things have drastically changed the situation, with tectonic frictions between symbolic and real plates wreaking havoc with any kind of smooth transition to internet.X, whatever x may be.

Yet, as the diagnosis is clear, so should be the remedy.

To begin with, the internet was never meant to become the central nervous system of human societies. That it has happened in half a generation has defied imagination and, as a corollary, sapped the validity of traditional paradigms.

As things happen, the epicenter of the paradigms' collision can be clearly identified: whereas the internet is built from systems, architecture taxonomies are purely technical and ignore what should be the primary factor, namely what kind of social role a system could fulfil. That may have been irrelevant for communication networks, but is obviously critical for social ones.

Further Reading

External Links

Business Problems shouldn’t sleep with IT Solutions

October 8, 2016

Preamble

The often mentioned distinction between problem and solution levels may make sense from an analyst's particular point of view, whether business or system. But blending problems and solutions independently of their nature becomes a serious oversimplification for enterprise architects, considering that one of their prime responsibilities is to keep business problems apart from IT solutions.

Functional problem with technical solution (Mircea Cantor)

That issue is relevant from an engineering as well as a business perspective.

Engineering View: Problem Levels & Architecture Layers

As long as computers are used to solve problems the only concern is to find the best solution, and the only architecture of concern is software’s.

But enterprise architects have to deal with systems, not computers, namely how to best serve business objectives with corporate resources, across business units and along business cycles. For that purpose resources (financial, human, technical) and their use are to be layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

From an engineering perspective, the intended congruence between problems levels and architecture layers can be illustrated with the OMG’s model driven architecture (MDA) framework:

  • Computation independent models (CIMs) deal with business processes solutions, to be translated into functional problems for supporting systems.
  • Platform independent models (PIMs) deal with functional solutions, to be translated into technical problems for supporting platforms.
  • Platform specific models (PSMs) deal with technical solutions, to be implemented as code.

MDA layers can be mapped to a clear hierarchy of problems and solutions

Along that understanding, architectures can be seen as solutions, and the primary responsibility of enterprise architects is to see that problem/solution pairs remain in their respective swim-lanes.
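
Those swim-lanes can be sketched as a simple cascade in which each layer's solution states the next layer's problem (an illustrative rendering, not an MDA artifact):

# Each layer pairs a problem with a solution; the solution at one level
# sets the problem statement for the level below.
layers = [
    ("enterprise", "business opportunity", "business process (CIM)"),
    ("systems", "functional support of the process", "functional architecture (PIM)"),
    ("platforms", "implementation of the functions", "software components (PSM)"),
]

for level, problem, solution in layers:
    print(f"{level:10} problem: {problem:35} solution: {solution}")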

Business View: Business Value & Enterprise Assets

Whereas the engineering perspective may appear technical or specific to a model based approach, the same issue is all the more significant when expressed with regard to business concerns and corporate governance. In that case the critical distinction is between business value and assets:

  • Business value: Problems are set by business opportunities, and solutions by processes and applications. The critical factor is reactivity and time-to-market.
  • Assets: Problems are set by business objectives and strategy, and solutions are to be supported by organization and systems capabilities. The critical factor is reuse and ROI.

Decision-making must distinguish between business opportunities and enterprise governance

If opportunities are to be seized and operations managed on the fly, yet still tally with strategic decisions, the respective problems and solutions should be kept apart. Juggling with their dynamic alignment is at the core of enterprise architects' job description.

Enterprise Architects & Governance

Engineering and business perspectives are not to be seen as the terms of an alternative to be picked by enterprise architects. As a matter of fact they must be crossed and governance policies selected depending on the point of view:

  • Looking at EA from an engineering perspective, the business one will focus on systems governance and assets management, as epitomized by model-based systems engineering schemes.
  • Looking at EA from a business perspective, the engineering one will focus on lean and just-in-time solutions, as epitomized by agile development models.

Since the governance of large and complex corporate entities, supposedly EA's primary target, must deal with tactical, operational, and strategic concerns, the nexus between business and engineering perspectives is where enterprise architects are to stand.