Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (see Topics Guide) will try to set a path to Architecture Driven System Modeling. The guiding principle is to look at systems as sets of symbolic representations and to identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture-driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with its basic concepts open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for a concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where deep learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users’ behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements are satisfied as well as users’ expectations for service, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences in scope and nature, from bounded and defined specifications to boundless, fuzzy, and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter, together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by their dependency on explicit knowledge, and consequently by the need for some “manual” teaching. That hurdle may be overcome by deep learning’s ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
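
As a minimal sketch of that log-mining idea, assuming a hypothetical “LEVEL event” log format and made-up thresholds rather than any actual framework API, candidate test cases could be derived from the event sequences most often preceding failures:

    from collections import Counter

    def critical_sequences(log_lines, window=3, top=5):
        """Mine candidate test cases: the event sequences most often
        observed just before a failure entry in recorded logs."""
        events, found = [], Counter()
        for line in log_lines:
            level, _, event = line.partition(" ")  # hypothetical "LEVEL event" format
            if level == "ERROR":
                found[tuple(events[-window:])] += 1  # context preceding the failure
                events.clear()
            else:
                events.append(event)
        return found.most_common(top)

    log = ["INFO login", "INFO search", "INFO add_to_cart", "ERROR checkout",
           "INFO login", "INFO add_to_cart", "ERROR checkout"]
    print(critical_sequences(log))  # sequences worth scripting as test cases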

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, refined, and fed to the policy module in charge of defining test cases, as sketched below.
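
A bare-bones rendering of that feedback loop, with invented objectives and observations standing in for mined business data, could look like:

    def relevancy(observation, objectives):
        """Score an observation by its word overlap with business objectives."""
        terms = set(observation.split())
        return max(len(terms & set(o.split())) / len(set(o.split())) for o in objectives)

    def policy_module(observations, objectives, threshold=0.25):
        """Turn sufficiently relevant observations into candidate test scenarios."""
        return [{"scenario": o, "score": round(relevancy(o, objectives), 2)}
                for o in observations if relevancy(o, objectives) >= threshold]

    objectives = ["settle claims online", "reduce settlement delays"]
    mined = ["customers file claims online at night",     # relevant
             "competitor offers same-day settlement",     # marginal
             "unrelated weather chatter"]                 # noise, filtered out
    print(policy_module(mined, objectives))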

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best-designed guardrails. On the one hand, due to their very nature, such hazards are not easily forestalled by reasoned test cases; on the other hand, they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning, combining random simulation with automated reinforcement, is precisely what sets deep learning apart.
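
By way of illustration only, the toy epsilon-greedy loop below, with a simulated user model and made-up odds, shows how random simulation and automated reinforcement can combine to single out the quirks worth scripting:

    import random

    quirks = ["click_back", "double_submit", "paste_emoji", "resize_window"]
    scores = {q: 0.0 for q in quirks}      # learned anomaly yield per behavior

    def simulated_run(quirk):
        """Stand-in for executing one scripted interaction; 1 means anomaly found."""
        odds = {"double_submit": 0.3, "paste_emoji": 0.2}.get(quirk, 0.05)
        return 1 if random.random() < odds else 0

    random.seed(42)
    for _ in range(500):
        # random simulation (explore) vs automated reinforcement (exploit)
        quirk = random.choice(quirks) if random.random() < 0.1 else max(scores, key=scores.get)
        scores[quirk] += 0.1 * (simulated_run(quirk) - scores[quirk])

    print(max(scores, key=scores.get))     # the behavior most worth scripting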

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems’ functionalities (for shared applications), user interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, the door is open to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

 

Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn stays with UCs as defined by Jacobson but refines them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.
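
For illustration, the levels and their instantiation rules summarized above can be made machine-checkable with a few declarative definitions (a sketch with hypothetical names, not a modeling tool API):

    from dataclasses import dataclass

    LEVELS = {"cloud": False, "kite": False,   # white: abstract
              "sea": True,                     # blue: concrete interactions
              "fish": True, "clam": True}      # indigo: concrete operations

    @dataclass
    class UseCase:
        name: str
        level: str
        actors: tuple = ()

        def can_instantiate(self):
            """White-level UCs are abstract and cannot be instantiated."""
            return LEVELS[self.level]

    claim_summary = UseCase("Handle Claim", "kite")               # business rules level
    claim_dialog = UseCase("Handle Claim", "sea", ("Customer",))  # interaction level
    assert not claim_summary.can_instantiate() and claim_dialog.can_instantiate()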

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks, because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users’ concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant, because mapping business rules to applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels”, BUCs are to remain nondescript practices. As for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.

Broader enterprise and business considerations can then be documented depending on scope.

Further Reading

External Links

Alternative Facts & Augmented Reality

February 5, 2017

Preamble

Coming alongside the White House’s creative use of facts, the upcoming Snap IPO is to bring another perspective on reality, with its star product Snapchat integrating augmented reality (AR) with media.


Truth in the eye of the beholder (Juan Munoz)

Whatever the purpose, the “alternative facts” favored by the White House communication detail may bring to the fore two related issues of present-day relevancy: virtual and augmented reality on the one hand, the actuality of George Orwell’s Newspeak on the other.

Facts and Fiction

To begin with, facts are not given but observed, and that can only be achieved through a mix of conceptual and technical apparatus, the former to design fact-finding vessels, the latter to fill them with actual observations. Based on that understanding, alternatives are less about the facts themselves than about the apparatuses used to collect them, which may be trustworthy, faulty, or deceitful. Setting flaws aside, trust is also what distinguishes augmented and virtual reality:

  • Augmented reality (AR) technologies operate on apparatuses that combine observation and analysis before adding layers of information.
  • Virtual reality (VR) technologies simply overlook the whole issue of reality and observation, and are only concerned with the design of trompe-l’oeils.

The contrast between facts (AR) and fiction (VR) may account for the respective applications and commercial advances: whereas augmented reality is making rapid inroads in business applications, its virtual cousin is still testing the water in games. More significantly perhaps, the comparison points to a somewhat unexpected difference in the role of language: necessary for the establishment of facts, accessory for the creation of fictions.

Speaking of Alternative Facts

As illustrated (pun intended) by virtual reality, fiction can do without words, which is not the case for facts. As a matter of fact (intended again), even facts can be fictional, as epitomized by Orwell’s Newspeak, the language used by the totalitarian state in his 1949 novel Nineteen Eighty-Four. Figuratively speaking, that language may be likened to a linguistic counterpart of virtual reality, as its purpose is to bypass the issue of truthful discourse about reality by introducing narratives wholly detached from actual observations. And that’s when fiction catches up with reality: not much stretch of imagination is needed to recognize a similar scheme in the current White House’s comments.

Language Matters

As far as humans are concerned, reality comes with semantic and social dimensions that can only be carried out through language. In other words truth is all about the use of language with regard to purpose: communication, information, or knowledge. Taking Trump’s inauguration crowd for example:


Data come from observations, Information is Data put in form, Knowledge is Information put to use.

  • Communication: language is used to exchange observations associated with immediate circumstances (the place and the occasion).
  • Information: language is used to map observations to mental representations and operations (estimates for the size of the audience).
  • Knowledge: language is used to associate information to purposes through categories and concepts detached from the original circumstances (comparison of audiences for similar events and political conclusions).

Augmented Reality devices on that occasion could be used to tally people on viewed portions of the audience (fact), figure out estimates for the whole audience (information), or decide on the best itineraries back home (knowledge). By contrast, Virtual Reality (aka “alternative facts”) could only be used at communication level to deceive the public.
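
Sticking to that example, the three levels can be told apart in a few lines of code (all figures invented):

    # Data: raw tallies from viewed portions of the audience (hypothetical counts).
    tallies = {"north_stand": 1200, "south_stand": 950, "mall_west": 2100}

    # Information: data put in form - an estimate for the whole audience,
    # assuming the viewed portions cover a quarter of the grounds.
    estimate = sum(tallies.values()) / 0.25

    # Knowledge: information put to use - comparison with similar events,
    # detached from the original circumstances (figure also invented).
    previous_event = 1_800_000
    print(f"estimate: {estimate:,.0f}, ratio to previous event: {estimate / previous_event:.2%}")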

Further Reading

Things Speaking in Tongues

January 25, 2017

Preamble

Speaking in tongues (aka glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such an experience is best (or not?) understood as the actual speaking of a gutted language, with grammatical ghosts inhabited by meaningless signals.


Do You Hear What I Say? (Herbert List)

Usually set in religious contexts or circumstances, speaking in tongues looks like souls having their own private conversations. Yet, contrary to extraterrestrial languages, the phenomenon is not fictional, and could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception, computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has put a new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In due return, computers have greatly improved the means to experiment with and implement new approaches.

During recent years, advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation, the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Networks

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former into smart conceptual tutors for the latter’s unfathomable schemes.

In contrast to their long and successful track record with computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has forged ahead on the back of neural networks fueled by increasing computing power. But the rift between processing and understanding natural languages is now being bridged by deep learning technologies. And with the leverage of rules engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.

From Communication to Cognition

From a functional point of view, natural languages can be likened to money: first as a medium of exchange, then as a unit of account, finally as a store of value. Along that understanding natural languages would be used respectively for communication, information processing, and knowledge representation. And like the economics of money, these capabilities are to be associated with phased cognitive developments:

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts to be stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.

As it happens, advances in technologies seem to follow these cognitive distinctions, with the internet of things (IoT) for data communications, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet paces differ significantly: with regard to language processing (communication and information), deep learning is bringing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performances still lag a third below: for computers’ knowledge to be properly scaled, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted to a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what’s the language?
  • Is that a speech, and if so, who’s speaking?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back-burner and focus on technological issues. Nonetheless, even the language-first approach has been challenging, as illustrated by the difference in achievements between processing and understanding language technologies.

Recognizing a language has long been the job of parsers looking for the corresponding syntactic structures, the hitch being that a parser has to know beforehand what it’s looking for. Parsers of parsers using meta-languages have been effective with programming languages, but are quite useless with natural ones absent some universal grammar rules to sort out Babel’s conversations. But the “burden of proof” can now be reversed: compared to rules engines, neural networks with deep learning capabilities don’t have to start with any knowledge. As illustrated by Google’s Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations, without prior specific grammatical knowledge.
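
A toy version of that reversal, learning to tell languages apart from raw samples without any grammatical knowledge, fits in a few lines; actual systems apply neural networks to the same idea at a vastly larger scale:

    from collections import Counter

    def ngrams(text, n=3):
        text = f" {text.lower()} "
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    # Training samples stand in for "sufficiently large samples of conversations".
    samples = {"english": "the cat sleeps on the mat and the dog barks",
               "french": "le chat dort sur le tapis et le chien aboie"}
    profiles = {lang: ngrams(text) for lang, text in samples.items()}

    def identify(sentence):
        """Pick the language whose character n-gram profile overlaps most."""
        probe = ngrams(sentence)
        return max(profiles, key=lambda lang: sum((profiles[lang] & probe).values()))

    print(identify("the dog sleeps"))   # english
    print(identify("le chien dort"))    # french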

To conclude, “Translation System” may even be self-effacing, as it implies language-to-language mappings when in principle such systems could be fed with raw sounds and still parse the wheat of meanings from the chaff of noise. And, who knows, eventually be able to decrypt the language of tongues.

Further Reading

External Links

NIEM & Information Exchanges

January 24, 2017

Preamble

The objective of the National Information Exchange Model (NIEM) is to provide a “dictionary of agreed-upon terms, definitions, relationships, and formats that are independent of how information is stored in individual systems.”


NIEM’s model makes no difference between data and information (Alfred Jensen)

For that purpose NIEM’s model combines commonly agreed core elements with community-specific ones. Weighted against the benefits of simplicity, this architecture overlooks critical distinctions:

  • Inputs: Data vs Information
  • Dictionary: Lexicon and Thesaurus
  • Meanings: Lexical Items and Semantics
  • Usage: Roots and Aspects

That shallow understanding of information significantly hinders the exchange of information between business or institutional entities across overlapping domains.

Inputs: Data vs Information

Data is made of unprocessed observations, information makes sense of data, and knowledge makes use of information. Given that NIEM is meant to support exchanges between business or institutional users, it should have no concern with data mining or knowledge management.

As an exchange, NIEM should have no concern with data mining or knowledge management.

The problem is that, as conveyed by “core of data elements that are commonly understood and defined across domains, such as person, activity, document, location”, NIEM’s model makes no explicit distinction between data and information.

As a corollary, it implies that data may not only be meaningful, but universally so, which leads to a critical trap: as substantiated by data analytics, data is not supposed to mean anything before being processed into information; to keep with the examples, even if the definitions of persons and locations may not be specific, the semantics of the associated information is nonetheless set by domains, institutional, regulatory, contractual, or otherwise.

Data is meaningless, information meaning is set by semantic domains.

Not surprisingly, that medley of data and information is mirrored by NIEM’s dictionary.

Dictionary: Lexicon & Thesaurus

As far as languages are concerned, words (e.g “word”, “ξ∏¥”, “01100″) remain data items until associated with some meaning. For that reason dictionaries are built on different levels, first among them the lexical and semantic ones:

  • Lexicons take items on their words and give each of them a self-contained meaning.
  • Thesauruses position meanings within overlapping galaxies of understandings held together by the semantic equivalent of gravitational forces; the meaning of words can then be weighted by the combined semantic gravity of neighbors.

In line with its shallow understanding of information, NIEM’s dictionary only caters for a lexicon of core standalone items associated with type descriptions to be directly implemented by information systems. But for lack of a thesaurus, the dictionary cannot tackle the semantics of overlapping domains: if lexicons alone can deal with one-to-one mappings of items to meanings (a), thesauruses are necessary for shared (b) or alternative (c) mappings.


Shared or alternative meanings cannot be managed with lexicons

With regard to shared mappings (b), distinct lexical items (e.g qualification) have to be mapped to the same entity (e.g person). Whereas some shared features (e.g person’s birth date) can be unequivocally understood across domains, most are set through shared (professional qualification), institutional (university diploma), or specific (enterprise course) domains.

Conversely, alternative mappings (c) arise when the same lexical items (e.g “mole”) can be interpreted differently depending on context (e.g plastic surgeon, farmer, or secret service).
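
The difference between the two levels can be sketched with minimal data structures (domain names invented): a lexicon maps each item to a single meaning, a thesaurus qualifies meanings by semantic domain:

    # Lexicon: one-to-one mapping (a), fine for standalone items.
    lexicon = {"person_birth_date": "date a person was born"}

    # Thesaurus: the same lexical item weighted by semantic neighborhoods,
    # covering shared (b) and alternative (c) mappings.
    thesaurus = {
        "qualification": {"professional": "certified skill",
                          "university": "diploma",
                          "enterprise": "in-house course"},
        "mole": {"surgery": "skin blemish",
                 "farming": "burrowing animal",
                 "secret_service": "undercover agent"},
    }

    def meaning(item, domain=None):
        if item in lexicon:
            return lexicon[item]          # unequivocal across domains
        return thesaurus[item][domain]    # meaning set by the semantic domain

    print(meaning("person_birth_date"))
    print(meaning("mole", domain="farming"))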

Whereas lexicons may be sufficient for the use of lexical items across domains (namespaces in NIEM parlance), thesauruses are necessary if meanings (as opposed to uses) are to be set across domains. But thesauruses, being just tools, are not sufficient by themselves to deal with overlapping semantics. That can only be achieved through a conceptual distinction between lexical and semantic envelopes.

Meanings: Lexical Items & Semantics

NIEM’s dictionary organizes names depending on namespaces and relationships:

  • Namespaces: core (e.g Person) or specific (e.g Subject/Justice).
  • Relationships: types (Counselor/Person) or properties (e.g PersonBirthDate).

NIEM’s Lexicon: Core (a) and specific (b) and associated core (c) and specific (d) properties

But since lexicons know only names, the organization is not orthogonal, with lexical items mapped indifferently to types and properties. The result being that, deprived of reasoned guidelines, lexical items are charted arbitrarily, e.g:

Based on core PersonType, the Justice namespace uses three different schemes to define similar lexical items:

  • “Counselor” is described with core PersonType.
  • “Subject” and “Suspect” are both described with specific SubjectType, itself a sub-type of PersonType.
  • “Arrestee” is described with specific ArresteeType, itself a sub-type of SubjectType.

Based on core EntityType:

  • The Human Services namespace bypasses core’s namesake and introduces instead its own specific EmployerType.
  • The Biometrics namespace bypasses possibly overlapping core Measurer and BinaryCaptured and directly uses core EntityType.

Lexical items are charted arbitrarily

Lest expanding lexical items clutter up the dictionary’s semantics, some rules have to be introduced; yet, as noted above, these rules should be limited to information exchange and stop short of knowledge management.

Usage: Roots and Aspects

As far as information exchange is concerned, dictionaries have to deal with lexical and semantic meanings without encroaching on ontologies or knowledge representation. In practice that can be best achieved with dictionaries organized around roots and aspects:

  • Roots and structures (regular, black triangles) are used to anchor information units to business environments, source or destination.
  • Aspects (italics, white triangles) are used to describe how information units are understood and used within business environments.

Information exchanges are best supported by dictionaries organized around roots and aspects
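
In code, that organization could come down to separating anchors from descriptions, e.g (a sketch, with hypothetical names):

    from dataclasses import dataclass, field

    @dataclass
    class InformationUnit:
        root: str                                   # anchor to business environments
        structure: tuple = ()                       # identification features
        aspects: dict = field(default_factory=dict) # usage within each environment

    person = InformationUnit(
        root="Person",
        structure=("PersonBirthDate",),
        aspects={"justice": "subject of a case",
                 "human_services": "employer of record"})

    # Exchanges agree on roots and structures; aspects remain domain-specific.
    print(person.root, "->", person.aspects["justice"])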

As it happens that distinction can be neatly mapped to core concepts of software engineering.

Further Reading

External Links

Focus: Analysis vs Design

January 4, 2017

Preamble

Definitions should never turn into wars of words, as they should only be judged on their purpose and utility, with such assessment best achieved by comparing and adjusting the meaning of neighboring concepts with regard to the tasks at hand.


Analysis & Design as Duet (Giorgio de Chirico)

That approach can be applied to the terms “analysis” and “design” as used in systems engineering.

What: Logic & Engineering

Whatever the idiosyncrasies and fuzziness of business concerns and contexts, at the end of the day business and functional requirements of supporting systems will have to be coerced into the uncompromising logic of computers. Assuming that analysis and design are set along that path, they could be characterized accordingly.

As a matter of fact, a fact all too often ignored, a formal basis can be used to distinguish between analysis and design models, the former for the consolidation of requirements across business domains and enterprise organization, the latter for systems and software designs:

  • Business analysis models are descriptive (aka extensional); they try to put actual objects, events, and processes into categories.
  • System engineering models are prescriptive (aka intensional); they define what is expected of systems components and how to develop them.
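
The logical distinction is easy to make concrete: extensional definitions enumerate observed instances, intensional ones state a defining predicate (examples made up):

    # Analysis (extensional): categories built from actual observations.
    observed_customers = {"acme", "globex", "initech"}     # enumerated instances

    # Design (intensional): a prescriptive condition components must satisfy.
    def is_valid_order(amount, credit_limit):
        return 0 < amount <= credit_limit                  # defining predicate

    print("acme" in observed_customers)   # true because it was observed
    print(is_valid_order(50, 100))        # true because it meets the definition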

Squaring Logic with Engineering

As a confirmation of its validity, that classification along the logical basis of models can be neatly crossed with engineering concerns:

  • Applications: engineering deals with the realization of business needs expressed as use cases or users’ stories. Engineering units are self-contained with specific life-spans, and may consequently be developed on a continuous basis.
  • Architectures: engineering deals with supporting assets at enterprise level. Engineering units are associated with shared functionalities without specific life-spans, with their development subject to some phasing constraints.

That taxonomy can be used to square the understanding of analysis, designs, and architectures.

Where: Business unit or Corporate

Reversing the perspective from content to context, the formal basis of analysis and design can also be crossed with their organizational framework:

  • Analysis is to be carried out locally within business units.
  • Designs are to be set both locally for applications, and at enterprise level for architectures.

Organizational dependencies will determine the roles, responsibilities, and time-frames associated with analysis and design.

Who: Analysts, Architects, Engineers

Contents and contexts are to determine the skills and responsibilities of stakeholders, architects, analysts, and engineers. On that account:

  • Analysis should be the shared responsibility of business and system analysts.
  • Designs would be solely under the authority of architects and engineers.

The possibility for agents to collaborate and share responsibility will determine the time-frames of analysis and design.

When: Continuous or Discrete

As far as project management is concerned, time is the crux of the matter: paraphrasing Einstein, the only reason for processes [time] is so that everything doesn’t happen at once. Hence the importance of characterizing analysis and design according to the nature of their time-scale:

  • At application level analysis and design can be carried out iteratively along a continuous time-scale.
  • At enterprise level the analysis of business objectives and the design of architectures will require milestones set along discrete time-scales.

The combination of organizational and timing constraints will determine analysis and design modus operandi.

How: Agile or Phased

Finally, the distinction between analysis and design will depend on the software engineering MO, as epitomized by the agile vs phased debate:

  • The agile development model combines analysis, design, and development into a single activity carried out iteratively. It is arguably the option of choice, provided the two conditions about shared ownership and continuous delivery can be met.
  • Phased development models may rely on different arrangements but most will include a distinction between requirements analysis and software design.

That makes for an obvious conclusion: whether analysis and design are phased or carried out collaboratively, understanding their purpose and nature is a key success factor for systems and software engineering.

PS: Darwin vs Turing

As pointed out by Daniel Dennett (“From Bacteria to Bach, and Back”), the meaning of analysis and design can be neatly rooted in the theoretical foundations of evolution and computer science.

Darwin built his model of Natural Selection bottom up from the analysis of actual live beings. Roundly refuting the hypothesis of some “intelligent designer”, Darwin’s work epitomizes how ontologies built from observations (aka analysis models) can account for the origin, structure and behaviors of individuals.

Conversely, Turing’s thesis of computation is built top-down from established formal principles to software artifacts. In that case prior logical ontologies are applied to design models meant to realize intended capabilities.

Further Reading

New Year: 2016 is the One to Learn

December 15, 2016

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.


Deep Mind Learning (J.Bosh)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) and machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

The astonishing performances of Google’s AlphaGo have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.
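
A schematic rendering of that two-way loop, with invented observations, could promote frequent patterns mined bottom-up into explicit rules, and let the rules focus further mining top-down:

    from collections import Counter

    # Raw (non-symbolic) observations: "<purchase> <context>" pairs, all invented.
    observations = ["umbrella rain", "umbrella rain", "sunscreen sun",
                    "umbrella rain", "sunscreen sun", "boots rain"]

    # Bottom-up: mine implicit regularities from the observations.
    patterns = Counter(tuple(obs.split()) for obs in observations)

    # Promote frequent patterns to explicit (symbolic) rules...
    rules = [f"buy {item} if {ctx}" for (item, ctx), n in patterns.items() if n >= 2]

    # ...and let the rules guide further mining, top-down.
    focus = {ctx for (item, ctx), n in patterns.items() if n >= 2}
    print(rules)   # e.g ['buy umbrella if rain', 'buy sunscreen if sun']
    print(focus)   # contexts worth more observation: {'rain', 'sun'}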

From consumers’ behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.


iStar modeling puts the focus on communication (N. Rockwell)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to breach the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting apart frictions between specific methods, the benefits of agile principles and practices are now well recognized, contingent on the limits of the agile scope. Summarily, agile development is at its best when requirements capture and analysis can be woven with development and tests. The question remains of what happens when requirements are to be dealt with separately.

The iStar answer shares with agile a focus on collaboration and doesn’t take sides for business (e.g users’ stories) or systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) with a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges: e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g appraise damages; qualifications (soft goals), e.g fair appraisal; and resources, e.g premium payment.

Actors & Dependencies

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope:

  • Goal (strong) dependency: assertions about actual state of affairs: object, activity, or expectations.
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
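
For illustration, the four dependency types and the car-repair example can be given a minimal representation (a sketch, not the iStar metamodel):

    from dataclasses import dataclass

    @dataclass
    class Dependency:
        depender: str
        dependee: str
        kind: str        # "goal", "soft_goal", "task", or "resource"
        subject: str

    claim_handling = [
        Dependency("Car Owner", "Body Shop", "goal", "car repaired"),
        Dependency("Car Owner", "Appraiser", "soft_goal", "fair appraisal"),
        Dependency("Insurance Co", "Appraiser", "task", "appraise damages"),
        Dependency("Body Shop", "Insurance Co", "resource", "premium payment"),
    ]

    soft = [d for d in claim_handling if d.kind == "soft_goal"]
    print([d.subject for d in soft])   # expectations, as opposed to states of affairs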

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, strong dependencies being associated with deontic rules, soft dependencies with alethic ones. That would give the distinction a formal logical basis.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML use cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.


iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly in the definition of primary and secondary roles (aka actors), intents (business), and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if the logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor the nature of capabilities, e.g strategic and operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared intent on conceptual modeling; as a corollary, these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems’ functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links

Business Agility vs Systems Entropy

November 28, 2016

Synopsis

As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.


Agility & Orientation (Lazlo Moholy-Nagy)

Of particular interest for today’s business operational decision-making is the orientation step, i.e the actual positioning of actors and the associated cognitive representations; the point being to use AI deep learning capabilities to surmise opponents’ plans and misdirect their anticipations. That new dimension and its focus on information bring back cybernetics as a tool for enterprise governance.

In the Loop: OODA & Information Processing

Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.

OODA loop and its actual (red) and symbolic (blue) contexts.

One step further, contexts may be readily matched with system descriptions:

  • Business contexts (territories) for observations.
  • Models of business objects (maps) for orientation.
  • Business logic (objectives) for decisions.
  • Business processes (supporting systems) for action.

The OODA loop and System Perspectives
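
That matching can be written down as a bare OODA skeleton, each step standing in for the corresponding system description (all functions are placeholders):

    def observe(territory):            # business contexts -> data
        return {"event": "demand spike", "where": territory}

    def orient(data, maps):            # models of business objects -> analytics
        return {**data, "segment": maps.get(data["where"], "unknown")}

    def decide(picture, objectives):   # business logic -> decision
        return "scale up" if picture["segment"] in objectives else "hold"

    def act(decision):                 # business processes -> action
        print("executing:", decision)

    maps, objectives = {"web": "online retail"}, {"online retail"}
    act(decide(orient(observe("web"), maps), objectives))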

That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering.

Architectures & Business Agility

Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.

Architecture consistency (left) is supposed to be achieved by systems engineering outside the OODA loop:

  • Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
  • Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.

Functional architecture (right) is to bridge the gap between technical and software architectures and provide for operational coupling.

Business Agility: systems architectures and business operations

Operational coupling depends on functional architecture and is carried out within the OODA loop. The challenge is to change tack on-the-fly with minimum friction between actual and symbolic contexts, i.e:

  • Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
  • Departures between business logic (objectives and decisions) and business processes (systems and actions).

When positive, operational coupling associates business agility with its architecture counterparts, namely plasticity and versatility; when negative, it suffers from frictions, or what cybernetics calls entropy.

Systems & Entropy

Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.
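
Taking the thermodynamic metaphor at face value, a Shannon-style proxy can be computed on the distribution of states of a representation: the more uncertain the surrogate, the higher the entropy (distributions invented):

    from math import log2

    def entropy(distribution):
        """Shannon entropy H = -sum(p * log2(p)) of a state distribution."""
        return -sum(p * log2(p) for p in distribution if p > 0)

    # Inside: a crisp view of the enterprise's own state (low entropy)...
    inside = [0.9, 0.05, 0.05]
    # ...outside: the blurred view exported to competitors (high entropy).
    outside = [0.25, 0.25, 0.25, 0.25]
    print(round(entropy(inside), 2), entropy(outside))   # ~0.57 vs 2.0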

With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).

So long as architectures and operations were set along different time-frames (e.g strategic and tactical), cybernetics was of limited relevancy. But the seamless integration of data analytics, operational decision-making, and IT supporting systems puts a new light on the role of entropy, as illustrated by Boyd’s OODA loop and its orientation component.

Orientation & Agility

While much has been written about how data analytics and operational decision-making can be neatly and easily fitted into the OODA paradigm, particular attention is to be paid to orientation.

As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: the positioning of an agent with regard to external (e.g spatial) coordinates, possibly qualified with the agent’s abilities to observe, move, or act.
  • Symbolic: the positioning of an agent with regard to his own internal (e.g beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.

That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.

Agility vs Entropy

Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on enterprise ability to export entropy, that is to decrease confusion and disorder inside, and increase it outside.

Working on that assumption, one should first characterize the flows of information to be considered:

  • Territories and observations: identification of business objects and events, collection and analysis of associated data.
  • Maps and orientations: structured and consistent description of business domains.
  • Objectives and decisions: structured and consistent description of business activities and rules.
  • Systems and actions: business processes and capabilities of supporting systems.

Static assessment of technical and software architectures for respectively observation and decision

Then, a static assessment of information flows would start with the standing of technical and software architectures with regard to competition:

  • Technical architecture: how the alignment of operations and resources facilitate actions and observations.
  • Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.

A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:

  • How the mapping of territories’ identities and features help observation and orientation.
  • How decision-making and the realization of business objectives are supported by processes’ designs.

Dynamic assessment of decision-making and the realization of business objectives as supported by processes’ designs.

Assuming a corporate cousin of Maxwell’s demon with deep learning capabilities standing at the gates in its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e enterprise representations) and increase external complexity (i.e competitors’ representations).

Further Readings

Business Agility & the OODA Loop

November 21, 2016

Preamble

The OODA (Observation, Orientation, Decision, Action) loop is a real-time decision-making paradigm developed in the sixties by Colonel John Boyd from his experience as a fighter pilot and military strategist.


How to get inside opponent’s loop (Lazlo Moholy-Nagy)

The relevancy of OODA for today’s operational decision-making comes from the seamless integration of IT systems with business operations and the resulting merits of agile development processes.

Business: End of Discrete Time-Frames

Business governance used to be phased: analyze the market, select opportunities, build capabilities, launch operations. No more. With the melting of the fences between actual and symbolic realms, periodic transitional events have lost most of their relevancy. Deprived of discrete and robust time-frames, the weaving of observed facts with business plans has to be managed on the fly. Success now comes from continuous readiness, a quicker tempo, and the ability to operate inside adversaries’ time-scales, for defense (forcing competitors out of favorable positions) as well as offense (getting a competitive edge). Hence the reference to dogfights.

Dogfights & Agile Primacy

John Boyd’s train of thought started with the observation that, despite the apparent superiority of the Soviet MiG-15 over the US F-86 during the Korean War, US fighters stood their ground. From that factual observation it took Boyd’s comprehensive engineering work to demonstrate that, as far as dogfights were concerned, fast transients between maneuvers (aka agility) were more important than technical capabilities. Pushed up the Pentagon’s reluctant ladders by Boyd’s sturdy determination, that conclusion has had wide-ranging consequences in the design of USAF fighters and pilot training for the following generations. Its influence also spread to management, even if theories’ turnover is much faster there, and shelf-life much shorter.

Nowadays, with the accelerated integration of business processes with IT systems, agility is making a comeback from the software engineering corner. Reflecting business and IT convergence, principles like iterative development, just-in-time delivery, and lean processes, all epitomized by the agile software development model, are progressively mingling into business practices with strong resemblances to dogfights; and the resemblances are not only symbolic.

IT Systems & Business Competition

While some similarities between dogfights and business competition may seem metaphorical, one critical aspect is all too real, namely the increasing importance of supporting machines, IT systems or fighter jets.

Basically, IT systems, like fighters’ electronics, are tasked with observing environments, analyzing changes in relation to positions and objectives, and supporting decision-making. But today’s systems go further, with two qualitative leaps:

  • The seamless integration of physical and symbolic flows lets systems manage some overlapping between supporting decisions and carrying out actions.
  • Due to their artificial intelligence capabilities, systems can learn on-the-job and improve their performances in real-time feedback loops.

When combined, these two trends have drastic impact on the way machines can support human activities in real-time competitive situations. More to the point, they bring new light on business agility.

Business Agility

As illustrated by the radical transformation of fighter cockpits, the merging of analog and digital flows leaves little room for human mediation: data must be processed into information and presented instantly along two critical dimensions, one for decision-making, the other for information life-cycle:

  • Man/machine interfaces have to materialize the merging of actual and symbolic realms so as to support just-in-time decision-making.
  • The replacement of phased selected updates of environment data by continuous changes in raw and massive data means that the status of information has to be incorporated with the information itself, yet without impairing decision-making.

Beyond obvious differences between dogfights and business competition, that double exigency is to characterize business agility:

  1. Instant understanding of changes in business opportunities (Observation).
  2. Simultaneous assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations (Orientation).
  3. Weighting of options with regard to enterprise capabilities and broader objectives (Decision).
  4. Carrying out of decisions within the relevant time-span (Action).

That understanding of business agility is to be compared with its development and architecture cousins. Yet it doesn’t seem to add much to data analytics and operational decision-making. That is until the concept of orientation is reassessed.

Agility & Orientation: Task vs Tack

To begin with basics, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: a position with regard to external (e.g spatial) coordinates, possibly qualified with abilities to observe, move, or act.
  • Symbolic: a position with regard to internal (e.g beliefs or aims) references, possibly mixed with known or presumed orientation of other agents, opponents or associates.

When business is considered, data analytics is supposed to deal comprehensively and accurately with markets’ actual orientations. But the symbolic facet is left largely unexplored.

Boyd’s contribution is to bring together both aspects and combine them into actual practice, namely how to foretell the tack of your opponents from their actual tracks as well as their surmised plans, while fooling them about your own moves, actual or planned.

Such ambitions, once out of reach, can now be fulfilled thanks to the combination of big data, artificial intelligence, and the exponential growth of computing power.

Further Readings