Archive for the ‘Smart Systems’ Category

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome across an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources cope with users’ behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements are satisfied as well as users’ expectations for service, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take these differences in scope and nature into account, from bounded and defined specifications to boundless, fuzzy, and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter, together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need for some “manual” teaching. That hurdle may be overcome by deep learning’s ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
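The log-mining step above can be sketched with a crude, non-deep stand-in: instead of a trained network, a simple frequency count flags rare event records as candidate critical cases. The log records, component names, and threshold below are all invented for illustration.

```python
from collections import Counter

# Hypothetical log records: (component, event) pairs extracted from a system log.
log = [
    ("billing", "invoice_ok"), ("billing", "invoice_ok"),
    ("billing", "invoice_ok"), ("auth", "login_ok"),
    ("auth", "login_ok"), ("billing", "rounding_overflow"),
    ("auth", "token_expired_midflight"),
]

counts = Counter(log)
total = sum(counts.values())

# Flag rare events as candidate critical test cases: the rarer the event,
# the more likely it marks a circumstance the scripted tests have missed.
threshold = 0.2
candidates = sorted(e for e, c in counts.items() if c / total < threshold)
print(candidates)
```

A deep learning pipeline would replace the frequency count with a model trained on event sequences, but the principle is the same: let recorded behavior, not prior specification, nominate the critical cases.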

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed to the policy module in charge of defining test cases.
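A minimal sketch of that feedback loop, with toy observations and a keyword-overlap stand-in for the relevancy assessment (all names, data, and the 0.5 cutoff are hypothetical):

```python
# Hypothetical feedback loop: raw business observations are scored against
# an objective, and only the relevant ones feed the policy module in charge
# of defining test cases.

def relevance(observation, objective):
    # Toy assessment: overlap between observation features and objective keywords.
    return len(observation & objective) / len(objective)

def policy(relevant_observations):
    # The policy module turns relevant observations into named test cases.
    return [f"test_{'_'.join(sorted(obs))}" for obs in relevant_observations]

objective = {"payment", "refund"}
raw_data = [{"payment", "mobile"}, {"weather"}, {"refund", "late"}]

relevant = [obs for obs in raw_data if relevance(obs, objective) >= 0.5]
print(policy(relevant))  # ['test_mobile_payment', 'test_late_refund']
```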

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning, combining random simulation with automated reinforcement, is precisely what sets deep learning apart.
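That combination of random simulation and automated reinforcement can be illustrated, well short of deep learning, with an epsilon-greedy scheme: strategies for simulating erratic user behavior are chosen partly at random and reinforced whenever a simulated session exposes a fault. Strategy names and fault rates are invented:

```python
import random

# Hedged sketch: reinforcement over hypothetical user-behavior simulation
# strategies; a strategy is rewarded whenever its session exposes a fault.
random.seed(7)

fault_rates = {"random_clicks": 0.05, "rage_refresh": 0.6, "back_button_loop": 0.1}
estimates = {s: 0.0 for s in fault_rates}  # learned reward estimates
plays = {s: 0 for s in fault_rates}

for step in range(300):
    if random.random() < 0.3:                 # explore: random simulation
        s = random.choice(list(fault_rates))
    else:                                     # exploit: best strategy so far
        s = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < fault_rates[s] else 0.0
    plays[s] += 1
    estimates[s] += (reward - estimates[s]) / plays[s]  # incremental mean

print(max(estimates, key=estimates.get))
```

Without any prior model of user behavior, the loop converges on the behavior pattern most likely to thwart the design, which is the point of charting unknown unknowns by simulation rather than by reasoned test cases.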

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), user interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, the door is opened to feedback, e.g. unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading


Alternative Facts & Augmented Reality

February 5, 2017

Preamble

Coming alongside the White House’s creative use of facts, the upcoming Snap IPO brings another perspective on reality, with its star product Snapchat integrating augmented reality (AR) with media.


Truth in the eye of the beholder (Juan Munoz)

Whatever the purpose, the “alternative facts” favored by the White House communication detail bring to the fore two related issues of present-day relevance: virtual and augmented reality on one hand, the actuality of George Orwell’s Newspeak on the other.

Facts and Fiction

To begin with, facts are not given but observed, and that can only be achieved through a mix of conceptual and technical apparatus, the former to design fact-finding vessels, the latter to fill them with actual observations. Based on that understanding, alternatives are less about the facts themselves than about the apparatuses used to collect them, which may be trustworthy, faulty, or deceitful. Setting flaws aside, trust is also what distinguishes augmented and virtual reality:

  • Augmented reality (AR) technologies operate on apparatuses that combine observation and analysis before adding layers of information.
  • Virtual reality (VR) technologies simply overlook the whole issue of reality and observation, and are only concerned with the design of trompe-l’œils.

The contrast between facts (AR) and fiction (VR) may account for the respective applications and commercial advances: whereas augmented reality is making rapid inroads in business applications, its virtual cousin is still testing the water in games. More significantly perhaps, the comparison points to a somewhat unexpected difference in the role of language: necessary for the establishment of facts, accessory for the creation of fictions.

Speaking of Alternative Facts

As illustrated (pun intended) by virtual reality, fiction can do without words, which is not the case for facts. As a matter of fact (intended again), even facts can be fictional, as epitomized by Orwell’s Newspeak, the language used by the totalitarian state in his 1949 novel Nineteen Eighty-Four. Figuratively speaking, that language may be likened to a linguistic counterpart of virtual reality, as its purpose is to bypass the issue of trustworthy discourse about reality by introducing narratives wholly detached from actual observations. And that’s when fiction catches up with reality: not much stretch of imagination is needed to recognize a similar scheme in current White House comments.

Language Matters

As far as humans are concerned, reality comes with semantic and social dimensions that can only be carried out through language. In other words truth is all about the use of language with regard to purpose: communication, information, or knowledge. Taking Trump’s inauguration crowd for example:


Data come from observations, Information is Data put in form, Knowledge is Information put to use.

  • Communication: language is used to exchange observations associated to immediate circumstances (the place and the occasion).
  • Information: language is used to map observations to mental representations and operations (estimates for the size of the audience).
  • Knowledge: language is used to associate information to purposes through categories and concepts detached from the original circumstances (comparison of audiences for similar events and political conclusions).

Augmented Reality devices on that occasion could be used to tally people on viewed portions of the audience (fact), figure out estimates for the whole audience (information), or decide on the best itineraries back home (knowledge). By contrast, Virtual Reality (aka “alternative facts”) could only be used at communication level to deceive the public.
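A toy rendition of those levels on the crowd example, with invented figures: tallies of viewed portions (data) are extrapolated into a whole-audience estimate (information), then related to past events detached from the occasion (knowledge):

```python
# Toy pipeline for the crowd example. All figures are hypothetical.

def to_information(tallies, coverage):
    # Information: data put in form, i.e. sampled counts extrapolated
    # to the whole audience given the fraction actually viewed.
    return round(sum(tallies) / coverage)

def to_knowledge(estimate, past_events):
    # Knowledge: information put to use, i.e. the estimate compared
    # with categories detached from the original circumstances.
    return {name: estimate - size for name, size in past_events.items()}

tallies = [12_000, 9_500, 8_500]   # data: counts on viewed portions
estimate = to_information(tallies, coverage=0.25)
print(estimate)                                              # 120000
print(to_knowledge(estimate, {"previous_inauguration": 180_000}))
```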

Further Reading

Things Speaking in Tongues

January 25, 2017

Preamble

Speaking in tongues (aka Glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such an experience is best (if at all) understood as the actual speaking of a gutted language, with grammatical ghosts inhabited by meaningless signals.


Do You Hear What I Say ? (Herbert List)

Usually set in religious contexts or circumstances, speaking in tongues looks like souls having their own private conversations. Yet, contrary to extraterrestrial languages, the phenomenon is not fictional and could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception, computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has put a new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In return, computers have greatly improved the means to experiment with and implement new approaches.

In recent years, advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation, the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Network

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former’s unfathomable schemes into smart conceptual tutors for the latter.

In contrast to their long and successful track record with computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has forged ahead on the back of neural networks fueled by increasing computing power. But the rift between processing and understanding natural languages is now being closed by deep learning technologies. And with the leverage of rules engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.

From Communication to Cognition

From a functional point of view, natural languages can be likened to money, first as medium of exchange, then as unit of account, finally as store of value. Along that understanding natural languages would be used respectively for communication, information processing, and knowledge representation. And like the economics of money, these capabilities are to be associated to phased cognitive developments:

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.

As it happens, advances in technologies seem to follow these cognitive distinctions, with the internet of things (IoT) for data communications, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet paces differ significantly: with regard to language processing (communication and information), deep learning is bringing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performances still lag a third below: for computers knowledge to be properly scaled, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted with a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what’s the language?
  • Is that a speech, and if so, who’s speaking?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back-burner and focus on technological issues. Nonetheless, even the language-first approach has been challenging, as illustrated by the difference in achievements between processing and understanding language technologies.

Recognizing a language has long been the job of parsers looking for the corresponding syntax structures, the hitch being that a parser has to know beforehand what it’s looking for. Parsers’ parsers using meta-languages have been effective with programming languages but are quite useless with natural ones without some universal grammar rules to sort out Babel’s conversations. But the “burden of proof” can now be reversed: compared to rules engines, neural networks with deep learning capabilities don’t have to start with any knowledge. As illustrated by Google’s Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations without prior specific grammatical knowledge.
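That reversed burden of proof can be illustrated on a very small scale: rather than a grammar handed to a parser, a character n-gram profile is learned from raw samples and then used to tell languages apart. This is a statistical toy rather than a neural network, and the two samples below are obviously far too small for real proficiency:

```python
from collections import Counter

def profile(text, n=3):
    # Learn a character n-gram frequency profile from a raw sample;
    # no grammar or vocabulary is supplied beforehand.
    t = f" {text.lower()} "
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def similarity(p, q):
    # Overlap of n-gram counts between two profiles.
    return sum(min(p[g], q[g]) for g in p)

samples = {
    "english": profile("the quick brown fox jumps over the lazy dog the end"),
    "french": profile("le renard brun saute par dessus le chien paresseux"),
}

def identify(sentence):
    p = profile(sentence)
    return max(samples, key=lambda lang: similarity(samples[lang], p))

print(identify("the dog was lazy"))        # english
print(identify("le chien est paresseux"))  # french
```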

To conclude, “Translation System” may even be a misnomer, as it implies language-to-language mappings when in principle such systems can be fed with raw sounds and still sort the wheat of meanings from the chaff of noise. And, who knows, eventually be able to decrypt the language of tongues.

Further Reading

External Links

NIEM & Information Exchanges

January 24, 2017

Preamble

The objective of the National Information Exchange Model (NIEM) is to provide a “dictionary of agreed-upon terms, definitions, relationships, and formats that are independent of how information is stored in individual systems.”


NIEM’s model makes no difference between data and information (Alfred Jensen)

For that purpose NIEM’s model combines commonly agreed core elements with community-specific ones. Weighted against the benefits of simplicity, this architecture overlooks critical distinctions:

  • Inputs: Data vs Information
  • Dictionary: Lexicon and Thesaurus
  • Meanings: Lexical Items and Semantics
  • Usage: Roots and Aspects

That shallow understanding of information significantly hinders the exchange of information between business or institutional entities across overlapping domains.

Inputs: Data vs Information

Data is made of unprocessed observations, information makes sense of data, and knowledge makes use of information. Given that NIEM is meant to be an exchange between business or institutional users, it should have no concern with data mining or knowledge management.


The problem is that, as conveyed by “core of data elements that are commonly understood and defined across domains, such as person, activity, document, location”, NIEM’s model makes no explicit distinction between data and information.

As a corollary, it implies that data may not only be meaningful, but universally so, which leads to a critical trap: as substantiated by data analytics, data is not supposed to mean anything before being processed into information; to keep with the examples, even if the definitions of persons and locations may not be specific, the semantics of the associated information is nonetheless set by domains, institutional, regulatory, contractual, or otherwise.


Data is meaningless, information meaning is set by semantic domains.

Not surprisingly, that medley of data and information is mirrored by NIEM’s dictionary.

Dictionary: Lexicon & Thesaurus

As far as languages are concerned, words (e.g. “word”, “ξ∏¥”, “01100”) remain data items until associated with some meaning. For that reason dictionaries are built on different levels, first among them lexical and semantic ones:

  • Lexicons take items at their word and give each of them a self-contained meaning.
  • Thesauruses position meanings within overlapping galaxies of understandings held together by the semantic equivalent of gravitational forces; the meaning of words can then be weighted by the combined semantic gravity of neighbors.

In line with its shallow understanding of information, NIEM’s dictionary only caters for a lexicon of core standalone items associated with type descriptions to be directly implemented by information systems. But due to the absence of a thesaurus, the dictionary cannot tackle the semantics of overlapping domains: if lexicons alone can deal with one-to-one mappings of items to meanings (a), thesauruses are necessary for shared (b) or alternative (c) mappings.


Shared or alternative meanings cannot be managed with lexicons

With regard to shared mappings (b), distinct lexical items (e.g. qualification) have to be mapped to the same entity (e.g. person). Whereas some shared features (e.g. person’s birth date) can be unequivocally understood across domains, most are set through shared (professional qualification), institutional (university diploma), or specific (enterprise course) domains.

Conversely, alternative mappings (c) arise when the same lexical items (e.g “mole”) can be interpreted differently depending on context (e.g plastic surgeon, farmer, or secret service).
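The three mapping schemes can be sketched with elementary data structures; the items, domains, and meanings below are hypothetical:

```python
# (a) Lexicon: each item carries a single self-contained meaning.
lexicon = {"birth_date": "date a person was born"}

# (b) Shared mapping: distinct lexical items, each set through its own
# domain, point to the same entity; a thesaurus is needed to record this.
thesaurus_shared = {
    ("qualification", "professional"): "person",
    ("diploma", "university"): "person",
    ("course", "enterprise"): "person",
}

# (c) Alternative mapping: one lexical item, meaning set by context.
thesaurus_alt = {
    "mole": {"plastic surgeon": "skin blemish",
             "farmer": "burrowing animal",
             "secret service": "infiltrated agent"},
}

def meaning(item, context=None):
    if item in lexicon:                    # (a): context-free lookup
        return lexicon[item]
    if item in thesaurus_alt and context:  # (c): weighted by context
        return thesaurus_alt[item][context]
    return None

print(meaning("birth_date"))
print(meaning("mole", context="farmer"))
print(thesaurus_shared[("diploma", "university")])
```

A plain lexicon can only express the first structure; the other two require keys qualified by domain or context, which is exactly what NIEM’s dictionary lacks.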

Whereas lexicons may be sufficient for the use of lexical items across domains (namespaces in NIEM parlance), thesauruses are necessary if meanings (as opposed to uses) are to be set across domains. But thesauruses, being just tools, are not sufficient by themselves to deal with overlapping semantics. That can only be achieved through a conceptual distinction between lexical and semantic envelopes.

Meanings: Lexical Items & Semantics

NIEM’s dictionary organizes names depending on namespaces and relationships:

  • Namespaces: core (e.g Person) or specific (e.g Subject/Justice).
  • Relationships: types (Counselor/Person) or properties (e.g PersonBirthDate).

NIEM’s Lexicon: Core (a) and specific (b) and associated core (c) and specific (d) properties

But since lexicons know only names, the organization is not orthogonal, with lexical items mapped indifferently to types and properties. The result is that, deprived of reasoned guidelines, lexical items are charted arbitrarily, e.g.:

Based on core PersonType, the Justice namespace uses three different schemes to define similar lexical items:

  • “Counselor” is described with core PersonType.
  • “Subject” and “Suspect” are both described with specific SubjectType, itself a sub-type of PersonType.
  • “Arrestee” is described with specific ArresteeType, itself a sub-type of SubjectType.

Based on core EntityType:

  • The Human Services namespace bypasses core’s namesake and introduces instead its own specific EmployerType.
  • The Biometrics namespace bypasses possibly overlapping core Measurer and BinaryCaptured and directly uses core EntityType.

Lexical items are charted arbitrarily

Lest expanding lexical items clutter up the dictionary’s semantics, some rules have to be introduced; yet, as noted above, these rules should be limited to information exchange and stop short of knowledge management.

Usage: Roots and Aspects

As far as information exchange is concerned, dictionaries have to deal with lexical and semantic meanings without encroaching on ontologies or knowledge representation. In practice that can be best achieved with dictionaries organized around roots and aspects:

  • Roots and structures (regular, black triangles) are used to anchor information units to business environments, source or destination.
  • Aspects (italics, white triangles) are used to describe how information units are understood and used within business environments.

Information exchanges are best supported by dictionaries organized around roots and aspects

As it happens that distinction can be neatly mapped to core concepts of software engineering.

Further Reading

External Links

New Year: 2016 is the One to Learn

December 15, 2016

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.


Deep Mind Learning (J.Bosh)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) and machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

Google’s AlphaGo astonishing performances have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.

From consumers behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.

Further Reading

External Links

Business Agility vs Systems Entropy

November 28, 2016

Synopsis

As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.


Agility & Orientation (Lazlo Moholo-Nagy)

Of particular interest for today’s business operational decision-making is the orientation step, i.e. the actual positioning of actors and the associated cognitive representations; the point being to use AI deep learning capabilities to surmise opponents’ plans and misdirect their anticipations. That new dimension and its focus on information bring back cybernetics as a tool for enterprise governance.

In the Loop: OODA & Information Processing

Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.


OODA loop and its actual (red) and symbolic (blue) contexts.

One step further, contexts may be readily matched with systems description:

  • Business contexts (territories) for observations.
  • Models of business objects (maps) for orientation.
  • Business logic (objectives) for decisions.
  • Business processes (supporting systems) for action.

The OODA loop and System Perspectives

That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering.
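The mapping above can be sketched as a single pass through the loop, with toy structures standing in for territories (business contexts), maps (business objects), objectives (business logic), and supporting systems; all names and figures are hypothetical:

```python
def observe(territory):
    # Observation: collect data from the business context.
    return {"demand": territory["orders"] - territory["stock"]}

def orient(data, maps):
    # Orientation: position the data against the symbolic map.
    return "shortage" if data["demand"] > maps["safety_margin"] else "steady"

def decide(orientation, objectives):
    # Decision: apply business logic to the current orientation.
    return objectives[orientation]

def act(decision, systems):
    # Action: carried out by supporting systems (here, a log of process calls).
    systems.append(decision)
    return systems

territory = {"orders": 120, "stock": 80}
maps = {"safety_margin": 25}
objectives = {"shortage": "trigger_replenishment", "steady": "hold"}
systems = []

data = observe(territory)
print(act(decide(orient(data, maps), objectives), systems))
```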

Architectures & Business Agility

Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.

Architecture consistency (left) is supposed to be achieved by systems engineering out of the OODA loop:

  • Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
  • Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.

Functional architecture (right) is to bridge the gap between technical and software architectures and to provide for operational coupling.


Business Agility: systems architectures and business operations

Operational coupling depends on functional architecture and is carried out within the OODA loop. The challenge is to change tack on the fly with minimal friction between actual and symbolic contexts, i.e.:

  • Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
  • Departure between business logic (objectives and decisions) and business processes (systems and actions).

When positive, operational coupling associates business agility with its architecture counterpart, namely plasticity and versatility; when negative, it suffers from frictions, or what cybernetics calls entropy.

Systems & Entropy

Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.

With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).

So long as architectures and operations were set along different time-frames (e.g. strategic and tactical), cybernetics was of limited relevance. But the seamless integration of data analytics, operational decision-making, and IT supporting systems puts a new light on the role of entropy, as illustrated by Boyd’s OODA loop and its orientation component.
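One hedged way to put a number on the faults between environments and their symbolic surrogates is to measure the divergence between the distribution recorded in the map and the one freshly observed in the territory; Kullback-Leibler divergence is used here as a stand-in for the cybernetic notion of entropy, with invented market-share figures:

```python
import math

def divergence(observed, modeled):
    # Kullback-Leibler divergence D(observed || modeled), in bits: how much
    # information is lost when the symbolic map stands in for the territory.
    return sum(p * math.log2(p / modeled[k]) for k, p in observed.items() if p > 0)

# Market shares as modeled (map) vs as freshly observed (territory).
modeled  = {"web": 0.5, "mobile": 0.3, "store": 0.2}
observed = {"web": 0.3, "mobile": 0.6, "store": 0.1}

print(round(divergence(observed, modeled), 3))  # 0.279
```

A divergence of zero would mean orientation and observation are perfectly congruent; a growing figure signals that decisions are being made on a map that no longer matches the territory.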

Orientation & Agility

While much has been written about how data analytics and operational decision-making can be neatly and easily fitted in the OODA paradigm, a particular attention is to be paid to orientation.

As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: the positioning of an agent with regard to external (e.g. spatial) coordinates, possibly qualified with the agent’s abilities to observe, move, or act.
  • Symbolic: the positioning of an agent with regard to his own internal (e.g beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.

That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.

Agility vs Entropy

Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on enterprise ability to export entropy, that is to decrease confusion and disorder inside, and increase it outside.

Working on that assumption, one should first characterize the flows of information to be considered:

  • Territories and observations: identification of business objects and events, collection and analysis of associated data.
  • Maps and orientations: structured and consistent description of business domains.
  • Objectives and decisions: structured and consistent description of business activities and rules.
  • Systems and actions: business processes and capabilities of supporting systems.

Static assessment of technical and software architectures, for observation and decision respectively

Then, a static assessment of information flows would start with the standing of technical and software architectures with regard to competition:

  • Technical architecture: how the alignment of operations and resources facilitates actions and observations.
  • Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.

A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:

  • How the mapping of territories’ identities and features help observation and orientation.
  • How decision-making and the realization of business objectives are supported by processes’ designs.

Dynamic assessment of decision-making and the realization of business objectives as supported by processes’ designs.

Assuming a corporate cousin of Maxwell’s demon with deep learning capabilities standing at the gates of its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e. enterprise representations) and increase external complexity (i.e. competitors’ representations).

Further Readings

Things Behavior & Social Responsibility

October 27, 2016

Contrary to security breaches and information thefts that can be kept from public eyes, crashes of business applications or internet access are painfully plain for whoever is concerned, which means everybody. And as illustrated by the latest episode of massive distributed denial of service (DDoS), they often come as confirmation of hazards long calling for attention.


Device & Social Identity (Wayne Miller)

Things Don’t Think

To be clear, orchestrated attacks through hijacked (if unaware) computers have been a primary concern for internet security firms for quite some time, bringing about comprehensive and continuous reinforcement of software shields consolidated by systematic updates.

But while the right governing hand was struggling to make a safer net, the other hand thoughtlessly brought connected objects into a supposedly new brand of internet. As if adding things with software brains cut to the bone could have made networks smarter.

And that’s the catch because the internet of things (IoT) is all about making room for dumb ancillary objects; unfortunately, idiots may have their use for literary puppeteers with canny agendas.

Think Again, or Not …

For old-timers with some memory of fingering through library card catalogs, googling topics may have looked like a dream come true: knowledge at one’s fingertips, immediately and comprehensively. But that vision has never been more than a fleeting glimpse in a symbolic world; in actuality, even at its semantic best, the web was to remain a trove of information to be sifted by knowledge workers safely seated in their gated symbolic world. Crooks of course could sneak in as knowledge workers, armed with fountain pens, but without guns covered by the second amendment.

So, from its inception, the IoT has been a paradoxical endeavor: trying to merge actual and symbolic realms in ways that bypass thinking processes and obliterate any distinction between them. For sure, that conundrum was supposed to be dealt with by artificial intelligence (AI), with neural networks and deep learning weaving semantic threads between human minds and network brains.

Not surprisingly, brainy hackers have caught sight of that new wealth of chinks in internet armour and swiftly added brute force to their paraphernalia.

But in addition to the technical aspect of internet security, the recent Dyn DDoS attack sheds light on its social perspective.

Things Behavior & Social Responsibility

As long as it remained intrinsically symbolic, the internet was able to carry on with its utopian principles despite bumpy business environments. But things have drastically changed the situation, with tectonic frictions between symbolic and real plates wreaking havoc with any kind of smooth transition to internet.X, whatever X may be.

Yet, as the diagnosis is clear, so should be the remedy.

To begin with, the internet was never meant to become the central nervous system of human societies. That it has happened in half a generation has defied imagination and, as a corollary, sapped the validity of traditional paradigms.

As things happen, the epicenter of the paradigm collision can be clearly identified: whereas the internet is built from systems, architecture taxonomies are purely technical and ignore what should be the primary factor, namely the kind of social role a system could fulfil. That may have been irrelevant for communication networks, but it is obviously critical for social ones.

Further Reading

External Links

Brands, Bots, & Storytelling

May 2, 2016

As illustrated by the recent Mashable “pivot”, meaningful (i.e. unbranded) contents appear to be the main casualty of new communication technologies. Hopefully (sic), bots may point to a more positive perspective, at least if their want for no-nonsense gist is to be trusted.


Could bots repair gibberish ? (Latifa Echakhch)

The Mashable Pivot to “branded” Stories

Announcing Mashable’s recent pivot, Pete Cashmore (Mashable’s founder and CEO) was very candid about the motives:

“What our advertisers value most about Mashable is the same thing that our audience values: Our content. The world’s biggest brands come to us to tell stories of digital culture, innovation and technology in an optimistic and entertaining voice. As a result, branded content has become our fastest growing revenue stream over the past year. Content is now at the core of our ad offering and we plan to double down there.”

Also revealing was the semantic shift in a single paragraph: from “stories”, to “stories told with an optimistic and entertaining voice”, and finally to “branded stories”; as if there was some continuity between Homer’s Iliad and Outbrain’s gibberish.

Spinning Yarns

From Lacan to Seinfeld, it has often been said that stories are what props up our world. But that was before Twitter, Facebook, YouTube and others ruled over the waves and screens. Nowadays, under the combined assaults of smart dummies and instant messaging, stories have been forced to spin advertising schemes, their scripts replaced by subliminal cues entangled in webs of commercial hyperlinks. And yet, somewhat paradoxically, fictions may retrieve some traction (if not spirit) of their own, reprieved not so much by human cultural thirst as by smartphones’ hunger for fresh technological contraptions.

Apps: What You Show is What You Get

As far as users are concerned, apps often make phones too smart by half: with more than 100 billion apps already downloaded, users face an embarrassment of riches compounded by the inherent limitations of packed visual interfaces. Enticed by constantly renewed flows of tokens with perfunctory guidelines, human handlers can hardly separate the wheat from the chaff and have to let their choices be driven by the hypothetical wisdom of the crowd. Whatever the outcomes (crowds may be right but are often volatile), the selection process is both wasteful (choices are ephemeral, many apps are abandoned after a single use, and most are sparely used) and hazardous (too many redundant dead-ends open doors to a wide array of fraudsters). That trend is rapidly facing the physical as well as business limits of a zero-sum playground: smarter phones appear to make for dumber users. One way out of the corner would be to encourage intelligent behaviors from both parties, humans as well as devices. And that’s something that bots could help to bring about.

Bots: What You Text Is What You Get

As software agents designed to help people find their ways online, bots can be differentiated from apps on two main aspects:

  • They reside in the cloud, not on personal devices, which means that updates don’t have to be downloaded on smartphones but can be deployed uniformly and consistently. As a consequence, and contrary to apps, the evolution of bots can be managed independently of users’ whims, fostering the development of stable and reliable communication grammars.
  • They rely on text messaging to communicate with users instead of graphical interfaces and visual symbols. Compared to icons, text puts writing hands on driving wheels, leaving much less room for creative readings; given that bots are not to put up with mumbo jumbo, they will prompt users to mind their words as clearly and efficiently as possible.

Each aspect reinforces the other, making room for a non-zero playground: while the focus on well-formed expressions and unambiguous semantics is bots’ key characteristic, it could not be achieved without the benefits of stable and homogeneous distribution schemes. When both are combined they may reinstate written languages as the backbone of communication frameworks, even if it’s for the benefits of pidgin languages serving prosaic business needs.
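The contrast can be made concrete with a deliberately naive sketch of such a text bot, where a closed set of keywords stands in for a stable communication grammar; the intents, replies, and paths below are invented for illustration only:

```python
# A toy text bot: a closed grammar of intents mapped to canned replies.
# Because the vocabulary is fixed and published, users are nudged toward
# wording their requests unambiguously.
INTENTS = {
    "balance": "Your balance is available under /account/balance.",
    "order": "Please give an order number, e.g. 'order 1234'.",
    "help": "Known commands: balance, order <number>, help.",
}

def reply(message: str) -> str:
    """Return the canned reply for the first recognized keyword."""
    for keyword, answer in INTENTS.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I only understand: " + ", ".join(INTENTS)

print(reply("What is my balance?"))
print(reply("Sing me a song"))
```

The point is not the (trivial) matching logic but the division of labor: the grammar lives in the cloud, can evolve uniformly for all users, and pushes ambiguity back to the writer.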

A Literary Soup of Business Plots & Customers Narratives

Given their need for concise and unambiguous textual messages, the use of bots could bring back some literary considerations to a latent online wasteland. To be sure, those considerations are to be hard-headed, with scripts cut to the bone, plots driven by business happy ends, and narratives fitted to customers’ phantasms.

Nevertheless, good storytelling will always bring some selective edge to businesses competing for top tiers. So, and whatever the dearth of fictional depth, the spreading of bots scripts could make up some kind of primeval soup and stir the emergence of some literature untainted by its fouled nourishing earth.

Further Readings

Out of Mind Content Discovery

April 20, 2016

Content discovery and the game of Go can be used to illustrate the strengths and limits of artificial intelligence.


Now and Then: contents discovery across media and generations (Pavel Wolberg)

Game of Go: Closed Ground, Non Semantic Charts

The conclusive successes of Google’s AlphaGo against the world’s best players are best understood when related to the characteristics of the game of Go:

  • Contrary to real-life competitions, games are set on closed and standalone playgrounds detached from actual concerns. As a consequence, players (human or artificial) can factor out emotions from cognitive behaviors.
  • Contrary to games like Chess, Go’s playground is uniform and can be mapped without semantic distinctions for situations or moves. Whereas symbolic knowledge, explicit or otherwise, is still required for good performances, excellence can only be achieved through holistic assessments based on intuition and implicit knowledge.

Both characteristics fully play to the strengths of AI, in particular computing power (to explore playground and alternative strategies) and detachment (when decisions have to be taken).

Content Discovery: Open Grounds, Semantic Charts

Content discovery platforms like Outbrain or Taboola are meant to suggest further (commercial) bearings to online users. Compared to the game of Go, that mission clearly goes in the opposite direction:

  • Channels may be virtual but users are humans, with real emotions and concerns. And they are offered proxy grounds not so much to be explored as to be endlessly redefined and made more alluring.
  • Online strolls may be aimless and discoveries fortuitous, but if content discovery devices are to underwrite themselves, they must bring potential customers along monetized paths. Hence the hitch: artificial brains need some cues about what readers have in mind.

That makes content discovery a challenging task for artificial coaches as they have to usher wanderers with idiosyncratic but unknown motivations through boundless expanses of symbolic shopping fields.
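A back-of-the-envelope illustration of that hitch: matching contents by surface word overlap (Jaccard similarity) says little about what a reader has in mind. The titles and scores below are made up, and real engines are obviously more sophisticated, but they face the same semantic gap:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity: shared words divided by all distinct words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

read = "growing coffee in ethiopia"
candidates = [
    "growing your savings in ten steps",            # shares words, not intent
    "ethiopian coffee cooperatives and fair trade", # shares intent, few words
]
for title in candidates:
    print(f"{title}: {jaccard(read, title):.2f}")
```

Here the off-topic title actually scores higher (0.25 against 0.11), which is precisely the kind of cue mismatch behind the suggestions quoted below.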

What Would Eliza Say

When AI was still about human thinking, Alan Turing thought of a test that could check the ability of a machine to exhibit intelligent behaviors. At the time, available computing power was several orders of magnitude below today’s capacities, so the test was not about intelligence itself, but about the ability to conduct text-based dialogues equivalent to, or indistinguishable from, those of a human. That approach was famously illustrated by Eliza, a program able to beguile humans in conversations without any understanding of their meaning.
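Eliza’s trick, pattern matching plus pronoun reflection, fits in a few lines; the following sketch is a toy reconstruction in that spirit, not Weizenbaum’s original script:

```python
import re

# Minimal Eliza-style rules: match a pattern and echo the captured fragment
# back with pronouns swapped, without any understanding of the words.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
]

def eliza(sentence: str) -> str:
    """Answer with the first matching rule, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(eliza("I feel lost in my work"))  # Why do you feel lost in your work?
print(eliza("I am tired of apps"))      # How long have you been tired of apps?
```

The replies sound attentive while carrying no semantics at all, which was exactly Turing’s (and Weizenbaum’s) point.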

More than half a century later, here are some suggestions of leading content discovery engines:

  • After reading about the Ecuador quake or Syrian rebels, one is supposed to be interested in 8 tips to keep one’s liver healthy, or 20 reasons for unsuccessful attempts at losing weight.
  • After reading about growing coffee in Ethiopia, one is supposed to be interested in the mansions of the world’s billionaires, or a shepherd pup surviving after being lost at sea for a month.

It’s safe to assume that both would have flunked the Turing Test.

Further Reading

External Links

Selfies & Augmented Identities

March 31, 2016

As smart devices and dumb things respectively drive and feed internet advances, selfies may be seen as a minor by-product figuring the scenes between reasoning capabilities and the reality of things. But then, shouldn’t that incidental understanding be upgraded to a more meaningful one, incorporating digital hybrids into virtual reality?

Actual and Virtual Representations (N. Rockwell)

Portraits, Selfies, & Social Identities

Selfies are a good starting point given that their meteoric and wide-ranging success makes for social continuity of portraits, from timeless paintings to new-age digital images. Comparing the respective practicalities and contents of traditional and digital pictures may be especially revealing.

With regard to practicalities, selfies bring democratization: contrary to paintings, reserved to social elites, selfies let everybody have as many portraits as wished, free to be shown at will, to family, close friends, or total unknowns.

With regard to contents, selfies bring immediacy: instead of portraits conveying status and characters through calculated personal attires and contrived settings, selfies picture social identities as snapshots that capture supposedly unaffected but revealing moments, postures, entourages, or surroundings.

Those selfies’ idiosyncrasies are intriguing because they seem to largely ignore the wide range of possibilities offered by new media technologies which could (and do sometimes) readily make selfies into elaborate still lives or scenic videos.

Likewise intriguing is the fading-out of photography as a vector of social representation after the heights achieved in the second half of the 19th century: not until the internet era did photographs start again to emulate paintings as vehicles of social identity.

Those changing trends may be cleared up if mirrors are introduced in the picture.

Selfies, Mirrors, & Physical Identities

Natural or man-made, mirrors have from the origin played a critical part in self-consciousness, and more precisely in self-awareness of physical identity. Whereas portraits are social, asynchronous, and symbolic representations, mirrors are personal, synchronous, and physical ones; hence their different roles, portraits abetting social identities, and mirrors reflecting physical ones. And selfies may come as the missing link between them.

With smartphones now customarily installed as bodily extensions, selfies may morph into recurring personal reflections, transforming themselves into a crossbreed between portraits, focused on social identification, and mirrors, intent on personal identity. That understanding would put selfies on an elusive swing swaying between social representation and communication on one side, authenticity and introspection on the other side.

On that account advances in technologies, from photographs to 3D movies, would have had a limited impact on the traction from either the social or physical side. But virtual reality (VR) is another matter altogether because it doesn’t only affect the medium between social and physical aspects, but also the “very” reality of the physical side itself.

Virtual Reality: Sense & Sensibility

The raison d’être of virtual reality (VR) is to erase the perception fence between individuals and their physical environment. From that perspective VR contraptions can be seen as deluding mirrors doing for physical identity what selfies do for social ones: teleporting individual personas between environments independently of their respective actuality. The question is: could it be carried out as a whole, teleporting both physical and social identities in a single package ?

Physical identities are built from the perception of actual changes directly originated in context or prompted by our own behavior: I move of my own volition, therefore I am. Somewhat equivalently, social identities are built on representations cultivated innerly, or supposedly conveyed by aliens. Considering that physical identities are continuous and sensible, and social ones discrete and symbolic, it should be possible to combine them into virtual personas that could be teleported around packet switched networks.

But the apparent symmetry could be deceitful given that although teleporting doesn’t change meanings, it breaks the continuity of physical perceptions, which means that it goes with some delete/replace operation. On that account effective advances of VR can be seen as converging on alternative teleporting pitfalls:

  • Virtual worlds like Second Life rely on symbolic representations whatever the physical proximity.
  • Virtual apparatuses like Oculus depend solely on the merge with physical proximity and ignore symbolic representations.

That conundrum could be solved if sense and sensibility could be reunified, giving some credibility to fused physical and social personas. That could be achieved by augmented reality whose aim is to blend actual perceptions with symbolic representations.

From Augmented Identities to Extended Beliefs

Virtual apparatuses rely on a primal instinct that makes us believe in the reality of our perceptions. Concomitantly, human brains use built-in higher-level representations of the body’s physical capabilities in order to support the veracity of the whole experience. Nonetheless, if and when experiences appear to be far-fetched, brains are bound to flag the stories as delusional.

Or maybe not. Even without artificial adjuncts to brain chemistry, some kind of cognitive morphing may let the mind bypass its body limits by introducing a deceitful continuity between mental representations of physical capabilities on one hand, and purely symbolic representations on the other hand. Technological advances may offer schemes from each side that could trick human beliefs.

Broadly speaking, virtual reality schemes can be characterized as top-down: they start by setting the mind into some imaginary world and beguile it into the body envelope portraying some favorite avatar. Then, taking advantage of its earned confidence, the mind is to be tricked into a flyover that would move it seamlessly from fictional social representations into equally fictional physical ones: from believing oneself endowed with superpowers into trusting the actual reach and strength of one’s hand performing corresponding deeds. At least that’s the theory, because if such a “suspension of disbelief” is the essence of fiction and art, the practicality of its mundane actualization remains to be confirmed.

Augmented reality goes the other way and can be seen as bottom-up, relying on actual physical experiences before moving up to fictional extensions. Such schemes are meant to be fed with trusted actual perceptions adorned with additional inputs, visual or otherwise, designed on purpose. Instead of straight leaps into fiction, beliefs can be kept anchored to real situations from where they can be seamlessly led astray to unfolding wonder worlds, or alternatively ushered to further knowledge.

By introducing both continuity and new dimensions to the design of physical and social identities, augmented reality could leverage selfies into major social constructs. Combined with artificial intelligence, they could even become friendly or helpful avatars, e.g. as personal coaches or medical surrogates.

Further Readings

External Links