Brands, Bots, & Storytelling

As illustrated by the recent Mashable “pivot”, meaningful (i.e. unbranded) content appears to be the main casualty of new communication technologies. Bots, however, may point to a more positive perspective, at least if their want for no-nonsense gist is to be trusted.

Could bots restore the music of words? (Latifa Echakhch)

The Mashable Pivot to “branded” Stories

Announcing Mashable’s recent pivot, Pete Cashmore (Mashable’s founder and CEO) was very candid about the motives:

“What our advertisers value most about Mashable is the same thing that our audience values: Our content. The world’s biggest brands come to us to tell stories of digital culture, innovation and technology in an optimistic and entertaining voice. As a result, branded content has become our fastest growing revenue stream over the past year. Content is now at the core of our ad offering and we plan to double down there.”

Also revealing was the semantic shift within a single paragraph: from “stories”, to “stories told with an optimistic and entertaining voice”, and finally to “branded stories”; as if there were some continuity between Homer’s Iliad and Outbrain’s gibberish.

Spinning Yarns

From Lacan to Seinfeld, it has often been said that stories are what props up our world. But that was before Twitter, Facebook, YouTube and others ruled over the waves and screens. Nowadays, under the combined assaults of smart dummies and instant messaging, stories have been forced to spin advertising schemes, their scripts replaced by subliminal cues entangled in webs of commercial hyperlinks. And yet, somewhat paradoxically, fictions may regain some traction (if not spirit) of their own, reprieved not so much by human cultural thirst as by smartphones’ hunger for fresh technological contraptions.

Apps: What You Show is What You Get

As far as users are concerned, apps often make phones too smart by half: with more than 100 billion apps already downloaded, users face an embarrassment of riches compounded by the inherent limitations of packed visual interfaces. Enticed by constantly renewed flows of tokens with perfunctory guidelines, human handlers can hardly separate the wheat from the chaff and have to let their choices be driven by the hypothetical wisdom of the crowd. Whatever the outcomes (crowds may be right but are often volatile), the selection process is both wasteful (choices are ephemeral, many apps are abandoned after a single use, and most are sparely used) and hazardous (too many redundant dead-ends open doors to a wide array of fraudsters). That trend is rapidly reaching the physical as well as business limits of a zero-sum playground: smarter phones appear to make for dumber users. One way out of the corner would be to encourage intelligent behaviors from both parties, humans as well as devices. And that’s something that bots could help to bring about.

Bots: What You Text Is What You Get

As software agents designed to help people find their way online, bots can be differentiated from apps in two main respects:

  • They reside in the cloud, not on personal devices, which means that updates don’t have to be downloaded on smartphones but can be deployed uniformly and consistently. As a consequence, and contrary to apps, the evolution of bots can be managed independently of users’ whims, fostering the development of stable and reliable communication grammars.
  • They rely on text messaging to communicate with users instead of graphical interfaces and visual symbols. Compared to icons, text puts writing hands on driving wheels, leaving much less room for creative readings; given that bots are not to put up with mumbo jumbo, they will prompt users to mind their words as clearly and efficiently as possible.

Each aspect reinforces the other, making room for a non-zero-sum playground: while the focus on well-formed expressions and unambiguous semantics is bots’ key characteristic, it could not be achieved without the benefits of stable and homogeneous distribution schemes. Combined, they may reinstate written language as the backbone of communication frameworks, even if it’s for the benefit of pidgin languages serving prosaic business needs.
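As a toy illustration of such a communication grammar, consider a bot that only accepts a small set of well-formed textual commands and prompts users to reword anything else; the intents and patterns are hypothetical, for illustration only:

```python
import re

# A minimal sketch of a "stable communication grammar": the bot accepts
# only well-formed textual commands, so every message maps to one
# unambiguous intent. Intents and wording are purely illustrative.
GRAMMAR = {
    "greet":  re.compile(r"^(hi|hello)\b", re.I),
    "order":  re.compile(r"^order (?P<qty>\d+) (?P<item>\w+)$", re.I),
    "status": re.compile(r"^status (?P<ref>\d+)$", re.I),
}

def handle(message: str) -> str:
    """Map a user message to an intent, or ask for a well-formed rewording."""
    for intent, pattern in GRAMMAR.items():
        match = pattern.match(message.strip())
        if match:
            if intent == "order":
                return f"Ordered {match['qty']} x {match['item']}"
            if intent == "status":
                return f"Looking up order {match['ref']}"
            return "Hello! Try: 'order <qty> <item>' or 'status <ref>'"
    # Unlike icon-based interfaces, the bot can ask users to mind their words
    return "Sorry, I didn't get that. Try: 'order <qty> <item>'"
```

Because the grammar lives in the cloud alongside the bot, it can be extended uniformly for all users without any download on their devices.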

A Literary Soup of Business Plots & Customer Narratives

Given their need for concise and unambiguous textual messages, bots could bring back some literary considerations to a latent online wasteland. To be sure, those considerations are to be hard-headed, with scripts cut to the bone, plots driven by business happy ends, and narratives fitted to customers’ phantasms.

Nevertheless, good storytelling will always bring some selective edge to businesses competing for top tiers. So, whatever the dearth of fictional depth, the spread of bot scripts could make up some kind of primeval soup and stir the emergence of a literature untainted by its fouled nourishing earth.


Out of Mind Content Discovery

Content discovery and the game of Go can be used to illustrate the strengths and limits of artificial intelligence.

Now and Then: content discovery across media and generations (Pavel Wolberg)

Game of Go: Closed Ground, Non Semantic Charts

The conclusive successes of Google’s AlphaGo against the world’s best players are best understood in relation to the characteristics of the game of Go:

  • Contrary to real-life competitions, games are set on closed and standalone playgrounds detached from actual concerns. As a consequence, players (human or artificial) can factor out emotions from cognitive behaviors.
  • Contrary to games like Chess, Go’s playground is uniform and can be mapped without semantic distinctions between situations or moves. Whereas symbolic knowledge, explicit or otherwise, is still required for good performance, excellence can only be achieved through holistic assessments based on intuition and implicit knowledge.

Both characteristics fully play to the strengths of AI, in particular computing power (to explore the playground and alternative strategies) and detachment (when decisions have to be made).

Content Discovery: Open Grounds, Semantic Charts

Content discovery platforms like Outbrain or Taboola are meant to suggest further (commercial) bearings to online users. Compared to the game of Go, that mission clearly goes in the opposite direction:

  • Channels may be virtual but users are human, with real emotions and concerns. And they are offered proxy grounds not so much to be explored as to be endlessly redefined and made more alluring.
  • Online strolls may be aimless and discoveries fortuitous, but if content discovery devices are to underwrite themselves, they must bring potential customers along monetized paths. Hence the hitch: artificial brains need some cues about what readers have in mind.

That makes content discovery a challenging task for artificial coaches as they have to usher wanderers with idiosyncratic but unknown motivations through boundless expanses of symbolic shopping fields.

What Would Eliza Say

When AI was still about human thinking, Alan Turing devised a test to check the ability of a machine to exhibit intelligent behavior. At the time, available computing power was several orders of magnitude below today’s capacities, so the test was not about intelligence itself, but about the ability to conduct text-based dialogues equivalent to, or indistinguishable from, a human’s. That approach was famously illustrated by Eliza, a program able to beguile humans into conversations without any understanding of their meaning.
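To make the point concrete, here is a minimal Eliza-style exchange; the reflection rules below are illustrative stand-ins, not Weizenbaum’s originals:

```python
import re

# Toy Eliza: canned reflections triggered by surface patterns,
# with no understanding whatsoever of what is being said.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
]

def eliza(utterance: str) -> str:
    """Return a canned reflection for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

For instance, `eliza("I need a holiday")` echoes the complaint back as a question without grasping a word of it.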

More than half a century later, here are some suggestions of leading content discovery engines:

  • After reading about the Ecuador quake or Syrian rebels, one is supposed to be interested in 8 tips for keeping one’s liver healthy, or 20 reasons for unsuccessful attempts at losing weight.
  • After reading about growing coffee in Ethiopia, one is supposed to be interested in the mansions of world billionaires, or a shepherd pup surviving a month lost at sea.

It’s safe to assume that both would have flunked the Turing Test.


Selfies & Augmented Identities

As smart devices and dumb things respectively drive and feed internet advances, selfies may be seen as a minor by-product staging the scenes between reasoning capabilities and the reality of things. But that incidental understanding may have to be upgraded to a more meaningful one once digital hybrids are incorporated into virtual reality.

Actual and Virtual Representations (N. Rockwell)

Portraits, Selfies, & Social Identities

Selfies are a good starting point given that their meteoric and wide-ranging success makes for the social continuity of portraits, from timeless paintings to new-age digital images. Comparing the respective practicalities and contents of traditional and digital pictures may be especially revealing.

With regard to practicalities, selfies bring democratization: contrary to paintings, reserved for social elites, selfies let everybody have as many portraits as they wish, free to be shown at will to family, close friends, or total strangers.

With regard to contents, selfies bring immediacy: instead of portraits conveying status and characters through calculated personal attires and contrived settings, selfies picture social identities as snapshots that capture supposedly unaffected but revealing moments, postures, entourages, or surroundings.

These idiosyncrasies are intriguing because selfies seem to largely ignore the wide range of possibilities offered by new media technologies, which could (and sometimes do) readily make them into elaborate still lifes or scenic videos.

Also noteworthy is the fading-out of photography as a vector of social representation after the heights it achieved in the second half of the 19th century: not until the internet era did photographs start again to emulate paintings as vehicles of social identity.

Those changing trends may be cleared up if mirrors are introduced in the picture.

Selfies, Mirrors, & Physical Identities

Natural or man-made, mirrors have from the origin played a critical part in self-consciousness, and more precisely in self-awareness of physical identity. Whereas portraits are social, asynchronous, and symbolic representations, mirrors are personal, synchronous, and physical ones; hence their different roles, portraits abetting social identities, and mirrors reflecting physical ones. And selfies may come as the missing link between them.

With smartphones now customarily installed as bodily extensions, selfies may morph into recurring personal reflections, transforming themselves into a crossbreed between portraits, focused on social identification, and mirrors, intent on personal identity. That understanding would put selfies on an elusive swing swaying between social representation and communication on one side, authenticity and introspection on the other side.

On that account, advances in technologies, from photographs to 3D movies, would have had a limited impact on the traction of either the social or physical side. But virtual reality (VR) is another matter altogether because it doesn’t only affect the medium between social and physical aspects, but also the very reality of the physical side itself.

Virtual Reality: Sense & Sensibility

The raison d’être of virtual reality (VR) is to erase the perception fence between individuals and their physical environment. From that perspective VR contraptions can be seen as deluding mirrors doing for physical identity what selfies do for social ones: teleporting individual personas between environments independently of their respective actuality. The question is: could it be carried out as a whole, teleporting both physical and social identities in a single package?

Physical identities are built from the perception of actual changes, directly originated in context or prompted by our own behavior: I move of my own volition, therefore I am. Somewhat equivalently, social identities are built on representations cultivated innerly or supposedly conveyed by others. Considering that physical identities are continuous and sensible, and social ones discrete and symbolic, it should be possible to combine them into virtual personas that could be teleported around packet-switched networks.

But the apparent symmetry could be deceitful: although teleporting doesn’t change meanings, it breaks the continuity of physical perceptions, which means it entails some delete/replace operation. On that account, effective advances in VR can be seen as converging on alternative teleporting pitfalls:

  • Virtual worlds like Second Life rely on symbolic representations whatever the physical proximity.
  • Virtual apparatuses like Oculus depend solely on the merge with physical proximity and ignore symbolic representations.

That conundrum could be solved if sense and sensibility could be reunified, giving some credibility to fused physical and social personas. That could be achieved by augmented reality whose aim is to blend actual perceptions with symbolic representations.

From Augmented Identities to Extended Beliefs

Virtual apparatuses rely on a primal instinct that makes us believe in the reality of our perceptions. Concomitantly, human brains use built-in higher-level representations of the body’s physical capabilities to support the veracity of the whole experience. Nonetheless, if and when experiences appear to be far-fetched, brains are bound to flag the stories as delusional.

Or maybe not. Even without artificial adjuncts to brain chemistry, some kind of cognitive morphing may let the mind bypass its body’s limits by introducing a deceitful continuity between mental representations of physical capabilities on one hand, and purely symbolic representations on the other. Technological advances may offer schemes from each side that could trick human beliefs.

Broadly speaking, virtual reality schemes can be characterized as top-down: they start by setting the mind into some imaginary world, then beguile it into the body envelope portraying some favorite avatar. Then, taking advantage of its earned confidence, the mind is to be tricked on a flyover that would move it seamlessly from fictional social representations into equally fictional physical ones: from believing it has superpowers into trusting the actual reach and strength of its hand performing the corresponding deeds. At least that’s the theory, because if such a “suspension of disbelief” is the essence of fiction and art, the practicality of its mundane actualization remains to be confirmed.

Augmented reality goes the other way and can be seen as bottom-up, relying on actual physical experiences before moving up to fictional extensions. Such schemes are meant to be fed with trusted actual perceptions adorned with additional inputs, visual or otherwise, designed on purpose. Instead of straight leaps into fiction, beliefs can be kept anchored to real situations from where they can be seamlessly led astray to unfolding wonder worlds, or alternatively ushered to further knowledge.

By introducing both continuity and new dimensions to the design of physical and social identities, augmented reality could leverage selfies into major social constructs. Combined with artificial intelligence, they could even become friendly or helpful avatars, e.g. as personal coaches or medical surrogates.


AlphaGo & Non-Zero-Sum Contests

The recent and decisive wins of Google’s AlphaGo over the world’s best Go player have been marked as a milestone on the path to general artificial intelligence, one that would be endowed with the same sort of capabilities as its human model. Yet such an assessment may reflect a somewhat mechanical understanding of human intelligence.

Zero-sum contest (Uccello)

What Machines Can Know

As previously noted, human intelligence relies on three categories of knowledge:

  1. Knowledge acquired through senses (views, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). It is by nature prone to circumstances and prejudices.
  2. Knowledge built through reasoning, i.e. the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee of congruence with actual reality.
  3. Knowledge attained through judgment, bringing together perceptions, intuitions, and symbolic representations.

Given the exponential growth of their processing power, artificial contraptions are rapidly overtaking human beings on account of perception and reasoning capabilities. Moreover, as demonstrated by the stunning success of AlphaGo, they may, sooner rather than later, take the upper hand for judgments based on fixed sets (including empty ones) of symbolic representations. Would that mean game over for humans?

Maybe not, as suggested by the protracted progress of IBM’s Watson for Oncology, which may come as a marker of AI’s limits where non-zero-sum games are concerned. And there is good reason for that: human intelligence has evolved against survival stakes, not for games’ sake, and its innate purpose is to make fateful decisions when faced with unpredictable prospects: while machines look for pointless wins, humans aim for meaningful victories.

What Animals Can Win

Left to themselves, games are meant to be pointless: winning or losing does not affect players in their otherwise worldly affairs. As a corollary, gaming intelligence can be disembodied, i.e. detached from murky perceptions and freed from fuzzy down-to-earth rules. That’s not the case for real-life contests, especially the ones that drove the development of animal brains aeons ago; then, the constitutive and formative origins of intelligence had to rely on senses without sensors, reason without logic, and judgment without philosophy. The difference with gaming machines is therefore not so much about stakes as about the nature of built-in capabilities: animal intelligence emerged from the need to focus on actual situations and immediate decision-making without the paraphernalia of science and technology. And since survival is by nature individual, the exercise of animal intelligence is intrinsically singular; otherwise (i.e. had the outcomes been uniform) there could have been no selection. As far as animal intelligence is concerned, opponents can only be enemies and winners are guaranteed to take all the spoils: no universal reason should be expected.

So animal intelligence adds survival considerations to the artificial kind, but it lacks the symbolic and cooperative dimensions.

How Humans Can Win

Given its unique symbolic capability, the human species has been granted a decisive superiority in the evolution race. Using symbolic representations to broaden the stakes, take externalities into account, and build strategies for a wider range of possibilities, human intelligence clearly marks the evolutionary edge between humans and other species. The combined capabilities to process non-symbolic (aka implicit) knowledge and symbolic representations may therefore define the playground for human and artificial intelligence. But that still leaves the cooperative dimension out of account.

As it happens, the ability to process symbolic representations has a compound impact on human intelligence, bringing about a qualitative leap not only with regard to knowledge but, perhaps even more critically, with regard to cooperation. Taking a leaf from R. Wright and G. Lakoff, such a breakthrough would not be about problem solving but about social destiny: what characterizes human intelligence would be the ability to assess collective aims and consequently to build non-zero-sum strategies bringing shared benefits.

Back to the general artificial intelligence project, the real challenge would be to generalize deep learning to non-zero-sum competition and its corollary, namely the combination and valuation of heterogeneous yet shared actual stakes.

However, as pointed out by Lee Sedol, “when it comes to human beings, there is a psychological aspect that one has to also think about.” In other words, as noted above, human intelligence has a native and inherent emotional dimension which may be an asset (e.g. as a source of creativity) as well as a liability (when it blurs some hazards).


Agile Collaboration & Enterprise Creativity

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.


Trust & Communication (Juan Munoz)

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking its cue from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles an individual of a given species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …

Taking into account constraints and dependencies between the tasks, the aims would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing; yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” by “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in driving seats (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.
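Under the stated assumptions, that cycle scheme can be rendered as a simple loop; the data structures and the exit test below are illustrative, not a prescribed implementation:

```python
def knowledge_cycle(descriptions, data_feed, decide, max_iterations=100):
    """Iterate on a fixed set of symbolic descriptions (the cycle invariant),
    turning data into information until a decision commits (exit condition)."""
    information = []
    for _ in range(max_iterations):
        # Iteration content: process incoming data against the invariants
        batch = data_feed()
        information.extend(item for item in batch if item in descriptions)
        # Exit condition: a committed decision closes the cycle
        decision = decide(information)
        if decision is not None:
            return decision, information
    return None, information

# Toy run: two data batches, exit once two relevant items have been gathered
feed = iter([["a", "c"], ["b"]])
decision, info = knowledge_cycle(
    descriptions={"a", "b"},
    data_feed=lambda: next(feed),
    decide=lambda info: "commit" if len(info) >= 2 else None,
)
```

Note that the invariant (`descriptions`) and the exit test (`decide`) stay in the hands of knowledge workers, matching the open-scope and driving-seat tenets above.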

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility to insert knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on the fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep theirs open to new understandings and opportunities. For the former creativity is a means to an end; for the latter it’s the end in itself, with collaboration as the means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the content and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short and more intelligent collaboration schemes have to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e. filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers personal and social circles.

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc.), taken as a whole they bring up the question of the relationship between personal and collective thinking and, as a corollary, the role of organization in nurturing corporate innovation.
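A minimal sketch of such a mediation mechanism, assuming a priority score stands in for relevance and a boolean flag for the worker’s “state of mind” (all names hypothetical):

```python
import heapq
import itertools

# Solicitations from the social tier are queued and released only when
# the knowledge worker opts in, so mediated collaboration never breaks
# the flow of personal thinking. Priorities stand in for relevance.
class TierMediator:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-breaker within a priority

    def solicit(self, priority: int, message: str) -> None:
        # Lower number = more pressing
        heapq.heappush(self._queue, (priority, next(self._counter), message))

    def release(self, available: bool, max_items: int = 3):
        """Deliver queued solicitations only when the worker is available."""
        if not available:
            return []
        delivered = []
        while self._queue and len(delivered) < max_items:
            _, _, message = heapq.heappop(self._queue)
            delivered.append(message)
        return delivered
```

Solicitations accumulate while the worker thinks, then come out in priority order once the worker signals availability, which is one way to reconcile the two conflicting objectives listed earlier.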

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring on the worrying shadow of a panopticon, within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used interchangeably by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organization.


AlphaGo: From Intuitive Learning to Holistic Knowledge

Brawn & Brain

Google AlphaGo’s recent success against Europe’s top player at the game of Go is widely recognized as a major breakthrough for Artificial Intelligence (AI), both because of the undertaking (Go is exponentially more complex than Chess) and the timing (it occurred much sooner than expected). As it happened, the leap can be credited as much to brawn as to brain, the former through a massive increase in computing power, the latter through an innovative combination of established algorithms.

Brawny Contest around Aesthetic Game (Kunisada)

That breakthrough and the way it has been achieved may seem to draw opposite perspectives on the future of AI: either the current conceptual framework is the best option, with brawny machines becoming brainier and, sooner or later, leaping over the qualitative gap with their human makers; or it’s a quantitative delusion that could drive brawnier machines and helpless humans down into that very same hole.

Could AlphaGo and its DeepMind makers point to a holistic bypass around that dilemma?

Taxonomy of Sources

Taking a leaf from Spinoza, one could begin by considering the categories of knowledge with regard to sources:

  1. The first category is achieved through our senses (sights, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). This category is by nature prone to circumstances and prejudices.
  2. The second is built through reasoning, i.e. the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee of congruence with actual reality.
  3. The third is attained through philosophy, which is by essence meant to bring together perceptions, intuitions, and symbolic representations.

Whereas there can’t be much controversy about the first two, the third category leaves room for a wide range of philosophical tenets, from religion to science, collective ideologies, or spiritual transcendence. With today’s knowledge spread across smart devices and driven by the wisdom of crowds, philosophy seems to look more at big data than at big brother.

Despite (or because of) its focus on the second category, AlphaGo’s architectural feat may still carry some lessons for the whole endeavor.

Taxonomy of Representations

As already noted, the effectiveness of AI’s supporting paradigms has been bolstered by the exponential increase in available data and in the processing power to deal with it. Not surprisingly, those paradigms are associated with two basic forms of representation aligned with the sources of knowledge, implicit for senses and explicit for reasoning:

  • Designs based on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as knowledge governing behaviors.
  • Designs based on neural networks are characterized by implicit information processing: data is “compiled” into neural connections whose weights (embodying knowledge) are tuned iteratively on the basis of behavioral feedback.
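The contrast between the two designs can be shown on a deliberately tiny example. This is a sketch under assumptions of my own (a temperature alarm, a one-neuron perceptron standing in for a neural network): the explicit rule is transparent and traceable, while the implicit version ends up with equivalent behavior embodied in a tuned weight.

```python
# Explicit (symbolic) processing: data is "interpreted" through a transparent rule
def symbolic_decision(temperature: float) -> str:
    return "alarm" if temperature > 30.0 else "ok"

# Implicit (neural) processing: data is "compiled" into a weighted connection,
# tuned iteratively on behavioral feedback (a one-neuron perceptron)
weight, bias = 0.0, 0.0
feedback = [(25.0, 0), (28.0, 0), (32.0, 1), (35.0, 1)]  # (temperature, alarm?)
for _ in range(50):
    for temp, target in feedback:
        x = temp - 30.0                       # centered input
        out = 1 if weight * x + bias > 0 else 0
        error = target - out
        weight += 0.01 * error * x            # knowledge embodied as a weight
        bias += 0.01 * error

def neural_decision(temperature: float) -> str:
    return "alarm" if weight * (temperature - 30.0) + bias > 0 else "ok"

print(symbolic_decision(33.0), neural_decision(33.0))  # alarm alarm
```

The symbolic version supports traceability of means; the neural one offers no such account of why a given temperature triggers the alarm.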

Since that duality mirrors human cognitive capabilities, brainy machines built on those designs are meant to combine rationality with effectiveness:

  • Symbolic representations support the transparency of ends and the traceability of means, allowing for hierarchies of purposes, actual or social.
  • Neural networks, helped by their learning kernels operating directly on data, speed up the realization of concrete purposes based on the supporting knowledge implicitly embodied as weighted connections.

The potential of such approaches has been illustrated by internet-based language processing: pragmatic associations “observed” on billions of discourses are progressively complementing and even superseding syntactic and semantic rules in web-based parsers.

On that point too AlphaGo has focused ambitions, since it only deals with non-symbolic inputs, namely a collection of Go moves (about 30 million in total) from expert players. But that limit can be turned into a benefit, as it brings homogeneity and transparency, and therefore a more effective combination of algorithms: brawny ones for actual moves and intuitive knowledge from the best players, brainy ones for putative moves, planning, and policies.

Teaching them how to work together is arguably a key factor of the breakthrough.

Taxonomy of Learning

As should be expected from intelligent machines, their impressive track record fully depends on their learning capabilities. Whereas those capabilities are typically applied separately to implicit (or non-symbolic) and explicit (or symbolic) contents, bringing them under the control of the same cognitive engine, as human brains routinely do, has long been recognized as a primary objective for AI.

Practically, that has been achieved with neural networks by combining supervised and unsupervised learning: human experts help systems to sort the wheat from the chaff, then let them improve their expertise through millions of games of self-play.

Yet, the achievements of leading AI players have marked out the limits of these solutions, namely the qualitative gap between playing as the best human players and beating them. While the former outcome can be achieved through likelihood-based decision-making, the latter requires the development of original schemes, and that brings quantitative and qualitative obstacles:

  • Contrary to actual moves, possible ones have no limit, hence the exponential increase in search trees.
  • Original schemes are to be devised with regard to values and policies.

Overcoming both challenges with a single scheme may be seen as the critical achievement of DeepMind engineers.

Mastering the Breadth & Depth of Search Trees

Using neural networks for the evaluation of actual states as well as the sampling of policies comes with exponential increases in the breadth and depth of search trees. Whereas Monte Carlo Tree Search (MCTS) algorithms are meant to deal with that problem, the limited capacity to scale up processing power nonetheless led to shallow trees, until DeepMind engineers succeeded in unlocking the depth barrier by applying MCTS to layered value and policy networks.
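The interplay of MCTS with policy and value networks can be sketched on a toy game. This is only a schematic illustration: the policy and value functions below are uniform and neutral stubs standing in for trained networks, and the game (take 1 or 2 stones, last taker wins) is hypothetical; the structure (prior-guided selection, network evaluation at leaves, value backup) is what matters.

```python
import math

# Toy game: take 1 or 2 stones from a pile; the player taking the last stone wins.
def moves(n):
    return [m for m in (1, 2) if m <= n]

# Stand-ins for AlphaGo's networks (stubs, not the real models):
def policy_net(n):                 # prior over moves: uniform stub
    ms = moves(n)
    return {m: 1 / len(ms) for m in ms}

def value_net(n):                  # position value for the player to move: neutral stub
    return 0.5

class Node:
    def __init__(self, n, prior):
        self.n, self.prior = n, prior
        self.children, self.visits, self.value_sum = {}, 0, 0.0
    def q(self):                   # mean value from the perspective of this node's mover
        return self.value_sum / self.visits if self.visits else 0.0

def puct(parent, child, c=1.5):    # exploitation (from parent's view) + prior-guided exploration
    return (1.0 - child.q()) + c * child.prior * math.sqrt(parent.visits) / (1 + child.visits)

def simulate(node):
    if node.n == 0:                # previous player took the last stone: loss for side to move
        value = 0.0
    elif not node.children:        # leaf: expand with policy priors, evaluate with value net
        for m, p in policy_net(node.n).items():
            node.children[m] = Node(node.n - m, p)
        value = value_net(node.n)
    else:                          # descend along the best PUCT score
        m, child = max(node.children.items(), key=lambda mc: puct(node, mc[1]))
        value = 1.0 - simulate(child)   # value flips between alternating players
    node.visits += 1
    node.value_sum += value
    return value

def best_move(n, simulations=300):
    root = Node(n, 1.0)
    for _ in range(simulations):
        simulate(root)
    return max(root.children, key=lambda m: root.children[m].visits)

print(best_move(4))  # 1: leaving a pile of 3 is a lost position for the opponent
```

The policy prior focuses breadth (which moves to explore) while the value estimate cuts depth (no rollout to the end of the game is needed at every leaf), which is precisely the barrier the layered networks unlock.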

AlphaGo’s seamless use of layered networks (aka Deep Convolutional Neural Networks) for intuitive learning, reinforcement, values, and policies was made possible by the homogeneity of Go’s playground and rules (no differentiated moves or search traps as in the game of Chess).

From Intuition to Knowledge

Humans are the only species that combines intuitive (implicit) and symbolic (explicit) knowledge, with the dual capacity to transform the former into the latter and, in reverse, to improve the former with the latter’s feedback.

Applied to machine learning, that would require some continuity between supervised and unsupervised learning, which could be achieved by using neural networks for symbolic representations as well as for raw data:

  • From explicit to implicit: symbolic descriptions built for specific contexts and purposes would be engineered into neural networks, to be tried and improved by running them on data from targeted environments.
  • From implicit to explicit: once designs have been tested and reinforced through millions of runs on relevant targets, it would be possible to re-engineer the results into improved symbolic descriptions.

Whereas unsupervised learning of deep symbolic knowledge remains beyond the reach of intelligent machines, significant results can be achieved for “flat” semantic playgrounds, i.e. if the same semantics can be used to evaluate states and policies across networks:

  1. Supervised learning of the intuitive part of the game as observed in millions of moves by human experts.
  2. Unsupervised reinforcement learning from games of self-play.
  3. Planning and decision-making using Monte Carlo Tree Search (MCTS) methods to build, assess, and refine its own strategies.

Such deep and seamless integration would not be possible without the holistic nature of the game of Go.
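The three stages listed above can be laid out as a pipeline skeleton. This is a schematic sketch with made-up names and data: stage 2 is a deliberate no-op stub (real reinforcement would adjust the policy through self-play), and stage 3 picks greedily where MCTS would plan.

```python
# Stage 1: supervised learning, imitating the intuitive part of expert play.
# Returns a policy: position -> frequency of moves observed in expert games.
def supervised_learning(expert_moves):
    policy = {}
    for position, move in expert_moves:
        policy.setdefault(position, {}).setdefault(move, 0)
        policy[position][move] += 1
    return policy

# Stage 2: unsupervised reinforcement through self-play (no-op stub here).
def reinforce_by_self_play(policy, games=100):
    return policy

# Stage 3: planning and decision-making (MCTS would refine this greedy choice).
def decide(policy, position):
    return max(policy[position], key=policy[position].get)

expert_moves = [("corner", "enclose"), ("corner", "enclose"), ("corner", "extend")]
policy = reinforce_by_self_play(supervised_learning(expert_moves))
print(decide(policy, "corner"))  # enclose: the most frequent expert move
```

The point of the skeleton is the seamlessness: all three stages read and write the same policy structure, which is what a "flat" semantic playground makes possible.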

Aesthetics Assessment & Holistic Knowledge

The specificity of the game of Go is twofold: complexity on the quantitative side, simplicity on the qualitative side, the former being the price of the latter.

As compared to Chess, Go’s actual positions and prospective moves can only be assessed on the whole of the board, using a criterion that is best described as aesthetic, as it cannot be reduced to any metrics or handcrafted expert rules. Players do not make moves after a detailed analysis of local positions and assessment of alternative scenarios, but follow their intuitive perception of the board.

As a consequence, the behavior of AlphaGo can be neatly and fully bound with the second level of knowledge defined above:

  • As a game player it can be detached from actual reality concerns.
  • As a Go player it doesn’t have to tackle any semantic complexity.

Given a fitted harness of adequate computing power, the primary challenge for DeepMind engineers is to teach AlphaGo to transform its aesthetic intuitions into holistic knowledge without having to define their substance.

Detour from Turing Game

Summary

Considering Alan Turing’s question, “Can machines think?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine?

Alan Turing at 4

What happens when people interact?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.

Conveying intentions

Assuming a difference in the way this is carried out by people and machines, could that difference be observed at message level?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g. alarm calls directly and immediately bound to an imminent menace) can be carried out independently of knowledge representation (e.g. information related to a danger not directly observable); in other words, linguistic capabilities for communication and symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who): can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): contents exchanged (object, information, request, …) are not contingent on participants’ type.
  • Life-span of message (When): life-cycle (instant, limited, unlimited, …) is not contingent on participants’ type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on participants’ type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants’ type.
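The type-independence of the five categories can be made concrete with a message envelope. The field names and example values below are hypothetical, chosen only to show that nothing in the envelope betrays whether the sender is human or machine.

```python
from dataclasses import dataclass

# Hypothetical message envelope: none of the five fields depends on
# whether a participant is human or machine.
@dataclass
class Message:
    sender: str      # Who: an identifier, agnostic of actual identity or type
    payload: str     # What: content exchanged (object, information, request, ...)
    life_span: int   # When: seconds before the message expires
    address: str     # Where: physical, virtual, or organizational address space
    channel: str     # How: mediated channel (only unmediated talk is human-only)

msg_from_human = Message("alice", "status report", 3600, "org://sales", "email")
msg_from_bot = Message("bot-7", "status report", 3600, "org://sales", "email")

# Field by field, the envelopes are indistinguishable apart from the opaque sender id:
print(msg_from_human.payload == msg_from_bot.payload)  # True
```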

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR’s prerequisites is to improve communication between specific domain experts on one hand and generic knowledge managers on the other.

On that basis, knowledge representation capabilities cannot be used to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.
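Such a decision model can be sketched as a checklist over the five questions. This is an illustrative stub with made-up answer strings, not a workable test: in practice each check would be a scripted scenario probing the corresponding machine-only constraint.

```python
# Hypothetical decision model: a participant's answers to the five questions
# are matched against the machine-only constraints listed above.
def looks_like_machine(answers: dict) -> bool:
    checks = [
        answers.get("who") == "previous participants only",
        answers.get("what") == "told or learned only",
        answers.get("when") == "own clock or transactional events",
        answers.get("where") == "communicated locations only",
        answers.get("how") == "traceable learning process",
    ]
    return all(checks)

machine_profile = {
    "who": "previous participants only",
    "what": "told or learned only",
    "when": "own clock or transactional events",
    "where": "communicated locations only",
    "how": "traceable learning process",
}
# A human differs on at least one dimension, e.g. no traceable learning process:
human_profile = dict(machine_profile, how="cannot trace learning")

print(looks_like_machine(machine_profile), looks_like_machine(human_profile))
```

The discriminating power lies in the combination: any single answer is compatible with both kinds of participant, as the preceding sections argue.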

Reference Readings

A. M. Turing, “Computing Machinery and Intelligence”

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”