Archive for the ‘Internet of Things’ Category

Things Speaking in Tongues

January 25, 2017

Preamble

Speaking in tongues (aka glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such an experience is best (or not ?) understood as the actual speaking of a gutted language whose grammatical ghosts are inhabited by meaningless signals.

Do You Hear What I Say ? (Herbert List)

Usually set in religious contexts or circumstances, speaking in tongues looks like souls having their own private conversations. Yet, contrary to extraterrestrial languages, the phenomenon is not fictional, and it could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception, computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has put a new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In return, computers have greatly improved the means to experiment with and implement new approaches.

In recent years, advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation, the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Networks

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former from unfathomable schemes into smart conceptual tutors for the latter.

In contrast to their long and successful track record with computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has forged ahead on the back of neural networks fueled by increasing computing power. But the rift between processing and understanding natural languages is now being bridged by deep learning technologies. And with the leverage of rules engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.
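
To make the notion concrete, here is a minimal sketch of such a feedback loop, with all interfaces (model, rules, verdict) hypothetical: a neural model proposes interpretations, a rules engine vets them against domain semantics, and rejected cases are fed back as corrective training examples.

    def feedback_loop(model, rules, utterances, epochs=10):
        """Couple neural processing with rule-based understanding."""
        for _ in range(epochs):
            corrections = []
            for text in utterances:
                meaning = model.interpret(text)   # processing (neural)
                verdict = rules.check(meaning)    # understanding (symbolic)
                if not verdict.ok:
                    # the rules engine acts as a conceptual tutor: its
                    # diagnosis becomes a supervised example for the next pass
                    corrections.append((text, verdict.expected))
            if not corrections:
                break
            model.train(corrections)
        return model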

From Communication to Cognition

From a functional point of view, natural languages can be likened to money: first a medium of exchange, then a unit of account, finally a store of value. Along that understanding, natural languages would be used respectively for communication, information processing, and knowledge representation. And as with the economics of money, these capabilities are to be associated with phased cognitive developments:

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts to be stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.

As it happens, advances in technologies seem to follow these cognitive distinctions, with the internet of things (IoT) for data communications, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet paces differ significantly: with regard to language processing (communication and information), deep learning is bringing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performance still lags a third below: for computer knowledge to be properly scaled, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted with a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what’s the language ?
  • Is that speech, and if so, who’s speaking ?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back burner and focus on technological issues. Nonetheless, even the language-first approach has been challenging, as illustrated by the difference in achievements between processing and understanding language technologies.

Recognizing a language has long been the job of parsers looking for the corresponding syntactic structures, the hitch being that a parser has to know beforehand what it’s looking for. Parsers’ parsers using meta-languages have been effective with programming languages but are quite useless with natural ones, lacking universal grammar rules to sort out Babel’s conversations. But the “burden of proof” can now be reversed: compared to rules engines, neural networks with deep learning capabilities don’t have to start with any knowledge. As illustrated by Google’s Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations, without prior grammatical knowledge.

To conclude, “Translation System” may even be self-effacing, as it implies language-to-language mappings when in principle such systems could be fed with raw sounds and parse the wheat of meanings from the chaff of noise. And, who knows, eventually be able to decrypt the language of tongues.

Further Reading

External Links

Things Behavior & Social Responsibility

October 27, 2016

Contrary to security breaches and information robberies, which can be kept from public eyes, crashes of business applications or internet access are painfully plain for whoever is concerned, which means everybody. And as illustrated by the latest episode of massive distributed denial of service (DDoS), they often come as confirmation of hazards long calling for attention.

Device & Social Identity (Wayne Miller)

Things Don’t Think

To be clear, orchestrated attacks through hijacked (if unaware) computers have been a primary concern for internet security firms for quite some time, bringing about comprehensive and continuous reinforcement of software shields consolidated by systematic updates.

But while the right governing hand was struggling to make a safer net, the other hand thoughtlessly brought connected objects into a supposedly new brand of internet. As if adding things with software brains cut to the bone could make networks smarter.

And that’s the catch, because the internet of things (IoT) is all about making room for dumb ancillary objects; unfortunately, idiots may have their uses for literary puppeteers with canny agendas.

Think Again, or Not …

For old-timers with some memory of fingering through library card catalogs, googling topics may have looked like a dream: knowledge at one’s fingertips, immediately and comprehensively. But that vision has never been more than a fleeting glimpse of a symbolic world; in actuality, even at its semantic best, the web was to remain a trove of information to be sifted by knowledge workers safely seated in their gated symbolic world. Crooks could of course sneak in as knowledge workers, armed with fountain pens, but without guns covered by the second amendment.

So, from its inception, the IoT has been a paradoxical endeavor: trying to merge actual and symbolic realms in ways that would bypass thinking processes and obliterate any distinction between them. For sure, that conundrum was supposed to be dealt with by artificial intelligence (AI), with neural networks and deep learning weaving semantic threads between human minds and network brains.

Not surprisingly, brainy hackers have caught sight of that new wealth of chinks in the internet’s armour and swiftly added brute force to their paraphernalia.

But beyond the technical aspects of internet security, the recent Dyn DDoS attack puts the spotlight on its social perspective.

Things Behavior & Social Responsibility

As long as it remained intrinsically symbolic, the internet has been able to carry on with its utopian principles despite bumpy business environments. But things have drastically changed the situation, with tectonic frictions between the symbolic and real plates wreaking havoc with any kind of smooth transition to internet.X, whatever X may be.

Yet, as the diagnosis is clear, so should be the remedy.

To begin with, the internet was never meant to become the central nervous system of human societies. That it has happened in half a generation has defied imagination and, as a corollary, sapped the validity of traditional paradigms.

As things happen, the epicenter of the paradigm collision can be clearly identified: whereas the internet is built from systems, architecture taxonomies are purely technical and ignore what should be the primary factor, namely what kind of social role a system could fulfil. That may have been irrelevant for communication networks, but it is obviously critical for social ones.

Further Reading

External Links

Brands, Bots, & Storytelling

May 2, 2016

As illustrated by the recent Mashable “pivot”, meaningful (i.e. unbranded) contents appear to be the main casualty of new communication technologies. Hopefully (sic), bots may point to a more positive perspective, at least if their want for no-nonsense gist is to be trusted.

Could bots repair gibberish ? (Latifa Echakhch)

The Mashable Pivot to “branded” Stories

Announcing Mashable’s recent pivot, Pete Cashmore (Mashable’s founder and CEO) was very candid about the motives:

“What our advertisers value most about Mashable is the same thing that our audience values: Our content. The world’s biggest brands come to us to tell stories of digital culture, innovation and technology in an optimistic and entertaining voice. As a result, branded content has become our fastest growing revenue stream over the past year. Content is now at the core of our ad offering and we plan to double down there.”

Also revealing was the semantic shift within a single paragraph: from “stories”, to “stories told with an optimistic and entertaining voice”, and finally to “branded stories”; as if there were some continuity between Homer’s Iliad and Outbrain’s gibberish.

Spinning Yarns

From Lacan to Seinfeld, it has often been said that stories are what props up our world. But that was before Twitter, Facebook, YouTube and others ruled over the waves and screens. Nowadays, under the combined assaults of smart dummies and instant messaging, stories have been forced to spin advertising schemes, their scripts replaced by subliminal cues entangled in webs of commercial hyperlinks. And yet, somewhat paradoxically, fictions may retrieve some traction (if not spirit) of their own, reprieved not so much by human cultural thirst as by smartphones’ hunger for fresh technological contraptions.

Apps: What You Show is What You Get

As far as users are concerned, apps often make phones too smart by half: with more than 100 billion apps already downloaded, users face an embarrassment of riches compounded by the inherent limitations of packed visual interfaces. Enticed by constantly renewed flows of tokens with perfunctory guidelines, human handlers can hardly separate the wheat from the chaff and have to let their choices be driven by the hypothetical wisdom of the crowd. Whatever the outcomes (crowds may be right but are often volatile), the selection process is both wasteful (choices are ephemeral, many apps are abandoned after a single use, and most are sparely used) and hazardous (too many redundant dead-ends open doors to a wide array of fraudsters). That trend is rapidly hitting the physical as well as business limits of a zero-sum playground: smarter phones appear to make for dumber users. One way out of the corner would be to encourage intelligent behaviors from both parties, humans as well as devices. And that’s something bots could help bring about.

Bots: What You Text Is What You Get

As software agents designed to help people find their ways online, bots can be differentiated from apps on two main aspects:

  • They reside in the cloud, not on personal devices, which means that updates don’t have to be downloaded on smartphones but can be deployed uniformly and consistently. As a consequence, and contrary to apps, the evolution of bots can be managed independently of users’ whims, fostering the development of stable and reliable communication grammars.
  • They rely on text messaging to communicate with users instead of graphical interfaces and visual symbols. Compared to icons, text puts writing hands on driving wheels, leaving much less room for creative readings; given that bots are not to put up with mumbo jumbo, they will prompt users to mind their words as clearly and efficiently as possible.

Each aspect reinforces the other, making room for a non-zero-sum playground: while the focus on well-formed expressions and unambiguous semantics is bots’ key characteristic, it could not be achieved without the benefits of stable and homogeneous distribution schemes. Combined, they may reinstate written language as the backbone of communication frameworks, even if for the benefit of pidgin languages serving prosaic business needs.
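
As an illustration, a toy sketch of such a communication grammar (the command format and the shopping-bot domain are invented for the example): the bot only accepts well-formed requests and prompts users to rephrase anything else.

    import re

    # Hypothetical pidgin grammar: every message must match VERB ITEM [xN];
    # anything else is rejected with a prompt to rephrase, nudging users
    # toward clear and efficient wording.
    GRAMMAR = re.compile(r"^(order|cancel|track)\s+(\w+)(?:\s+x(\d+))?$")

    def parse(message):
        m = GRAMMAR.match(message.strip().lower())
        if m is None:
            return None, "Please use: order|cancel|track <item> [xN]"
        verb, item, qty = m.group(1), m.group(2), int(m.group(3) or 1)
        return {"verb": verb, "item": item, "qty": qty}, None

    # e.g. parse("Order espresso x2")
    # -> ({'verb': 'order', 'item': 'espresso', 'qty': 2}, None)

Because the grammar lives in the cloud with the bot, it can evolve uniformly for all users, which is precisely what device-bound apps cannot guarantee.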

A Literary Soup of Business Plots & Customers Narratives

Given their need for concise and unambiguous textual messages, the use of bots could bring back some literary considerations to a latent online wasteland. To be sure, those considerations are to be hard-headed, with scripts cut to the bone, plots driven by business happy ends, and narratives fitted to customers’ phantasms.

Nevertheless, good storytelling will always bring some selective edge to businesses competing for top tiers. So, whatever the dearth of fictional depth, the spreading of bot scripts could make up some kind of primeval soup and stir the emergence of a literature untainted by its fouled nourishing earth.

Further Readings

Selfies & Augmented Identities

March 31, 2016

As smart devices and dumb things respectively drive and feed internet advances, selfies may be seen as a minor by-product figuring at the junction between reasoning capabilities and the reality of things. But then, should that incidental understanding be upgraded to a more meaningful one, incorporating digital hybrids into virtual reality ?

Actual and Virtual Representations (N. Rockwell)

Portraits, Selfies, & Social Identities

Selfies are a good starting point, given that their meteoric and wide-ranging success places them in the social continuity of portraits, from timeless paintings to new-age digital images. Comparing the respective practicalities and contents of traditional and digital pictures may be especially revealing.

With regard to practicalities, selfies bring democratization: contrary to paintings, reserved for social elites, selfies let everybody have as many portraits as wished, free to be shown at will to family, close friends, or total strangers.

With regard to contents, selfies bring immediacy: instead of portraits conveying status and characters through calculated personal attires and contrived settings, selfies picture social identities as snapshots that capture supposedly unaffected but revealing moments, postures, entourages, or surroundings.

Those selfie idiosyncrasies are intriguing because they seem to largely ignore the wide range of possibilities offered by new media technologies, which could (and sometimes do) readily make selfies into elaborate still lifes or scenic videos.

Also intriguing is the fading-out of photography as a vector of social representation after the heights achieved in the second half of the 19th century: not until the internet era did photographs start again to emulate paintings as vehicles of social identity.

Those changing trends may be cleared up if mirrors are brought into the picture.

Selfies, Mirrors, & Physical Identities

Natural or man-made, mirrors have from the origin played a critical part in self-consciousness, and more precisely in self-awareness of physical identity. Whereas portraits are social, asynchronous, and symbolic representations, mirrors are personal, synchronous, and physical ones; hence their different roles, portraits abetting social identities, and mirrors reflecting physical ones. And selfies may come as the missing link between them.

With smartphones now customarily installed as bodily extensions, selfies may morph into recurring personal reflections, transforming themselves into a crossbreed between portraits, focused on social identification, and mirrors, intent on personal identity. That understanding would put selfies on an elusive swing swaying between social representation and communication on one side, authenticity and introspection on the other side.

On that account, advances in technologies, from photographs to 3D movies, would have had a limited impact on the traction from either the social or the physical side. But virtual reality (VR) is another matter altogether, because it affects not only the medium between social and physical aspects but also the “very” reality of the physical side itself.

Virtual Reality: Sense & Sensibility

The raison d’être of virtual reality (VR) is to erase the perception fence between individuals and their physical environment. From that perspective VR contraptions can be seen as deluding mirrors doing for physical identities what selfies do for social ones: teleporting individual personas between environments independently of their respective actuality. The question is: could it be carried out as a whole, teleporting both physical and social identities in a single package ?

Physical identities are built from the perception of actual changes, directly originated in context or prompted by our own behavior: I move of my own volition, therefore I am. Somewhat equivalently, social identities are built on representations cultivated innerly, or supposedly conveyed by others. Considering that physical identities are continuous and sensible, and social ones discrete and symbolic, it should be possible to combine them into virtual personas that could be teleported around packet-switched networks.

But the apparent symmetry could be deceitful: although teleporting doesn’t change meanings, it breaks the continuity of physical perceptions, which means that it entails some delete/replace operation. On that account, effective advances in VR can be seen as converging on alternative teleporting pitfalls:

  • Virtual worlds like Second Life rely on symbolic representations whatever the physical proximity.
  • Virtual apparatuses like Oculus depend solely on the merge with physical proximity and ignore symbolic representations.

That conundrum could be solved if sense and sensibility were reunified, giving some credibility to fused physical and social personas. That could be achieved by augmented reality, whose aim is to blend actual perceptions with symbolic representations.

From Augmented Identities to Extended Beliefs

Virtual apparatuses rely on a primal instinct that makes us believe in the reality of our perceptions. Concomitantly, human brains use built-in, higher-level representations of the body’s physical capabilities to support the veracity of whole experiences. Nonetheless, if and when experiences appear too far-fetched, brains are bound to flag the stories as delusional.

Or maybe not. Even without artificial adjuncts to brain chemistry, some kind of cognitive morphing may let the mind bypass its body limits by introducing a deceitful continuity between mental representations of physical capabilities on one hand, and purely symbolic representations on the other. Technological advances may offer schemes from each side that could trick human beliefs.

Broadly speaking, virtual reality schemes can be characterized as top-down: they start by setting the mind into some imaginary world, and beguile it into the body envelope portraying some favorite avatar. Then, taking advantage of its earned confidence, the mind is to be tricked on a flyover that would move it seamlessly from fictional social representations into equally fictional physical ones: from believing it has superpowers into trusting the actual reach and strength of its hand performing the corresponding deeds. At least that’s the theory, because if such a “suspension of disbelief” is the essence of fiction and art, the practicality of its mundane actualization remains to be confirmed.

Augmented reality goes the other way and can be seen as bottom-up, relying on actual physical experiences before moving up to fictional extensions. Such schemes are meant to be fed with trusted actual perceptions adorned with additional inputs, visual or otherwise, designed on purpose. Instead of straight leaps into fiction, beliefs can be kept anchored to real situations from where they can be seamlessly led astray to unfolding wonder worlds, or alternatively ushered to further knowledge.

By introducing both continuity and new dimensions to the design of physical and social identities, augmented reality could leverage selfies into major social constructs. Combined with artificial intelligence, they could even become friendly or helpful avatars, e.g. as personal coaches or medical surrogates.

Further Readings

External Links

IoT & Real Time Activities

March 2, 2016

The world is the totality of facts, not of things.

Ludwig Wittgenstein

As the so-called internet of things (IoT) seems to bring together people, systems and devices, the meaning of real-time activities may have to be reconsidered.

Fact and Broadcast (Lucy Nicholson)

Things, Facts, Events

To begin with, as illustrated by marketed solutions like SIGFOX, the IoT can be described as a fast and stripped-down communication layer carrying not so much things as facts and associated raw (i.e. non-symbolic) events. That seems to cut across traditional understandings, because the IoT is dedicated to non-symbolic devices yet may include symbolic systems, and fast communication may or may not mean real-time. So, when application network requirements are considered, the focus should be on the way events are meant to be registered and processed.

Business Environments Cannot be Frozen

Given that time-frames are set according to primary events, real-time activities can be defined as exclusive ongoing events: their start initiates a proprietary time-frame perceived from the outside as being without duration, i.e. as if nothing could happen until their completion, activities targeting the same domain being supposedly frozen.

Contrary to operational timing constraints (left), real-time ones (right) are set against the specific (i.e. event-driven) time-frames of the targeted domain.

That principle can be understood as a generalization of the ACID (Atomicity, Consistency, Isolation, Durability) scheme used to guarantee that database transactions are processed reliably. Along that understanding, a real-time business transaction would require that, whatever its actual duration, no change from other transactions be accepted to its domain representation until the business transaction is completed and its associated outcomes duly committed. Yet the hitch is that, contrary to system transactions, there is no way to freeze actual business ones, which will continue to be carried out notwithstanding suspended registrations.

Accesses can be fully synchronized within DB systems (single clock), suspended within functional architectures, consolidated within environment.

In that case the problem is not so much one of locks on the DB as one of dynamic alignment of managed representations with the changing state of affairs of their actual counterpart.
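
A minimal sketch of that alignment problem, assuming a hypothetical domain representation with a version counter: since the business domain cannot be frozen, the transaction records the version of the representation it has read and re-checks it at commit time.

    class StaleRepresentation(Exception):
        """The domain representation drifted while the transaction ran."""

    def commit(domain, read_version, outcomes):
        # the environment cannot be frozen, so the best a system can do is
        # detect that its representation no longer matches the actual state
        # of affairs, and realign before committing outcomes
        if domain.version != read_version:
            raise StaleRepresentation("realign representation before committing")
        domain.apply(outcomes)
        domain.version += 1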

Yoking Systems & Environments

As Einstein reportedly said, “the only reason for time is so that everything doesn’t happen at once”. Along that reasoning, coupling constraints for systems can be analyzed with regard to the way events are notified and registered:

  • Input flows: what happens between changes in the environment (aka facts) and their recording by applications (a).
  • Processing: can applications be executed based solely on locally available information, or are they contingent on information managed by systems at domain level (b).
  • Output flows: what happens between actions triggered by applications and the corresponding changes in the environment (c).

How to analyze the coupling between environment and system.

It’s important to note that real-time activities are not defined in absolute time units: they can be measured in microseconds as well as in aeons, and carried out by light sensors as well as by snails.

A Simple Decision Routine

Deciding on real-time requirements can therefore follow a straightforward routine:

  • Should changes in relevant external objects, processes, or expectations be instantly detected at the system’s boundaries ? (a)
  • Could the interpretation and processing of associated events be carried out locally, or are they contingent on information shared at domain level ? (b)
  • Should subsequent actions on relevant external objects, processes, or expectations be carried out instantly ? (c)

Coupling with the environment must be synchronous, and the footprint local or locked.

Positive answers to the three questions entail real-time requirements, as will also be the case if access to shared information is necessary (see the sketch below).
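
The routine lends itself to a one-line decision function; a sketch, reading the text as: shared information alone entails real-time requirements, as does the combination of instant detection and instant action.

    def real_time_required(instant_detection, shared_information, instant_action):
        """(a) instant detection at the system's boundaries,
        (b) processing contingent on information shared at domain level,
        (c) instant action on external objects, processes, or expectations."""
        return shared_information or (instant_detection and instant_action)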

What about the IoT ?

Strictly speaking, the internet of things is characterized by networked connections between non-symbolic things. As it entails asynchronous communication and some symbolic mediation in between, one may assume that the IoT cannot support real-time activities. That assumption can be checked against business cases given as examples.

Further Readings

External Links

AlphaGo: From Intuitive Learning to Holistic Knowledge

February 1, 2016

Brawn & Brain

Google AlphaGo’s recent success against Europe’s top player at the game of Go is widely recognized as a major breakthrough for Artificial Intelligence (AI), both because of the undertaking (Go is exponentially more complex than chess) and the timing (it has occurred much sooner than expected). As it happened, the leap can be credited as much to brawn as to brain, the former with a massive increase in computing power, the latter with an innovative combination of established algorithms.

Brawny Contest around Aesthetic Game (Kunisada)

That breakthrough, and the way it has been achieved, may seem to draw opposite perspectives about the future of AI: either the current conceptual framework is the best option, with brawny machines becoming brainier until, sooner or later, they are able to leap over the qualitative gap with their human makers; or it’s a quantitative delusion that could drive brawnier machines and helpless humans down into that very same hole.

Could AlphaGo and its DeepMind makers point to a holistic bypass around that dilemma ?

Taxonomy of Sources

Taking a leaf from Spinoza, one could begin by considering the categories of knowledge with regard to sources:

  1. The first category is achieved through our senses (sights, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). This category is by nature prone to circumstances and prejudices.
  2. The second is built through reasoning, i.e the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee for congruence with actual reality.
  3. The third is attained through philosophy which is by essence meant to bring together perceptions, intuitions, and symbolic representations.

Whereas there can’t be much controversy about the first two, the third category leaves room for a wide range of philosophical tenets, from religion to science, collective ideologies, or spiritual transcendence. With today’s knowledge spread across smart devices and driven by the wisdom of crowds, philosophy seems to look more to big data than to big brother.

Despite (or because of) its focus on the second category, AlphaGo and its architectural feat may still carry some lessons for the whole endeavor.

Taxonomy of Representations

As already noted, the effectiveness of AI’s supporting paradigms has been bolstered by the exponential increase in available data and in the processing power to deal with it. Not surprisingly, those paradigms are associated with two basic forms of representation aligned with the sources of knowledge, implicit for senses, and explicit for reasoning:

  • Designs based on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as knowledge governing behaviors.
  • Designs based on neural networks are characterized by implicit information processing: data is “compiled” into neural connections whose weights (embodying knowledge) are tuned iteratively on the basis of behavioral feedback.

Since that duality mirrors human cognitive capabilities, brainy machines built on those designs are meant to combine rationality with effectiveness:

  • Symbolic representations support the transparency of ends and the traceability of means, allowing for hierarchies of purposes, actual or social.
  • Neural networks, helped by their learning kernels operating directly on data, speed up the realization of concrete purposes based on the supporting knowledge implicitly embodied as weighted connections.

The potential of such approaches has been illustrated by internet-based language processing: pragmatic associations “observed” in billions of discourses are progressively complementing and even superseding syntactic and semantic rules in web-based parsers.

On that point too AlphaGo has focused ambitions, since it only deals with non-symbolic inputs, namely a collection of Go moves (about 30 million in total) from expert players. But that limit can be turned into a benefit, as it brings homogeneity and transparency, and therefore a more effective combination of algorithms: brawny ones for actual moves and intuitive knowledge from the best players, brainy ones for putative moves, planning, and policies.

Teaching them how to work together is arguably a key factor of the breakthrough.

Taxonomy of Learning

As should be expected from intelligent machines, their impressive track record fully depends on their learning capabilities. Whereas those capabilities are typically applied separately to implicit (or non-symbolic) and explicit (or symbolic) contents, bringing them under the control of the same cognitive engine, as human brains routinely do, has long been recognized as a primary objective for AI.

Practically, that has been achieved with neural networks by combining supervised and unsupervised learning: human experts help systems sort the wheat from the chaff, and then let them improve their expertise through millions of games of self-play.

Yet the achievements of leading AI players have marked out the limits of these solutions, namely the qualitative gap between playing like the best human players and beating them. While the former outcome can be achieved through likelihood-based decision-making, the latter requires the development of original schemes, and that brings quantitative and qualitative obstacles:

  • Contrary to actual moves, possible ones have no limit, hence the exponential increase in search trees.
  • Original schemes are to be devised with regard to values and policies.

Overcoming both challenges with a single scheme may be seen as the critical achievement of DeepMind engineers.

Mastering the Breadth & Depth of Search Trees

Using neural networks for the evaluation of actual states as well as for the sampling of policies comes with exponential increases in the breadth and depth of search trees. Whereas Monte Carlo Tree Search (MCTS) algorithms are meant to deal with the problem, limited capacity to scale up processing power would nonetheless lead to shallow trees; until DeepMind engineers succeeded in unlocking the depth barrier by applying MCTS to layered value and policy networks.

AlphaGo’s seamless use of layered networks (aka deep convolutional neural networks) for intuitive learning, reinforcement, values, and policies was made possible by the homogeneity of Go’s playground and rules (no differentiated moves and search traps as in the game of chess).
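
A sketch of policy-guided tree search in the spirit of (not a reproduction of) AlphaGo’s published scheme, assuming node objects with children, visits, value_sum, and prior attributes: the policy network narrows the breadth of the tree through move priors, while the value network bounds its depth by scoring leaves instead of rolling games out to the end.

    import math

    def select(node, c_puct=1.0):
        """Pick the child maximizing value estimate plus policy-weighted
        exploration; priors come from the policy network, value estimates
        are backed up from the value network at the leaves."""
        total = sum(child.visits for child in node.children)
        def score(child):
            q = child.value_sum / child.visits if child.visits else 0.0
            u = c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
            return q + u
        return max(node.children, key=score)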

From Intuition to Knowledge

Humans are the only species that combines intuitive (implicit) and symbolic (explicit) knowledge, with the dual capacity to transform the former into the latter and in reverse to improve the former with the latter’s feedback.

Applied to machine learning, that would require some continuity between supervised and unsupervised learning, which could be achieved by using neural networks for symbolic representations as well as for raw data:

  • From explicit to implicit: symbolic descriptions built for specific contexts and purposes would be engineered into neural networks to be tried and improved by running them on data from targeted environments.
  • From implicit to explicit: once designs tested and reinforced through millions of runs in relevant targets, it would be possible to re-engineer the results into improved symbolic descriptions.

Whereas unsupervised learning of deep symbolic knowledge remains beyond the reach of intelligent machines, significant results can be achieved for “flat” semantic playgrounds, i.e. if the same semantics can be used to evaluate states and policies across networks (a skeleton sketch follows the list):

  1. Supervised learning of the intuitive part of the game as observed in millions of moves by human experts.
  2. Unsupervised reinforcement learning from games of self-play.
  3. Planning and decision-making using Monte Carlo Tree Search (MCTS) methods to build, assess, and refine its own strategies.
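
A skeleton of the three stages above, with every interface (fit, reinforce, play_fn, mcts_fn, game attributes) hypothetical:

    def train_go_player(expert_moves, policy_net, value_net,
                        play_fn, mcts_fn, n_self_play=1000):
        # 1. supervised learning of intuition from human expert moves
        policy_net.fit(expert_moves)
        # 2. unsupervised reinforcement from games of self-play
        for _ in range(n_self_play):
            game = play_fn(policy_net, policy_net)
            policy_net.reinforce(game)
            value_net.fit(game.positions, game.winner)
        # 3. planning and decision-making: MCTS combines both networks
        return lambda position: mcts_fn(position, policy_net, value_net)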

Such deep and seamless integration would not be possible without the holistic nature of the game of Go.

Aesthetics Assessment & Holistic Knowledge

The specificity of the game of Go is twofold: complexity on the quantitative side, simplicity on the qualitative side, the former being the price of the latter.

As compared to chess, Go’s actual positions and prospective moves can only be assessed on the whole of the board, using a criterion best described as aesthetic, as it cannot be reduced to any metrics or handcrafted expert rules. Players do not make moves after a detailed analysis of local positions and an assessment of alternative scenarios, but follow their intuitive perception of the board.

As a consequence, the behavior of AlphaGo can be neatly and fully bound to the second category of knowledge defined above:

  • As a game player it can be detached from actual reality concerns.
  • As a Go player it doesn’t have to tackle any semantic complexity.

Given a fitted harness of adequate computing power, the primary challenge for DeepMind engineers is to teach AlphaGo to transform its aesthetic intuitions into holistic knowledge without having to define their substance.

Further Readings

External Links

Detour from Turing Game

February 20, 2015

Summary

Considering Alan Turing’s question, “Can machines think ?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine ?

Alan Turing at 4

What happens when people interact ?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.

Conveying intentions

Assuming a difference in the way this is carried out by people and machines, could that difference be observed at message level ?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g. alarm calls directly and immediately bound to an imminent menace) can be carried out independently of knowledge representation (e.g. information related to a danger not directly observable); in other words, linguistic capabilities for communication and for symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who): can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): contents exchanged (object, information, request, …) are not contingent on participants’ type.
  • Life-span of message (When): life-cycle (instant, limited, unlimited, …) is not contingent on participants’ type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on participants’ type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants’ type.

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.
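
The five categories lend themselves to a simple message record; a sketch (the schema is invented for the purpose), whose point is that no field depends on whether a participant is human or machine:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        sender: str                  # Who: identification, not actual type
        content: object              # What: object, information, request, ...
        life_span: Optional[float]   # When: seconds, or None for unlimited
        address: str                 # Where: physical, virtual, organizational
        channel: str                 # How: any mediated communication channel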

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR’s prerequisites is to improve communication between specific domain experts on one hand, and generic knowledge managers on the other.

On that basis, knowledge representation capabilities cannot be used either to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities suffice on their own to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.
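
A sketch of such a decision model (all predicates hypothetical): each answer is checked for the machine-betraying traits listed above, e.g. knowledge strictly bounded by past transactions and a fully traceable learning process.

    def looks_like_machine(answers):
        clues = [
            answers["who"].only_previous_participants,
            answers["what"].only_told_or_learned,
            answers["when"].only_clocked_or_transactional,
            answers["where"].only_known_locations,
            answers["how"].fully_traceable_learning,
        ]
        return all(clues)   # every clue present -> likely a machine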

Reference Readings

A. M. Turing, “Computing Machinery and Intelligence”

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”

Further Reading

Semantic Web: from Things to Memes

August 10, 2014

The new soup is the soup of human culture. We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme…

Richard Dawkins

The genetics of words

The word meme is the brainchild of Richard Dawkins, in his book The Selfish Gene, published in 1976, well before the Web and its semantic soup. The emergence of the ill-named “internet of things” has brought a new perspective to Dawkins’ intuition: given the clear divide between actual worlds and their symbolic (aka web) counterparts, why not chart human culture with internet semantics ?

Symbolic Dissonance: Flowering Knives (Adel Abdessemed).

With interconnected digits pervading every nook and cranny of material and social environments, the internet may be seen as a way to a comprehensive and consistent alignment of language constructs with targeted realities: a name for everything, everything with its name. For that purpose it would suffice to use the web to allocate meanings and dress things in symbolic clothes. Yet, as the world is not flat, the charting of meanings will be contingent on projections with dissonant semantics. Conversely, as meanings are not supposed to be set in stone, semantic configurations can be adjusted continuously.

Internet searches: words at work

Semantic searches (as opposed to form- or pattern-based ones) rely on textual inputs (keywords or phrases) aiming at a specific reality, or at information about it:

  • Searches targeting reality are meant to return sets of instances (objects or phenomena) meeting users’ needs (locations, people, events, …).
  • Searches targeting information are meant to return documents meeting users’ interest for specific topics (geography, roles, markets, …).

Looking for information or instances.

Interestingly, the distinction between searches targeting reality and information is congruent with the rhetorical one between metonymy and metaphor, the former best suited for things, the latter for meanings.

Rhetoric: Metonymy & Metaphor

As noted above, searches can be heeded by references to identified objects, by the form of digital objects (sound, visuals, or otherwise), or by associations between symbolic representations. Considering that finding referenced objects is basically a technical problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words. From that standpoint searching the web becomes a problem of rhetoric, namely: how to use language to get the most accurate outcome to a query, rapidly and effectively. And for that purpose rhetoric provides two basic contraptions: metonymy and metaphor.

Both metonymy and metaphor are linguistic constructs used to substitute one word (or phrase) for another without altering its meaning. When applied to searches, they are best understood in terms of extensions and intensions, extensions standing for the actual sets of objects and behaviors, and intensions for the sets of features that characterize these instances.

Metonymy uses contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each term can be used in place of the others.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, the others being ignored. Hence, in contrast to metonymy, metaphor is based on intensions.

Metaphors use analogy (dashed line) to map terms whose intensions (dotted line) share a selected subset of features

As it happens, and not by chance, those rhetorical constructs can be mapped to categories of searches:

  • Metonymy will be used to navigate across instances of things and phenomena following structural, functional, or temporal associations.
  • Metaphors will be used to navigate across terms and concepts according to similarities, ontologies, and abstractions.

As a corollary, searches can be seen as scaffolds supporting the building of meanings.

The building of meanings, back and forth between metonymies and metaphors: selected metaphors extract occurrences, to be refined using metonymies.
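
The two navigation modes can be sketched with toy data structures (both functions and their inputs are invented for the example): metonymy follows links between instances (extensions), metaphor matches shared features (intensions).

    def metonymy(instance, links):
        """Contiguity: instances structurally or functionally associated
        with the given one, e.g. 'White House' -> 'US President'."""
        return links.get(instance, set())

    def metaphor(term, intensions, min_shared=1):
        """Similarity: terms whose feature sets (intensions) share at least
        min_shared features with the given term's, the rest being ignored."""
        features = intensions[term]
        return {t for t, f in intensions.items()
                if t != term and len(features & f) >= min_shared}

    # e.g. metaphor("lion", {"lion": {"brave", "feline"},
    #                        "hero": {"brave", "human"}})  -> {"hero"}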

Memes & their making

Today, general-purpose search engines combine brains and brawn to match queries to references, the former taking advantage of language parsers and statistical inferences, the latter running heuristics on gigantic repositories of data and searches. Given the exponential growth of accumulated data and the benefits of hindsight, such engines can continuously improve the relevancy of their answers; moreover, their advances are not limited to accuracy and speed but also embrace a better understanding of queries. And that brings about a qualitative change: accruing improved understandings to knowledge bases provides search engines with learning capabilities.

Assuming that such learning is autonomous, self-sustained, and encompasses concepts and categories, the Web could be seen as a semantic incubator for the development of meanings. That would vindicate Dawkins’ intuition comparing the semantic evolution of memes to the Darwinian evolution of living organisms.

Further Reading

External Links

Governance, Regulations & Risks

July 16, 2014

Governance & Environment

Confronted with spreading and sundry regulations on one hand, and the blurring of enterprise boundaries on the other, corporate governance has to adapt information architectures to new requirements with regard to regulations and risks. Interestingly, those requirements seem to be driven by two different knowledge policies: what should be known with regard to compliance, and what should be looked for with regard to risk management.

Governance (Zhigang-tang)

Compliance: The Need to Know

Enterprises are meant to conform to rules, some set at corporate level, others set by external entities. If one may assume that enterprise agents are mostly aware of the former, that’s not necessarily the case for the latter, which means that information and understanding are prerequisites for regulatory compliance:

  • Information: the relevant regulations must be identified, collected, and their changes monitored.
  • Understanding: the meanings of regulations must be analyzed and the consequences of compliance assessed.

With regard to information processing capabilities, it must be noted that, since regulations generally come as well-structured information with formal meanings, the need for data processing will be limited, if present at all.

With regard to governance, given the pervasive sources of external regulations and their potentially crippling consequences, the challenge will be to circumscribe the relevant sources and manage their consequences with regard to business logic and organization.

Regulatory Compliance vs Risks Management

Risks: The Will to Know

Assuming that the primary objective of risk management is to deal with the consequences (positive or negative) of unexpected events, its information priorities can be seen as the opposite of those of regulatory compliance:

  • Information: instead of dealing with well-defined information from trustworthy sources, risk management must process raw data from ill-defined or unreliable origins.
  • Understanding: instead of mapping information to existing organization and business logic, risk management will also have to explore possible associations with still potentially unidentified purposes or activities.

In terms of governance, risk management can therefore be seen as the symmetric of regulatory compliance: the former relies on processing data into information and expanding the scope of possible consequences, the latter on translating information into knowledge and reducing the scope of possible consequences.

With regard to regulations, governance is about reduction; with regard to risks, it is about expansion

Not surprisingly, that understanding coincides with the traditional view of governance as a decision-making process balancing focus and anticipation.

Decision-making: Framing Risks and Regulations

As noted above, regulatory compliance and risk management rely on different knowledge policies, the former restrictive, the latter inclusive. That distinction also coincides with the type of factors involved and the type of decision-making:

  • Regulations are deontic constraints, i.e. ones whose assessment is not subject to enterprise decision-making. Compliance policies will therefore try to circumscribe the footprint of regulations on business activities.
  • Risks are alethic constraints, i.e. ones whose assessment is subject to enterprise decision-making. Risk management policies will therefore try to prepare for every contingency.

Yet, when set in a governance perspective, that picture can be misleading, because regulations are not always mandatory, and even mandatory ones may leave room for compliance adjustments. And when regulations are elective, compliance is less driven by sanctions or penalties than by the assessment of business or technical alternatives.

Decision patterns: Options vs Arbitrage

Conversely, risks do not necessarily arise from unidentified events and upshots, but can also come from well-defined outcomes with unknown likelihood. Managing the latter will not be very different from dealing with elective regulations, except that decisions will be about weighted opportunity costs instead of business alternatives. Similarly, managing risks from unidentified events and upshots can be compared to compliance with mandatory regulations, with insurance policies instead of compliance costs.

What to Decide: Shifting Sands

As regulations can be elective, risks can be interpretative: with business environments relocated to virtual realms, decision-making may easily turn into crisis management based on conjectural and ephemeral web-driven semantics. In that case the ensuing overlaps between regulations and risks can only be managed if data analysis and operational intelligence are seamlessly integrated with production systems.

When to Decide: Last Responsible Moment

Finally, with regulations’ scope and weighted risks duly assessed, one has to consider the time-frames of decisions about compliance and commitments.

Regarding elective regulations and defined risks, the time-frame of decisions is set at enterprise level in so far as options can be directly linked to business strategies and policies. That’s not the case for compliance with mandatory regulations or for commitments exposed to undefined risks, since both are subject to external contingencies.

Whatever the source of the time-frame, the question is when to decide, and the answer is at the “last responsible moment”, i.e. not until taking sides could change the possible options:

  • Whether elective or mandatory, the “last responsible moment” for compliance decision is static because the parameters are known.
  • Whether defined or not, the “last responsible moment” for commitments exposed to risks is dynamic because the parameters are to be reassessed periodically or continuously.

Compliance and risk taking: last responsible moments to decide

One step further along that path of reasoning, the ultimate challenge of regulatory compliance and risk management would be to use the former to steady the latter.

Further Readings

Sifting through a Web of Things

September 27, 2013

The world is the totality of facts, not of things.

Ludwig Wittgenstein

Objective

At its inception, the young internet was all about sharing knowledge. Then, business concerns came to the web and the focus was downgraded to information. Now, exponential growth turns a surfeit of information into meaningless data, with the looming risk of web contents being once again downgraded. And the danger is compounded by the inroads of physical objects bidding for full netizenship and equal rights in the so-called “internet of things”.

How to put words on a web of things (Ai Weiwei)

As it happens, that double perspective coincides with two basic search mechanisms, one looking for identified items, the other for information contents. While semantic web approaches are meant to deal with the latter, it may be necessary to take one step further and bring the problem (a web of things and meanings) and the solutions (search strategies) within an integrated perspective.

Down with the System Aristocrats

The so-called “internet second revolution” can be summarized as the end of privileged netizenship: down with the aristocracy of systems and their absolute lid on internet residency; within the new web, everything should be entitled to have a voice.

Before and after the revolution: everything should have a say

But then, events are moving fast, suggesting behaviors unbecoming to the things that used to be. Hence the need for a reasoned classification of netizens based on their identification and communication capabilities:

  • Humans have inherent identities and can exchange symbolic and non symbolic data.
  • Systems don’t have inherent identities and can only exchange symbolic data.
  • Devices don’t have inherent identities and can only exchange non symbolic data.
  • Animals have inherent identities and can only exchange non symbolic data.
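
The classification reduces to a small lookup; a toy encoding (sketch only):

    # (inherent identity, exchanges symbolic data, exchanges non-symbolic data)
    NETIZENS = {
        "human":  (True,  True,  True),
        "system": (False, True,  False),
        "device": (False, False, True),
        "animal": (True,  False, True),
    }

    def can_converse_symbolically(kind):
        return NETIZENS[kind][1]

    # e.g. can_converse_symbolically("device") -> False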

Along that perspective, speaking about the “internet of things” can be misleading, because the primary transformation goes the other way: many systems initially embedded within appliances (e.g. cell phones) have made their coming out by adding symbolic user interfaces, mutating from devices into fully fledged systems.

Physical Integration: The Meaning of Things

With embedded systems colonizing every nook and cranny of the world, the supposedly innate hierarchical governance of systems over objects is challenged as the latter call for full internet citizenship. Those new requirements can be expressed in terms of architecture capabilities:

  • Anywhere (Where): objects must be localized independently of systems. That’s customary for physical objects (e.g. geo-localization), but may be more challenging for digital ones on their way across the net.
  • Anytime (When): behaviors must be synchronized over asynchronous communication channels. Existing mechanisms used for actual processes (e.g. Network Time Protocol) may have to be set against modal logic if used for their representation.
  • Anybody (Who): while business systems don’t like anonymity and rely on their interfaces to secure access, things of the internet are to be identified whatever their interface (e.g. RFID).
  • Anything (What): objects must be managed independently of their nature, symbolic or otherwise (e.g. 3D-printed objects).
  • Anyhow (How): contrary to business systems, processes don’t have to follow predefined scripts; versatility and non-determinism are the rules of the game.

Take a sortie in a restaurant as an example: the actual event is associated with a reservation; car(s) and phone(s) are active objects geo-localized at a fixed place and possibly linked to diners; great wines can be authenticated directly by smartphone applications; phones are used for conversations and pictures, possibly added to reviews; friends in the neighborhood can be automatically informed of the sortie and invited to join.

A dinner on the Net: place (restaurant), event (sortie), active objects (car, phone), passive object (wine), message (picture), business objects (reviews, reservations), and social beholders (network friends).

As this simple example illustrates, the internet of things brings together dumb objects, smart systems, and knowledgeable documents. Navigating such a tangle will require more than the Semantic Web initiative, whose purpose points in the opposite direction, back to the origin of the internet, namely how to extract knowledge from data and information.

Moreover, while most of those “things” fall under the governance of the traditional internet of systems, the primary factor of change comes from the exponential growth of smart physical things with systems of their own. When those systems are “embedded”, the representations they use are encapsulated and cannot be accessed directly as symbolic ones. In other words, those agents are governed by hidden agendas inaccessible to search engines. That problem is illustrated a contrario (things are not services) by service-oriented architectures, one of whose primary responsibilities is to support service discovery.

Semantic Integration: The Actuality of Meanings

The internet of things is supposed to provide a unified perspective on physical objects and symbolic representations, with the former taken as they come and instantly donned in some symbolic skin, and the latter boiled down to their documentary avatars (as Marshall McLuhan famously said, “the medium is the message”). Unfortunately, this goal is also a challenge: while physical objects can be uniformly enlisted across the web, that’s not the case for symbolic surrogates, which are specific to social entities and managed by their respective systems accordingly.

With the Internet of Systems, social entities with common endeavors agree on shared symbolic representations and exchange the corresponding surrogates as managed by their systems. The Internet of Things for its part is meant to put up an additional layer of meanings supposedly independent of those managed at systems level. As far as meanings are concerned, the Internet of Things is flat whereas the Internet of Systems is hierarchized.

The internet of things is supposed to level the meaning fields, reducing knowledge to common sense.

That goal raises two questions: (1) what belongs to the part governed by the internet of things, and (2) how its flattened governance is to be related to the structured one of the internet of systems.

A World of Phenomena

Contrary to what its name may suggest, the internet of things deals less with objects than with phenomena, the reason being that things must manifest themselves, or their existence be detected, before being identified, if and when that is possible.

Things first appear on the radar when some binary footprint can be associated with a signalling event. Then, if things are to go further, some information has to be extracted from the captured data (a sketch follows the figure below):

  • Coded data could be recognized by a system as an identification tag pointing to recorded known features and meanings, e.g. a bar code on a book.
  • The whole thing could be turned into its digital equivalent, and vice versa, e.g. a song or a picture.
  • Context and meanings could only be obtained by linking the captured data to representations already identified and understood, e.g. a religious symbol.

From data to information: how to add identity and meaning to things
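
A toy sketch of those three extraction paths (the function name and the registry are illustrative assumptions, not an actual protocol):

    def extract_information(captured: bytes, registry: dict) -> str:
        # Coded data: an identification tag (e.g. a bar code) pointing to
        # recorded features and meanings.
        if captured in registry:
            return registry[captured]
        try:
            # Digital equivalent: the thing is its own binary content
            # (e.g. a song or a picture); here we merely decode text.
            return captured.decode("utf-8")
        except UnicodeDecodeError:
            # Otherwise, meaning can only come from linking the captured data
            # to representations already identified and understood.
            return "unidentified: contextual linking required"

    # Usage: the registry stands in for a system's recorded identities.
    print(extract_information(b"9780140449136", {b"9780140449136": "book: The Odyssey"}))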

Whereas things managed by existing systems already come with net identities and associated meanings, that’s not necessarily the case for digitized ones, as they may or may not have been introduced as surrogates to be used as their real counterparts: if handshakes can stand as symbolic contract endorsements, pictures thereof cannot be used as contract surrogates. Hence the necessary distinction between two categories of formal digitization (modeled in the sketch after the list):

  • Applied to symbolic objects (e.g. a contract), formal digitization enables the direct processing of digital instances as if performed on actual ones, i.e. with their conventional (i.e. business) currency. While those objects have no distinct counterpart (they exist simultaneously in both realms), such digitized objects have to bear an identification issued by a business entity, and that puts them under the governance of standard (internet of systems) rules.
  • Applied to binary objects (e.g. a facsimile), formal digitization applies to digital instances that can be identified and authenticated on their own, independently of any symbolic counterpart. As a corollary, they are not meant to be managed or even modified and, as illustrated by the marketing of cultural contents (e.g. music, movies, books, …), their actual format may be irrelevant. Provided de facto standards are agreed upon, binary objects epitomize internet things.
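
The distinction can be captured by a single flag; a minimal sketch, assuming hypothetical names:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DigitizedObject:
        content: bytes
        symbolic: bool                   # True: stands for the actual object (e.g. a contract)
        issuer_id: Optional[str] = None  # identification issued by a business entity

        def governance(self) -> str:
            # Symbolic objects fall under internet-of-systems rules;
            # self-identified binary objects epitomize internet things.
            return "internet of systems" if self.symbolic else "internet of things"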

To conclude on footprints, the Internet of Things appears as a complement more than a substitute, as it abides by the apparatus of the Internet of Systems for everything already under its governance, introducing new mechanisms only for the otherwise uncharted things set loose in the outer reaches. Can the same conclusion hold for meanings?

Organizational vs Social Meanings

As epitomized by handshakes and contracts, symbolic representations are all about how social behaviors are sanctioned.

Signals are physical events with open-ended interpretations

When not circumscribed within organizational boundaries, social behaviors are open to different interpretations.

In system-based networks, representations and meanings are defined and governed by clearly identified organizations, corporate or otherwise. That’s not necessarily the case for networks populated by software agents performing unsupervised tasks.

The first generations of those internet robots (aka bots) were limited to automated tasks performed on the account of their parent systems, to which they were due to report. Such neat hierarchical governance is being called into question by bots fired and forgotten by their makers, free of reporting duties, their life wholly governed by social events. That’s the case with the internet of things, with significant consequences for searches.

As noted above, the internet of things can consistently manage both system-defined identities and the new ones it introduces for things of its own. But, given a network’s job description, the same consolidation cannot even be considered for meanings: networks are supposed to be kept in complete ignorance of contents, and the walls between addresses and knowledge management must tower well above the clouds. As a corollary, the overlapping of meanings is bound to grow with the expanse of things, and the increase will not be linear.

Contrary to identities, meanings usually overlap when things float free from systems’ governance.

That sheds some light on the so-called “virtual world”, one made of representations detached from identified roots in the actual world. And there should be no misunderstanding: “actual” doesn’t refer to physical objects but to objects and behaviors sanctioned by social entities, as opposed to “virtual”, which covers those whose meaning cannot be neatly anchored to a social authority.

That makes searches in the web of things doubly challenging as they have to deal with both overlapping and shifting semantics.

A Taxonomy of Searches

Semantic searches (form and pattern matching should be seen as special cases) can be initiated by any kind of textual input, keywords or phrases. As searches, they should first be classified with regard to their purpose: finding some specific instance, or collecting information about some topic.

Searches about instances are meant to provide references to:

  • Locations, addresses, …
  • Antiques, stamps,…
  • Books, magazines,…
  • Alumni, friends,…
  • Concerts, games, …
  • Cooking recipes, administrative procedures,…
  • Status of shipments, health monitoring, home surveillance, …

What are you looking for?

Searches about categories are meant to provide information about:

  • Geography, …
  • Product marketing, …
  • Scholarly topics, market research, …
  • Customers relationships, …
  • Business events, …
  • Business rules, …
  • Business processes …

That taxonomy of searches is congruent with the critical divide between things and symbolic representations.

Things and Symbolic Representations

As noted above, searches can be guided by references to identified objects, the form of digital objects (sound, visuals, or otherwise), or associations between symbolic representations. Considering that finding referenced objects is basically an indexing problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words (as opposed to identifiers and forms). From that standpoint searches are best understood in the broader semantic context of extensions and intensions, the former being the actual set of objects and phenomena, the latter a selected set of features shared by these instances.


Searching Steps

A search can therefore be seen as an iterative process going back and forth between descriptions and occurrences or, more formally, between intensions and extensions. Depending on the request, iterations are made of four steps (sketched in code after the list):

  • Given a description (intension), find the corresponding set of instances (extension); e.g. “restaurants” > a list of restaurants.
  • Given an instance (extension), find a description (intension); e.g. “Alberto’s Pizza” > “pizzerias”.
  • Extend or generalize a description to obtain a better match to request and context; e.g. “pizzerias” > “Italian restaurants”.
  • Trim or refine instances to obtain a better match to request and context; e.g. a list of restaurants > a list of restaurants in the Village.

Iterations are repeated until the outcome is deemed to satisfy the quality parameters.
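
A toy sketch of those four steps, assuming a hypothetical catalog and taxonomy (nothing here stands for an actual search engine):

    # Illustrative catalog: intensions (descriptions) mapped to extensions (instances).
    CATALOG = {
        "pizzerias": ["Alberto's Pizza (Village)", "Luigi's (Midtown)"],
        "italian restaurants": ["Alberto's Pizza (Village)", "Luigi's (Midtown)",
                                "Trattoria Roma (Village)"],
    }
    BROADER = {"pizzerias": "italian restaurants"}  # hypothetical generalization taxonomy

    def instances_of(intension):
        """Step 1: description (intension) -> instances (extension)."""
        return CATALOG.get(intension, [])

    def description_of(instance):
        """Step 2: instance (extension) -> description (intension)."""
        return next((i for i, ext in CATALOG.items() if instance in ext), None)

    def search(request, context, min_results=3):
        intension, results = request, instances_of(request)
        while len(results) < min_results and intension in BROADER:
            intension = BROADER[intension]  # step 3: generalize the description
            results = instances_of(intension)
        return [r for r in results if context in r]  # step 4: trim to the context

    print(description_of("Alberto's Pizza (Village)"))  # > pizzerias
    print(search("pizzerias", context="Village"))       # > restaurants in the Village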

The benefit of those distinctions is to introduce explicit decision points with regard to the reference models guiding the searches. Depending on purpose and context, such models could be (see the sketch after the list):

  • Inclusive: can be applied to any kind of search.
  • Domain specific: can only be applied to circumscribed domains of knowledge. That’s the aim of the semantic web initiative and the Web Ontology Language (OWL).
  • Institutional: can only be applied within specific institutional or business organizations. They could be available to all or through services with restricted access and use.
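
Those scopes could be made explicit at each decision point; a minimal sketch, with hypothetical names:

    from enum import Enum, auto

    class ModelScope(Enum):
        INCLUSIVE = auto()        # applicable to any kind of search
        DOMAIN_SPECIFIC = auto()  # circumscribed domains of knowledge (e.g. OWL ontologies)
        INSTITUTIONAL = auto()    # specific organizations, possibly with restricted access

    def accessible(scope: ModelScope, authorized: bool = False) -> bool:
        # Only institutional models may require an authorization.
        return scope is not ModelScope.INSTITUTIONAL or authorized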

From Meanings to Things, and back

The stunning performance of modern search engines comes from a combination of brawn and brains, the brainy part for grammars and statistics, the brawny one for running heuristics on gigantic repositories of linguistic practices and web searches. Moreover, that performance improves “naturally” with the accumulation of data pertaining to virtual events and behaviors. Nonetheless, search engines have grey or even blind spots, and there may be a downside to the accumulation of social data, as it may widen the gap between social and corporate knowledge, and consequently weaken the coherence of outcomes.

Meanings can be inclusive, domain specific, or institutional.

That can be illustrated by a search about Amedeo Modigliani:

  • An inclusive search for “Modigliani” will use heuristics to identify the artist (a). An organizational search for a homonym (e.g. a bank customer) would be dealt with at enterprise level, possibly through an intranet (c).
  • A search for “Modigliani’s friends” may look for the artist’s Facebook friends if kept at the inclusive level (a1), or switch to a semantic context better suited to the artist (a2). The same outcome would have been obtained with a semantic search (b).
  • Searches about auction prices may be redirected or initiated directly, possibly subject to authorization (c).
