Considering Alan Turing’s question, “Can machines think?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine?
What happens when people interact?
Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.
Assuming a difference in the way this is carried out by people and machines, could that difference be observed at the message level?
Communication vs Representation Semantics
To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g., alarm calls directly and immediately bound to an imminent threat) can be carried out independently of knowledge representation (e.g., information related to a danger that is not directly observable). In other words, linguistic capabilities for communication and for symbolic representation can be set apart, and that distinction may help to differentiate people from machines.
Exchanging messages makes use of five categories of information:
- Identification of participants (Who): identifiers can be set independently of the participants’ actual identity or type (human or machine).
- Nature of message (What): the contents exchanged (object, information, request, …) are not contingent on the participants’ type.
- Life-span of message (When): the message life-cycle (instant, limited, unlimited, …) is not contingent on the participants’ type.
- Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on the participants’ type.
- Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on the participants’ type.
Setting apart the trivial case of direct human conversation, it follows that communication capabilities are not enough to discriminate between human and artificial participants.
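The five categories above can be sketched as a minimal message envelope, to make explicit that none of its fields depends on the participants’ type. This is an illustrative sketch; the class and field names are assumptions, not part of any standard:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Lifespan(Enum):
    """When: life-cycle of a message."""
    INSTANT = auto()
    LIMITED = auto()
    UNLIMITED = auto()

@dataclass
class Message:
    """Envelope carrying the five categories of exchange information.

    No field reveals whether a participant is human or artificial,
    which is the point of the argument above.
    """
    sender: str         # Who: identifier, independent of actual identity
    receiver: str       # Who
    content: object     # What: object, information, request, ...
    lifespan: Lifespan  # When: instant, limited, unlimited, ...
    location: str       # Where: physical, virtual, or organizational address
    channel: str        # How: mediating channel for non-direct communication

# Example: the same envelope works whatever the nature of the agents.
msg = Message("agent-A", "agent-B", "request:status",
              Lifespan.LIMITED, "virtual://room-1", "tcp")
```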
Knowledge Representation Capabilities
Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation (KR) can be characterized by five capabilities:
- Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
- Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
- Fragmentary theory of intelligent reasoning: a KR is a partial model of what things can do and of what can be done with them.
- Medium for efficient computation: a KR must make knowledge processable by computers, a prerequisite for any machine reasoning or learning.
- Medium for human expression: one prerequisite of a KR is to support communication between domain experts on one hand and generic knowledge managers on the other.
Since each of these capabilities can, in principle, be supported by machines as well as by humans, knowledge representation capabilities cannot, on that basis, be used to discriminate between human and artificial participants.
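The first three capabilities (surrogate, ontological commitment, fragmentary reasoning) can be illustrated with a toy knowledge base echoing the alarm-call example above. All names and facts here are illustrative assumptions:

```python
# Ontological commitment: categories of things that may exist in the domain.
ontology = {"Danger", "Location", "Signal"}

# Surrogates: symbolic counterparts of actual objects, events, relationships,
# stored as (subject, predicate, object) statements.
facts = [
    ("leopard", "is_a", "Danger"),   # stands in for an actual threat
    ("leopard", "seen_at", "river"),
    ("river", "is_a", "Location"),
]

def entities_of(category, kb):
    """Fragmentary reasoning: list the surrogates committed to a category."""
    return [s for (s, p, o) in kb if p == "is_a" and o == category]

print(entities_of("Danger", facts))  # -> ['leopard']
```

Nothing in such a structure reveals whether it was produced by a human or a machine, which is why KR capabilities alone cannot settle the question.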
Returning to the Turing Test
Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:
- Who do you know: machines can only know previous participants.
- What do you know: machines can only know what they have been told, directly or indirectly (learning).
- When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
- Where did/will you know: machines can only know of locations identified by past or planned communications.
- How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.
Hence, given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.
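Such a decision model could be sketched as a set of probes, one per question, each checking whether an answer stays within the machine-typical bounds listed above (knowledge traceable to recorded participants, transactions, or communications). The probe names and answer format below are hypothetical assumptions:

```python
def machine_like(answers):
    """Return True if every answer is traceable to recorded events,
    as a machine's answers would be according to the argument above.
    `answers` maps each question to a dict describing the answer's source."""
    probes = {
        "who":   lambda a: a.get("source") == "previous_participant",
        "what":  lambda a: a.get("source") in {"told", "learned"},
        "when":  lambda a: a.get("source") in {"clock", "transaction"},
        "where": lambda a: a.get("source") == "communication_record",
        "how":   lambda a: a.get("traceable") is True,
    }
    return all(probes[q](answers.get(q, {})) for q in probes)

# A human answer typically falls outside these bounds...
human_answer = {"who": {"source": "childhood_friend"}}
# ...while a machine's answers all trace back to recorded events.
machine_answer = {
    "who":   {"source": "previous_participant"},
    "what":  {"source": "told"},
    "when":  {"source": "clock"},
    "where": {"source": "communication_record"},
    "how":   {"traceable": True},
}
print(machine_like(machine_answer), machine_like(human_answer))  # True False
```

A scripted scenario would then amount to fixing the sequence of questions and the bounds against which answers are scored.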