The development of conversational AI has been underway for more than 60 years, driven in large part by research in the field of natural language processing (NLP). In the 1980s, the departure from hand-written rules and the shift to statistical approaches made NLP more effective and versatile in handling real data (Nadkarni, P.M. et al. 2011, p. 545). Since then, this trend has only grown in popularity, notably fuelled by the wide application of deep learning technologies. In recent years, NLP has found remarkable success in classification, matching, translation, and structured prediction (Li, H. 2017, p. 2), tasks more easily accomplished through statistical models. Naturalistic multi-turn dialogue still proves challenging, however, and some believe it will remain unsolved until we develop an artificial general intelligence capable of “natural language understanding” (Bailey, K. 2017).
To investigate effective system architectures for designing conversational AI, this abstract gives careful consideration to methodologies described in autopoietic theory and conversation theory. Building on the intersection of these theories and other multidisciplinary studies, it argues that conversation construction requires systematic representations of the world, especially those based on situated understanding. Furthermore, the sufficiency of a conversational AI should be measured not by its commonsense cognitive abilities, but by how well it imitates interchanges between human beings. This assertion echoes the definition of intelligence given by G. Pask (1976, pp. 7–8): “Intelligence is a property that is ascribed by an external observer to a conversation between participants if, and only if, their dialogue manifests understanding.” In this light, if a conversational AI displays situated understanding during a successful exchange, it could be said to have demonstrated intelligence.
The approach outlined in this abstract aims to provide a new direction for tackling naturalistic multi-turn dialogue and to extend the benefits of contemporary NLP technologies. It suggests designing conversational AI as a self-referring system (Maturana, H.R. & Varela, F.J. 1980, p. xiii) that participates in “a process of understanding, retaining and learning that goes on” (Pask, G. 1972, p. 212). This becomes highly achievable when deep learning methods are leveraged. The abstract also gives a nod to the “imitation game” famously conceived by A. Turing (1950, p. 433), which he proposed as a favourable alternative to asking the question “Can machines think?”
Continues in source: CONVERSATIONAL AI: THE IMITATION MACHINE – Towards Data Science