Since Alan Turing first posed the question “Can machines think?” in his seminal 1950 paper, “Computing Machinery and Intelligence”, the field of Artificial Intelligence (AI) has been a growing and
fruitful one. While some might argue we are still as far away as ever from
reproducing anything approaching human intelligence, the capabilities of
artificial systems have expanded to encompass some impressive achievements.
The defeat of world chess champion Garry Kasparov by IBM’s Deep Blue in 1997 was a
significant moment, but perhaps not as surprising as Watson’s victory on the US quiz show Jeopardy! last year. The processing of natural language proved to be a particularly hard nut for AI to crack,
but Watson’s victory signals that a
widening range of abilities traditionally thought of as uniquely human are
beginning to yield to the arsenal of techniques AI researchers have at their
disposal.
In the wake of Turing’s centenary
last month, it seems fitting to survey how far the field the London-born
mathematician pioneered has come in the UK since he was tragically lost to us
over half a century ago. This article therefore attempts to identify some of the most significant recent trends in AI research in the UK, and to describe some projects that have generated considerable media interest, had real social impact, or hold great promise for the future.
Computational creativity
One of the most visible recent successes has come from
attempts to develop software that exhibits behaviour which would be judged creative in a human. A growing community of researchers have been
active in this area for some time, culminating in the first computational creativity conference in
Lisbon in 2010.
An important player has been Simon Colton at
Imperial College London, with his software artist, The Painting Fool. In a 2009 editorial, Colton and others described the fragmentation of AI research from the ambition of early projects aimed at “artefact generation” into subfields such as machine learning and planning, working within a “problem-solving paradigm”. They went on to claim that
computational creativity researchers are “actively engaged in putting the
pieces back together again”. The aim is
to combine various AI methods with techniques in areas like computer graphics,
to automatically generate “artefacts of higher cultural value”.
Colton has argued that for software to be judged creative, it needs to demonstrate three key aspects of human creativity: skill, appreciation, and imagination. Skill is not difficult to
demonstrate for software which can abstract regions of colour in images, shift
colour palettes, and simulate natural media such as paint brushes, but the
others would seem at first glance to be beyond the capabilities of current
systems.
Nevertheless, an attempt to demonstrate appreciative behaviour, by using machine
vision techniques to detect human emotion and paint appropriate portraits,
won the British Computer Society's Machine Intelligence prize in 2007. More recently, the group’s use of generative AI techniques, such as evolutionary search, along with 3D modelling tools, has led to works of art deemed sufficiently imaginative to have been shown in two exhibitions in Paris last year. The
pieces were neither abstract nor based on photographs, challenging the idea
that mere algorithms cannot produce original, figurative art. The pictures were
also featured in the Horizon
documentary, The Hunt for AI, broadcast
this April.
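How evolutionary search can drive this kind of generative process can be sketched in outline. In the toy example below, the scene encoding, the mutation scheme and the “aesthetic” fitness measure are all invented and bear no relation to The Painting Fool's own methods; the sketch simply evolves a population of simple scene descriptions towards greater colour variety.

import random
from statistics import pvariance

def random_shape():
    # Each shape: x, y, radius and an RGB colour, all scaled to [0, 1].
    return [random.random() for _ in range(6)]

def random_scene(n_shapes=20):
    return [random_shape() for _ in range(n_shapes)]

def fitness(scene):
    # Toy "appreciation" measure: reward colour variety across the scene.
    reds, greens, blues = zip(*[(s[3], s[4], s[5]) for s in scene])
    return pvariance(reds) + pvariance(greens) + pvariance(blues)

def mutate(scene, rate=0.1):
    # Randomly perturb a fraction of the genes.
    return [[g if random.random() > rate else random.random() for g in shape]
            for shape in scene]

def evolve(generations=200, population_size=30):
    population = [random_scene() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best_scene = evolve()  # shape parameters that a renderer could turn into an image

A real system would, of course, replace the toy fitness function with something far richer; the point is only that search over generated artefacts, guided by some measure of quality, is the basic loop.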
But it is the combination of many different
computing and AI techniques, and their “layering” into a teaching interface
through which the system learns, that makes good on the bold claim of “putting the pieces back together again”.
Enactive cognition
A potential criticism of computational creativity is
that it lacks what some might claim is the crucial purpose of artistic
endeavour. Art is often thought of as a form of communication, and any current artificial system lacks the intentionality necessary to motivate communication between what philosophers call autonomous agents. There is a radical view in cognitive science which claims that consciousness is a unique property
of evolved, biological life. This is the life-mind
continuity thesis of the Enactive
Cognition movement. Life (and the
potential for death) gives rise to self-generated, self-perpetuating action, and
thus intention, which is precisely what an engineered system lacks.
First articulated by Varela, Thompson and Rosch in 1991,
enactive cognition, in its broadest sense, is an attempt to reframe the
questions we ask when we investigate cognition and consciousness. Rather than
focussing on individual components of mind such as neural activity or
functional anatomy, a much broader focus is encouraged - on whole organisms and
their interactions with the environment and each other. Notions entrenched in
mainstream cognitive science, such as computation
and representation, are rejected as inappropriate
models of biological cognition which will ultimately thwart attempts to
understand consciousness. Professor Mark Bishop, chair of cognitive computing
at Goldsmiths, put it this way: “By reifying the interaction of my brain, in my
body, in our world, enactivist theory offers an alternate handle on how we
think, how we see, how we feel, that may help us escape the 'Cartesian divide'
that has plagued Cognitive Science since Turing.”
Enactivists also see perception and action as
inseparable aspects of the more fundamental activity of “sense-making”,
involving the purposeful regulation of dynamic coupling with the environment
via sensorimotor loops. The
“sensorimotor contingencies” theory of psychologist Kevin O’Regan, where perceiving is something we do rather
than sensations we have, fits neatly
within this framework. Such embodied approaches
to perception are gaining ground in psychology and attracting media attention,
as evidenced by numerous features in New Scientist.
Chrystopher Nehaniv’s group at the University
of Hertfordshire have also made progress using an enactive approach to
robotics. A focus on social
interaction has led to robots which play peek-a-boo and can learn
simple language rules from interacting with humans.
Currently, however, enactivism is primarily a
critique of the classical paradigm and lacks a coherent research agenda of its
own. As a first step towards remedying this, the Foundations of Enactive Cognitive Science meeting held in Windsor,
UK, this February, brought together researchers from philosophy, psychology, AI
and robotics, in an attempt to start building a framework for future research.
Bio-machine hybrids
The enactivist claim that biological cognition and
computation are fundamentally different things raises interesting questions
about another recent trend – that of bio-machine hybrids, or “animats”. A group
of systems engineers and neuroscientists at the University of Reading, led by
Kevin Warwick, have developed a robot controlled by cultures of organic neurons
that is capable of navigating obstacles. Cortical cells removed
from a rat foetus were cultured in nutrients on a multi-electrode array (MEA), until
they regrew a dense mesh of interconnections. The MEA was then fed signals from
an ultrasound sensor on a wheeled robot via a Bluetooth link. The team were
able to identify patterns of action potentials in the culture’s responses to
input, which could be used to steer the robot around obstacles.
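The published set-up is considerably more involved, but the shape of the closed loop can be sketched. Everything in the toy example below is a stand-in (the sensor, the stimulation mapping and the “culture” are simulated with invented numbers); it illustrates the sense-stimulate-decode-act cycle rather than the Reading team's actual implementation.

import random

def read_ultrasound_cm():
    # Hypothetical sensor reading: distance to the nearest obstacle, in cm.
    return random.uniform(5.0, 200.0)

def stimulate_culture(distance_cm):
    # Map the sensor reading to a stimulation pattern for the MEA (a stub here;
    # in the real system this is what travels over the Bluetooth link).
    return {"electrode": 12, "amplitude": min(1.0, 30.0 / distance_cm)}

def record_response(stimulus):
    # Stand-in for spike detection on the recording electrodes: stronger
    # stimulation tends to produce more "activity" in this simulation.
    return sum(random.random() < stimulus["amplitude"] for _ in range(60))

def steer(spike_count, threshold=20):
    # Decode the culture's response into a motor command for the wheeled robot.
    return "turn" if spike_count > threshold else "forward"

for _ in range(10):  # a few cycles of the sense-stimulate-decode-act loop
    distance = read_ultrasound_cm()
    spikes = record_response(stimulate_culture(distance))
    print(f"distance={distance:6.1f}cm  spikes={spikes:2d}  ->  {steer(spikes)}")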
Footage of the “rat-brained robot” in action,
produced by New Scientist, currently
has 1,705,230 views on YouTube. Its performance is far from perfect, of course,
and to what extent such an artificially grown, stripped-down “brain” makes a
good model for processes in a real brain is an open question. But the team hopes
to gain insights into the development and function of neuronal networks which
could contribute to our understanding of the mechanisms governing cognitive
phenomena such as memory and learning. There is also the hope that by
interfering with such a system once trained, new insights may be gained into
the causes and effects of disorders like Alzheimer’s and Parkinson’s,
potentially leading to new treatments.
The question alluded to above is whether such
hybrids are constrained by the same limits to computation identified for
“standard” computational devices such as Turing Machines (a
mathematical abstraction which led to the development of computers). A related question
arises from consideration of the emerging technology of neuronal prostheses.
Given these technologies could be seen as extreme ends of a spectrum of systems
with the potential to converge, it seems timely to ask whether, and at what
point, their capacity in terms of notions such as autonomy, intentionality, and
even consciousness, would also converge. Questions such as these were the
focus of the 5th AISB Symposium on Computing and Philosophy: Computing, Philosophy and the Question of
Bio-Machine Hybrids, held at the University of Birmingham, UK, as part of the AISB/IACAP World Congress 2012 in honour of Alan Turing, earlier this month.
AI and the NHS
A project which has had significant social impact in
the UK is a system developed jointly by the University of Reading, Goldsmiths and @UK plc, which makes it possible to analyse an organisation’s e-purchasing decisions using AI to identify equivalent items and alternative suppliers, and so highlight potential savings. Using this SpendInsight system, the UK National Audit Office (NAO) identified potential savings for the UK NHS of £500m per annum on only 25% of spend. Ronald Duncan, Technical Director at @UK plc, said:
“The use of AI means the system can automatically analyse billions of pounds of
spend data in less than 48 hours. This is a significant increase in speed
compared to manual processes that would take years to analyse smaller data
sets.” He added: “This is the key to realising the savings.”
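SpendInsight's internal workings are not described here, but the kind of matching involved can be illustrated in outline: group near-identical line-item descriptions using a simple string-similarity measure, then flag cheaper suppliers within each group. The records, similarity measure and threshold below are invented for the example.

from difflib import SequenceMatcher

# Invented purchase records: description, supplier and unit price.
purchases = [
    {"description": "Nitrile examination gloves, medium, box of 100", "supplier": "A", "unit_price": 6.20},
    {"description": "Nitrile examination gloves medium (box of 100)", "supplier": "B", "unit_price": 4.85},
    {"description": "Surgical face masks, type IIR, box of 50",       "supplier": "C", "unit_price": 7.10},
]

def similar(a, b, threshold=0.7):
    # Crude equivalence test on item descriptions.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

groups = []  # each group collects records judged to describe the same item
for record in purchases:
    for group in groups:
        if similar(record["description"], group[0]["description"]):
            group.append(record)
            break
    else:
        groups.append([record])

for group in groups:
    cheapest = min(group, key=lambda r: r["unit_price"])
    for record in group:
        saving = record["unit_price"] - cheapest["unit_price"]
        if saving > 0:
            print(f"Supplier {record['supplier']}: potential saving of "
                  f"£{saving:.2f} per unit by switching to supplier {cheapest['supplier']}")

Scaled to billions of pounds of spend data, the matching clearly needs far more sophisticated techniques than a pairwise string comparison, but the underlying idea of clustering equivalent items and comparing prices across suppliers is the same.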
An extension to the system also allows the ‘carbon footprint’ of spending patterns to be analysed, and the results of a survey of civil servants suggest that linking this to a cost analysis would go a long way
towards overcoming the institutional inertia that is currently costing the UK billions
each year. In other words, strong environmental policies could, in
this instance, save the UK billions in a time of financial crisis, through the
application of AI technology.
Quantum linguistics
Finally, Bob Coecke, a physicist at the University
of Oxford, has devised a novel way of simultaneously analysing the syntax and semantics of language. The technique is being referred to as “quantum linguistics” due to
its origins in the work Coecke and his colleague, Samson Abramsky, pioneered, which
applies ideas from category theory to problems in quantum mechanics.
Traditional mathematical models of language, known
as formal semantic models, reduce sentences
to logical structures where grammatical rules are applied to word meanings in
order to evaluate truth or falsity and so draw inferences. Meaning tends to be
reduced to Boolean logic by such models, which therefore don’t capture nuances
such as the different distances between the meanings of “pony” and “horse”
compared to “horse” and “cat”. Distributional
models quantify the meaning of words by statistically analysing the contexts
they occur in and using this to represent the word as a vector in a single,
high-dimensional space. In this way, the relative distances between “horse”, “pony” and “cat” might be quantified,
in any number of dimensions, by context words like “ride”, “hooves” or
“whiskers”. While these models have the advantage of being flexible and
empirical, they are not compositional
and so cannot be applied to sentences.
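As a toy illustration of the distributional idea (the context words and co-occurrence counts below are invented), each word can be represented as a vector of counts over a handful of context words, with cosine similarity then quantifying how close two meanings lie.

import math

# Invented co-occurrence counts with a handful of context words.
context_words = ["ride", "hooves", "whiskers", "purr", "gallop"]
counts = {
    "horse": [60, 45, 1, 0, 70],
    "pony":  [55, 40, 2, 0, 50],
    "cat":   [2, 0, 65, 80, 0],
}

def cosine(u, v):
    # Cosine similarity: close to 1 for similar words, near 0 for unrelated ones.
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

print("horse vs pony:", round(cosine(counts["horse"], counts["pony"]), 3))  # close to 1
print("horse vs cat: ", round(cosine(counts["horse"], counts["cat"]), 3))   # much smaller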
Coecke, together with Mehrnoosh Sadrzadeh and
Stephen Clark, devised a way of using the graphical maps he had used to model
quantum information flow to represent the flow of information between words in
a sentence. Talking about this transition from quantum mechanics to
linguistics, Professor Coecke said: "By visualizing the flows of
information in quantum protocols we were able to use the same method to expose
flows of word meaning in sentences. It is as if meaning of words is
‘teleported’ within sentences."
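A drastically simplified sketch of the compositional idea, with invented dimensions and numbers and none of the full categorical machinery, might look as follows: if nouns are vectors and a transitive verb is a third-order tensor, the meaning of a subject-verb-object sentence can be obtained by contracting the verb with its arguments, which is the kind of “flow” the diagrams depict.

import numpy as np

noun_dim, sentence_dim = 3, 2

subject = np.array([1.0, 0.2, 0.0])                      # e.g. a vector for "dogs"
obj = np.array([0.1, 0.9, 0.3])                          # e.g. a vector for "cats"
verb = np.random.rand(noun_dim, sentence_dim, noun_dim)  # e.g. a tensor for "chase"

# Contract the verb tensor with its subject and object: the meaning "flows"
# through the shared indices into a single sentence-space vector.
sentence_meaning = np.einsum("i,isj,j->s", subject, verb, obj)
print(sentence_meaning)  # comparable with other sentence vectors, e.g. by cosine similarity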