Friday 27 July 2012

Turing's legacy: AI in the UK today


Since Alan Turing first posed the question “Can machines think?” in his seminal 1950 paper, “Computing Machinery and Intelligence”, the field of Artificial Intelligence (AI) has been a growing and fruitful one. While some might argue we are still as far away as ever from reproducing anything approaching human intelligence, the capabilities of artificial systems have expanded to encompass some impressive achievements.

The defeat of world chess champion Garry Kasparov by IBM’s Deep Blue in 1997 was a significant moment, but perhaps not as surprising as Watson’s victory on the US quiz show, Jeopardy!, last year. The processing of spoken language proved to be a particularly hard nut for AI to crack, but Watson’s victory signals that a widening range of abilities traditionally thought of as uniquely human are beginning to yield to the arsenal of techniques AI researchers have at their disposal.

In the wake of Turing’s centenary last month, it seems fitting to survey how far the field the London-born mathematician pioneered has come in the UK since he was tragically lost to us over half a century ago. This article therefore attempts to identify some of the most significant recent trends in AI research in the UK, as well as describing some projects that have either generated significant media interest, had significant social impact, or offer great promise for the future.

Computational creativity

One of the most visible recent successes has come from attempts to develop software that exhibits behaviour which would be judged creative in a human. A growing community of researchers has been active in this area for some time, culminating in the first computational creativity conference, held in Lisbon in 2010.

An important player has been Simon Colton at Imperial College London, with his software artist, The Painting Fool. In a 2009 editorial, Colton and others described the fragmentation of AI research from the ambition of early projects aimed at “artefact generation” into subfields, such as machine learning, planning, etc., in a “problem-solving paradigm”. They went on to claim that computational creativity researchers are “actively engaged in putting the pieces back together again”. The aim is to combine various AI methods with techniques in areas like computer graphics, to automatically generate “artefacts of higher cultural value”.

He has argued that for software to be judged creative, it needs to demonstrate three key aspects of human creativity - skill, appreciation, and imagination. Skill is not difficult to demonstrate for software which can abstract regions of colour in images, shift colour palettes, and simulate natural media such as paint brushes, but the others would seem at first glance to be beyond the capabilities of current systems.
Nevertheless, an attempt to demonstrate appreciative behaviour, by using machine vision techniques to detect human emotion and paint appropriate portraits, won the British Computer Society’s Machine Intelligence prize in 2007. More recently, the group’s use of generative AI techniques, such as evolutionary search, along with 3D modelling tools, has led to works of art deemed sufficiently imaginative to have been exhibited in two exhibitions in Paris last year. The pieces were neither abstract nor based on photographs, challenging the idea that mere algorithms cannot produce original, figurative art. The pictures were also featured in the Horizon documentary, The Hunt for AI, broadcast this April.
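
To make the “skill” component concrete, here is a minimal sketch of two of the basic operations mentioned above, abstracting regions of colour and shifting a palette, written with the Pillow imaging library. It illustrates the kind of image processing involved rather than anything from The Painting Fool itself, and the filenames are placeholders.

    # A minimal sketch of two "skill" operations: crude colour-region abstraction
    # and a palette (hue) shift. Illustrative only; not The Painting Fool's code.
    from PIL import Image, ImageOps

    def abstract_colour_regions(img: Image.Image, bits: int = 3) -> Image.Image:
        """Flatten the image into blocky colour regions by keeping only `bits` bits per channel."""
        return ImageOps.posterize(img.convert("RGB"), bits)

    def shift_palette(img: Image.Image, hue_shift: int = 64) -> Image.Image:
        """Shift the colour palette by rotating the hue channel (Pillow uses 0-255 for hue)."""
        h, s, v = img.convert("HSV").split()
        h = h.point(lambda x: (x + hue_shift) % 256)
        return Image.merge("HSV", (h, s, v)).convert("RGB")

    if __name__ == "__main__":
        source = Image.open("portrait.jpg")              # placeholder input image
        abstract_colour_regions(source).save("regions.png")
        shift_palette(source).save("shifted.png")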

But it is the combination of many different computing and AI techniques, and their “layering” into a teaching interface through which the system learns, which demonstrates the bold claim of “putting the pieces back together again”.

Enactive cognition

A potential criticism of computational creativity is that it lacks what some might claim is the crucial purpose of artistic endeavour. Art is often thought of as a form of communication and any current artificial system lacks the intentionality necessary to motivate communication between what philosophy calls autonomous agents. There is a radical view in cognitive science, which claims that consciousness is a unique property of evolved, biological life. This is the life-mind continuity thesis of the Enactive Cognition movement. Life (and the potential for death) gives rise to self-generated, self-perpetuating action, and thus intention, which is precisely what an engineered system lacks.  

First articulated by Varela, Thompson and Rosch in 1991, enactive cognition, in its broadest sense, is an attempt to reframe the questions we ask when we investigate cognition and consciousness. Rather than focussing on individual components of mind such as neural activity or functional anatomy, a much broader focus is encouraged - on whole organisms and their interactions with the environment and each other. Notions entrenched in mainstream cognitive science, such as computation and representation, are rejected as inappropriate models of biological cognition which will ultimately thwart attempts to understand consciousness. Professor Mark Bishop, chair of cognitive computing at Goldsmiths, put it this way: “By reifying the interaction of my brain, in my body, in our world, enactivist theory offers an alternate handle on how we think, how we see, how we feel, that may help us escape the 'Cartesian divide' that has plagued Cognitive Science since Turing.”

Enactivists also see perception and action as inseparable aspects of the more fundamental activity of “sense-making”, involving the purposeful regulation of dynamic coupling with the environment via sensorimotor loops. The “sensorimotor contingencies” theory of psychologist Kevin O’Regan, where perceiving is something we do rather than sensations we have, fits neatly within this framework. Such embodied approaches to perception are gaining ground in psychology and attracting media attention, as evidenced by numerous features in New Scientist.

Chrystopher Nehaniv’s group at the University of Hertfordshire have also made progress using an enactive approach to robotics. A focus on social interaction has led to robots which play peek-a-boo and can learn simple language rules from interacting with humans.

Currently, however, enactivism is primarily a critique of the classical paradigm and lacks a coherent research agenda of its own. As a first step towards remedying this, the Foundations of Enactive Cognitive Science meeting held in Windsor, UK, this February brought together researchers from philosophy, psychology, AI and robotics, in an attempt to start building a framework for future research.

Bio-machine hybrids

The enactivist claim that biological cognition and computation are fundamentally different things raises interesting questions about another recent trend – that of bio-machine hybrids, or “animats”. A group of systems engineers and neuroscientists at the University of Reading, led by Kevin Warwick, have developed a robot controlled by cultures of organic neurons that is capable of navigating obstacles. Cortical cells removed from a rat foetus were cultured in nutrients on a multi-electrode array (MEA), until they regrew a dense mesh of interconnections. The MEA was then fed signals from an ultrasound sensor on a wheeled robot via a Bluetooth link. The team were able to identify patterns of action potentials in the culture’s responses to input, which could be used to steer the robot around obstacles.
Footage of the “rat-brained robot” in action, produced by New Scientist, currently has 1,705,230 hits on YouTube. Its performance is far from perfect of course, and to what extent such an artificially grown, stripped-down “brain” makes a good model for processes in a real brain is an open question. But the team hopes to gain insights into the development and function of neuronal networks which could contribute to our understanding of the mechanisms governing cognitive phenomena such as memory and learning. There is also the hope that by interfering with such a system once trained, new insights may be gained into the causes and effects of disorders like Alzheimer’s and Parkinson’s, potentially leading to new treatments.
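
The control loop itself is conceptually simple, even if the “controller” is a dish of living neurons. Below is a deliberately toy sketch of that kind of loop in Python; the sensor, the culture’s response and the thresholds are all invented stand-ins, not the Reading team’s actual interface code.

    # A toy closed loop: ultrasound reading -> stimulation of the culture ->
    # recorded response -> steering command. The "culture" is a noisy stand-in
    # function; nothing here is the real MEA interface.
    import random

    def read_ultrasound_cm() -> float:
        """Placeholder for the wheeled robot's ultrasound range sensor."""
        return random.uniform(5.0, 200.0)

    def culture_response(distance_cm: float) -> float:
        """Stand-in for the neuronal culture: closer obstacles mean stronger
        stimulation and, in this toy model, a higher overall firing rate."""
        drive = max(0.0, 1.0 - distance_cm / 200.0)
        return drive + random.gauss(0.0, 0.05)      # noisy "spike rate"

    def steering_command(firing_rate: float, threshold: float = 0.6) -> str:
        """Interpret the culture's response as a motor command."""
        return "turn" if firing_rate > threshold else "forward"

    if __name__ == "__main__":
        for _ in range(10):                          # a few control cycles
            d = read_ultrasound_cm()
            r = culture_response(d)
            print(f"obstacle at {d:5.1f} cm, response {r:.2f} -> {steering_command(r)}")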

The question alluded to above is whether such hybrids are constrained by the same limits to computation identified for “standard” computational devices such as Turing Machines (a mathematical abstraction which led to the development of computers). A related question arises from consideration of the emerging technology of neuronal prostheses. Given these technologies could be seen as extreme ends of a spectrum of systems with the potential to converge, it seems timely to ask whether, and at what point, their capacity in terms of notions such as autonomy, intentionality, and even consciousness, would also converge. Questions such as these were the focus of the 5th AISB Symposium on Computing and Philosophy: Computing, Philosophy and the Question of Bio-Machine Hybrids, held at the University of Birmingham, UK, as part of the AISB/IACAP World Congress 2012 in honour of Alan Turing, earlier this month.

AI and the NHS

A project which has had significant social impact in the UK is a system developed jointly by the University of Reading, Goldsmiths and @UK plc, which makes it possible to analyse an organisation’s e-purchasing decisions using AI to identify equivalent items and alternative suppliers, and so highlight potential savings. Using this SpendInsight system, the UK National Audit Office (NAO) identified potential savings for the UK NHS of £500m per annum on only 25% of spend. Ronald Duncan, Technical Director at @UK plc, said: “The use of AI means the system can automatically analyse billions of pounds of spend data in less than 48 hours. This is a significant increase in speed compared to manual processes that would take years to analyse smaller data sets.” He added: “This is the key to realising the savings.”
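
The core idea is easy to illustrate. The sketch below is a crude stand-in for this kind of spend analysis, not SpendInsight’s actual method: it groups purchase lines whose descriptions look equivalent and flags the gap between the price paid and the cheapest supplier in each group. The data and the matching rule are invented for illustration.

    # Crude illustration of spend analysis: group purchase lines whose
    # descriptions look equivalent, then flag the gap between the price paid
    # and the cheapest supplier for that group. Invented data and matching rule.
    purchases = [  # (description, supplier, unit price in £)
        ("nitrile examination gloves medium box 100", "Supplier A", 6.50),
        ("nitrile exam gloves medium 100 box",        "Supplier B", 4.20),
        ("A4 copier paper 80gsm 500 sheets",          "Supplier C", 3.10),
        ("copier paper A4 80gsm ream 500",            "Supplier D", 2.60),
    ]

    def signature(description: str) -> frozenset:
        """Very naive 'equivalent item' key: the set of lower-cased tokens."""
        return frozenset(description.lower().split())

    def group_equivalents(rows, min_overlap: float = 0.6):
        """Greedily group rows whose token sets overlap enough (Jaccard similarity)."""
        groups = []
        for row in rows:
            sig = signature(row[0])
            for group in groups:
                ref = signature(group[0][0])
                if len(sig & ref) / len(sig | ref) >= min_overlap:
                    group.append(row)
                    break
            else:
                groups.append([row])
        return groups

    for group in group_equivalents(purchases):
        cheapest = min(price for _, _, price in group)
        for desc, supplier, price in group:
            if price > cheapest:
                print(f"'{desc}' from {supplier} at £{price:.2f}: "
                      f"potential saving £{price - cheapest:.2f} per unit")

A real system would of course need far more robust item matching (catalogue codes, units, pack sizes), but the shape of the analysis is the same.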

An extension to the system also allows the ‘carbon footprint’ of spending patterns to be analysed, and the results of a survey of civil servants suggest that linking this to a cost analysis would go a long way towards overcoming the institutional inertia that is currently costing the UK billions each year. In other words, strong environmental policies could, in this instance, save the UK billions in a time of financial crisis, through the application of AI technology.

Quantum linguistics

Finally, Bob Coecke, a physicist at the University of Oxford, has devised a novel way of simultaneously analysing the syntax and semantics of language. The technique is being referred to as “quantum linguistics” due to its origins in the work Coecke and his colleague, Samson Abramsky, pioneered, which applies ideas from category theory to problems in quantum mechanics.

Traditional mathematical models of language, known as formal semantic models, reduce sentences to logical structures where grammatical rules are applied to word meanings in order to evaluate truth or falsity and so draw inferences. Meaning tends to be reduced to Boolean logic by such models, which therefore don’t capture nuances such as the different distances between the meanings of “pony” and “horse” compared to “horse” and “cat”. Distributional models quantify the meaning of words by statistically analysing the contexts they occur in and using this to represent the word as a vector in a single, high-dimensional space. In this way, the horse-pony-cat distance might be quantified, in any number of dimensions, by context words like “ride”, “hooves” or “whiskers”. While these models have the advantage of being flexible and empirical, they are not compositional and so cannot be applied to sentences. 
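
A toy version of the distributional idea is easy to sketch. The snippet below builds co-occurrence count vectors for “horse”, “pony” and “cat” from a tiny invented corpus and compares them with cosine similarity; real systems work the same way, but over millions of documents and with more sophisticated weighting. Even with five sentences, “horse” comes out closer to “pony” than to “cat”.

    # Toy distributional semantics: represent each target word by counts of the
    # context words it occurs near, then compare words by cosine similarity.
    # The mini "corpus" is invented purely for illustration.
    import math
    from collections import Counter, defaultdict

    corpus = [
        "the girl will ride the horse and groom its hooves",
        "he likes to ride his pony and brush its hooves",
        "the cat licks its whiskers and purrs",
        "the horse and the pony graze in the field",
        "the cat chases a mouse twitching its whiskers",
    ]

    targets = {"horse", "pony", "cat"}
    window = 3  # words either side that count as "context"

    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            if w in targets:
                context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
                vectors[w].update(context)

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[k] * b[k] for k in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    print("horse ~ pony:", round(cosine(vectors["horse"], vectors["pony"]), 2))
    print("horse ~ cat: ", round(cosine(vectors["horse"], vectors["cat"]), 2))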

Coecke, together with Mehrnoosh Sadrzadeh and Stephen Clark, devised a way of using the graphical maps he had used to model quantum information flow to represent the flow of information between words in a sentence. Talking about this transition from quantum mechanics to linguistics, Professor Coecke said: "By visualizing the flows of information in quantum protocols we were able to use the same method to expose flows of word meaning in sentences. It is as if meaning of words is 'teleported' within sentences."

This map, representing the grammatical structure of a sentence, acts on the vector meanings of the words to produce a vector space representation of the sentence as a whole. In other words, this represents an approach to language analysis which is both distributional, so that meaning is derived empirically, and compositional, so that it is sensitive to syntax and applicable to sentences. As such, it holds great promise for a future leap forward in the automatic processing of natural language. It has been verified on a number of hand-crafted tests as “proof-of-concept”, and work has begun using real-world language samples.
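
Stripped of the category theory, the compositional step can be caricatured numerically: represent a transitive verb as an order-3 tensor that consumes the subject and object vectors and emits a vector in a separate sentence space. The sketch below is only that caricature, with invented random vectors, not the actual formalism of Coecke, Sadrzadeh and Clark.

    # A drastically simplified numerical caricature of the compositional step:
    # a transitive verb is an order-3 tensor (noun x sentence x noun) that maps
    # a subject vector and an object vector to a sentence-space vector.
    # The vectors and tensors below are random, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    NOUN_DIM, SENT_DIM = 4, 3

    nouns = {w: rng.random(NOUN_DIM) for w in ["girl", "horse", "cat", "mouse"]}
    verbs = {w: rng.random((NOUN_DIM, SENT_DIM, NOUN_DIM)) for w in ["rides", "chases"]}

    def sentence_vector(subject: str, verb: str, obj: str) -> np.ndarray:
        """Compose subject-verb-object into a single sentence-space vector."""
        return np.einsum("i,isj,j->s", nouns[subject], verbs[verb], nouns[obj])

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    s1 = sentence_vector("girl", "rides", "horse")
    s2 = sentence_vector("cat", "chases", "mouse")
    print("similarity of the two sentence vectors:", round(cosine(s1, s2), 2))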

Sunday 15 April 2012

What’s all this fuss about bird flu research?

In February the World Health Organisation (WHO) held a meeting in Geneva to discuss the fate of two papers describing the mutation of H5N1 avian influenza viruses into forms more easily transmitted between mammals. Both studies induced mutations that enabled the virus to transmit more effectively between ferrets, which are used because transmission between ferrets is believed to closely mimic human transmission. One study, led by Yoshihiro Kawaoka of the University of Wisconsin, Madison, was submitted to Nature; the other, led by Ron Fouchier of Erasmus Medical Centre in Rotterdam, went to Science.

This followed the flurry of media attention caused by the US National Science Advisory Board for Biosecurity’s (NSABB) recommendation in December that the two manuscripts, “should not be published except with the deletion of experimental details and the crucial data”, due to concerns about how the research could be misused. This is known as redaction. 

The researchers involved called a 60-day pause on further experiments so the scientific community could, “clearly explain the benefits of this important research and the measures taken to minimize its possible risks”, and to allow time for international debate to take place.

Following the WHO meeting, its chair, Dr Keiji Fukuda, announced they had unanimously agreed on full publication. Regarding redaction, he said there was agreement there were issues around how to do it, which would be impossible to resolve quickly. They also decided to extend the pause on research while they work on “increasing public awareness of issues surrounding this research” and “discuss what are the best biosafety conditions for future research to be conducted under”. 

[Image: Yoshihiro Kawaoka revealed his mutations at last week's meeting]
On 30th March the NSABB voted unanimously in favour of full publication of the Kawaoka paper, and 12 to 6 in favour of the Fouchier paper, after reviewing revised manuscripts.

The public debate began in earnest at a meeting held at The Royal Society last week. Kawaoka and Fouchier both spoke about their work, although Fouchier was prevented from revealing details due to Dutch export control laws. The journal editors, members of the NSABB, other flu researchers, bioethicists, public health experts and journalists also spoke and took part in lively discussion.

So why all the fuss?

The H5N1 threat

Renowned virologist, Prof Robert Webster, currently at St Jude Children’s Research Hospital, was first to take the podium. Speaking on epizoonosis (the transmission of disease between species), he left little doubt as to the seriousness of H5N1 evolving into a strain capable of causing a human pandemic.

Influenza is an RNA virus with three distinct species: A, B and C. The most virulent is A, which includes sub-types responsible for the 1918 “Spanish” flu, which wiped out more than 40 million people, the less severe “swine” flu of 2009 (both H1N1), and “bird” flu (H5N1). The H and N refer to two proteins on the virus surface thought to play important roles in its functioning. Hemagglutinin (HA) is involved in binding the virus to target cells and so is important in determining both how easily the virus spreads (transmissibility) and which species it infects (host range). Neuraminidase (NA) is involved in the release of copies from infected cells and so might be related to the virulence, or pathogenicity, of the virus. These proteins are also antigens, for which antibodies can be generated, so they tend to be targeted by antivirals and vaccines.

Influenza A evolves very rapidly. There is no proof-reading mechanism during replication, so almost every copy is a slight mutation. These constant changes accumulate, causing antigenic drift. The virus genome consists of eight separate strands of RNA (coding for 11 viral proteins), so when two different viruses enter the same cell they can reassort (swap segments), causing abrupt changes known as antigenic shift. It’s this process which is thought to enable new strains to jump species. In Webster’s words: “This is an extremely variable virus.”
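
Because the genome comes in eight segments, two viruses co-infecting the same cell can in principle produce 2^8 = 256 different segment combinations in their progeny. The toy sketch below does nothing more than that label shuffling, but it makes the combinatorics behind antigenic shift concrete.

    # Toy illustration of reassortment (antigenic shift): two co-infecting
    # viruses, A and B, each supply a copy of the eight genome segments, so a
    # progeny virus is one of 2**8 = 256 possible segment combinations.
    import itertools
    import random

    SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]  # the 8 RNA segments

    def all_reassortants():
        """Every possible progeny genome: each segment drawn from parent A or B."""
        for choice in itertools.product("AB", repeat=len(SEGMENTS)):
            yield dict(zip(SEGMENTS, choice))

    def random_progeny():
        """One randomly reassorted progeny virus."""
        return {segment: random.choice("AB") for segment in SEGMENTS}

    print("possible reassortants:", sum(1 for _ in all_reassortants()))  # 256
    print("example progeny:", random_progeny())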

H5N1 is endemic in the wild bird population of the world in low pathogenic forms, but when it gets into another species, such as chickens (it first showed up in a Hong Kong poultry market in 1997), it can evolve rapidly into a highly pathogenic form which kills all the birds within a matter of days. This form has infected humans on occasion, but doesn’t transmit well between us, and so hasn’t yet caused a pandemic; but of the 598 infections recorded so far, 59% have died. This may be an overestimate, because there may have been many milder, unrecorded, infections. But even if the case fatality rate is ten times lower than this, that’s still more deadly than our best estimate of the 1918 pandemic, in which it’s thought 2% of those infected died.
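
The arithmetic behind that comparison is worth making explicit; the short calculation below just restates the figures quoted above (598 recorded cases, 59% fatal, roughly 2% for 1918).

    # Back-of-envelope arithmetic for the comparison above, using the figures
    # quoted in the text. Purely illustrative.
    recorded_cases = 598
    recorded_cfr = 0.59                      # fatality among recorded cases
    deaths = round(recorded_cases * recorded_cfr)        # ~353 deaths

    # If milder, unrecorded infections meant the true case count were ten times
    # higher, the case fatality rate would be ten times lower...
    adjusted_cfr = recorded_cfr / 10         # 5.9%

    # ...which is still roughly three times the ~2% estimated for 1918.
    cfr_1918 = 0.02
    print(f"deaths ≈ {deaths}, adjusted CFR = {adjusted_cfr:.1%}, "
          f"ratio to 1918 ≈ {adjusted_cfr / cfr_1918:.1f}x")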

One of Webster’s main concerns is the question of whether the highly pathogenic form is being perpetuated in the wild bird population. Vaccination hasn’t been an effective control strategy in the past, but culling has. Stamping out H5N1 might still be technically feasible, but infection is now so widespread that the political will no longer exists, and if the virus is being perpetuated in wild birds, eradication is no longer even an option.

The aims of the research

Our current understanding of the biology of influenza is a “black hole”, with little really known about what determines transmissibility. When Kawaoka spoke, he clearly stated the aims of his research:

                “To evaluate the pandemic potential of H5N1 viruses. Can they become transmissible between mammals in respiratory droplets? What changes are necessary?” 

Knowing what changes allow the virus to spread between mammals would mean we could monitor for those changes in nature. A single mutation can mean the difference between a low pathogenic and a high pathogenic avian virus, but these studies seem to suggest multiple mutations are required to increase mammalian transmissibility. Of the four mutations in Kawaoka’s reassortant virus, only one has been seen in the wild, but it seems to be common in strains that have infected humans. Fouchier said all of the mutations in the wild-type H5N1 virus created in his lab have appeared in nature, and some of them have been seen in combination. This kind of information could serve as an early warning, allowing us to either stamp out a dangerous strain, or else get a head start on vaccine production and drug stockpiling.
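
In software terms the “early warning” idea amounts to scanning newly submitted sequences against a watchlist of worrying substitutions. The sketch below is purely hypothetical: the watchlist entries and the sequence fragment are invented placeholders, not the mutations from these studies.

    # Hypothetical sketch of sequence surveillance: flag any watchlisted
    # amino-acid substitutions found in an incoming sequence. The watchlist
    # and the sample fragment are invented placeholders.
    WATCHLIST = [("HA", 9, "Y"), ("HA", 34, "S")]   # (protein, position, residue)

    def flag_hits(protein, aa_sequence):
        """Return watchlist entries present in a 1-indexed amino-acid sequence."""
        return [(p, pos, res) for p, pos, res in WATCHLIST
                if p == protein and pos <= len(aa_sequence) and aa_sequence[pos - 1] == res]

    sample = "MKTIIALSYIYCLVFAQDLPGNDNSTATLCLGHHAV"    # placeholder fragment
    print(flag_hits("HA", sample))                   # -> [('HA', 9, 'Y')]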

But not everybody agrees about the necessity of such work. Dr Thomas Inglesby, Director of the Center for Biosecurity of the University of Pittsburgh Medical Center, and an expert in pandemic flu planning and biosecurity, argued the surveillance we have in place isn’t capable of exploiting the knowledge such research produces. Many of the countries experiencing H5N1 infections submit very few specimens, so only a small fraction of infections are sequenced. He said: “Out of the millions of H5N1 infections in birds, people and other animals, only 2934 HA sequences have been submitted in the last eight years to the influenza research database.” He added: “Even when countries do submit specimens, the resulting sequence data may not be analysed or published for months or years.” He isn’t suggesting research on H5N1 shouldn’t be done, only that it should focus on strains that evolve naturally. He said “novel, engineered strains” are what concern him and, given the lack of immediate practical use of this research, the risks, to his mind, outweigh the benefits.

This issue of working with natural versus engineered strains relates to the distinction between observational and experimental methods. One of the organisers, Prof Simon Wain-Hobson, talked about “passenger mutations”, claiming: “If you don’t do experiments, you don’t have any way of finding out which [mutations] are important, which ones might be explaining why the virus might be getting transmitted more efficiently.” Experiments allow scientists to infer causation with a confidence that’s difficult to gain from observing things that have already happened.

Professor Malik Peiris, a virologist at the University of Hong Kong and director of the Centre of Influenza Research, defended the work in terms of risk-assessing animal viruses for their pandemic threat. He said: “We certainly need more surveillance, but at the same time we need a better understanding of the science to risk-assess this surveillance data. We need a better understanding of the biology of transmission.” Surveillance without understanding won’t get us very far. Regarding surveillance, he said: “It’s not ideal, but there is sufficient to make a start”. He also suggested: “If these signatures were available, that would be a strong impetus to improve and an incentive for people to increase their surveillance.”

Another problem is there are many possible evolutionary paths to transmissibility. These experiments have by no means identified all the mutations that could lead to enhanced transmissibility. Pulitzer Prize-winning journalist Laurie Garrett expressed concern about setting up systems based on false security. She said: “We’re going to set ourselves up for having all capacity focussed on a very narrow set of genetic possibilities and nature once again laughs at us.” But Fouchier claims they’re not interested in specific mutations as such. He said: “Clear patterns are emerging about biological traits that allow transmission. So surveillance won’t be targeted to specific mutations, but to those biological traits.”

Doug Holtzmann from the Bill & Melinda Gates Foundation, which part-funded Kawaoka, made a related point with relevance to vaccine development. He said although it would be “myopic” to think a specific experimental mutation is “the issue of concern”, the critical question is: “Are we talking about 50, 5000 or 5 billion strains that could have the potential to cause a human pandemic? I think the fundamental experiments being explored here are meant to begin to test that hypothesis.” If it is 50 or 500 strains, he said “high yield seed strains” could be set up in readiness to “shave off a very critical three months in terms of the vaccine production process”. This is critical because it can take 6-8 months to produce a new vaccine. The 2009 pandemic demonstrated that because flu spreads so rapidly, we’re currently ill-equipped to deal with an unanticipated strain.

What are the risks?

There are concerns about accidental release. Inglesby said: “If a transmissible, virulent H5N1 strain got into a human population with little or no immunity it could be catastrophic, and, as uncommon as accidents are, they do happen.”

[Image: Ron Fouchier, speaking at the Royal Society meeting last week]
The experiments were conducted in labs rated as Bio-Safety Level 3 enhanced (BSL3). Kawaoka described the safety precautions involved in considerable detail. His lab is a stand-alone unit with four layers between the virus and the outside world, decontamination procedures, air filtering, and internal and external monitoring. Researchers wear sealed suits and require FBI clearance. Fouchier said if an accident occurred in his lab, “the public won’t be exposed, but the individuals in the laboratory will be”.

Some scientists have called for H5N1 experiments to be conducted in BSL4 labs, but others say this will limit the number of researchers who can work with H5N1 as these facilities are scarce. Webster said: “Please do not tie our hands by raising the biosafety level to BSL4.”

Some are worried the information in the papers could be used for bioterrorism. The chances of this seem low, for a number of reasons. The independent biosecurity assessment commissioned by Nature when they reviewed the paper stated: “There is no doubt this information could be used immediately by an exceptionally competent laboratory to provide the foundation for a programme to develop a pandemic strain of this virus.” Note the phrase: exceptionally competent laboratory. Philip Campbell, editor of Nature, emphasised: “None of this work could be done in a garage; you really do need sophisticated people and sophisticated techniques to do this work.” When you also consider there is no known way to target flu to specific populations, vaccines and anti-virals already exist, and more lethal, less haphazard pathogens already exist, this all adds up to the conclusion that there are easier, more effective ways to conduct bioterrorism.

The philosopher and bioethicist, Prof John Harris of the University of Manchester, said: “We cannot prevent bioterrorism at the cost of virus research because bioterrorism is not the only [or] even the most likely cause of bioterror.” Nature is the more likely source of threat. 

But Inglesby cautioned: 

“There’s no way to calculate the chances that this work will be replicated in the future as the result of the action of a malevolent scientist, a terrorist group, or a country. History is full of examples of how often we’ve misjudged the intentions and capabilities of others and how often science and technology has been used in ways that weren’t initially intended.” 

Well, there's no accounting for "crazies".

So what are the options?

Well, we could simply not do this kind of work, but as many speakers stressed at the meeting, this would leave us “blind” and at the mercy of nature.

The other option is to publish the work in “redacted” form, leaving out the crucial methods, thus making it difficult to replicate. There are a number of problems with this. It would hamper science as genuine researchers need the information to work effectively. Setting up a system for disseminating the full details to those who “need to know” and are judged “safe” would be tricky. What would the criteria be for deciding who gets this information? Who decides this? How do we ensure the information remains with only those people? Speaking about this, Campbell said: “Once you’ve sent a paper that’s restricted, to say, ten people in the academic community, you’ve lost any hope, in my opinion, of maintaining restricted dissemination.” 

Redaction could also lead to dangerous territory politically. If scientists in poor countries on the front line of the fight against H5N1 felt excluded, they might attempt the work themselves. It’s likely that would happen in labs below BSL3. This could mean that in attempting to deal with what Harris referred to as the “dangerous to know” problem, we end up creating a “dangerous to do” problem. But Prof Arthur Caplan, a bioethicist at the University of Pennsylvania, is worried publication could lead to a proliferation of similar work around the world, possibly in places with less developed biosafety “cultures”.

Many scientists are concerned about the effect censorship might have in discouraging bright scientists from working in the area. The biosafety assessment carried out for Nature concluded: “The majority of life scientists fear the emergence of diseases for which we have no counter measures, and pushing the best scientists towards blander areas in which they can more easily publish must increase our vulnerability to such entities.” Not everyone is convinced by this argument. Garrett pointed out that while biodefense is “the most regulated research you can imagine” it is also well funded, so although regulation far exceeds proposals for increased scrutiny of H5N1 research, scientists have not been driven away from the area. She said: “There’s been no trouble attracting people to the field, of all ages, because there’s money there.” Paul Keim, chair of the NSABB, said if funding bodies think certain work is important, but should be regulated, they have to put more money into those areas. He said the only reason any research is done on anthrax, given the “really onerous regulation”, is because a lot more money went into it. He added: “The same thing will happen here.”

Probably the most important criticism of redaction is that it probably wouldn’t achieve much. These studies didn’t invent any new methods, so anyone with the expertise and resources to replicate the work could likely do so without the methods sections of the published papers. This might not always be the case, but as cyber security “guru” Bruce Schneier made clear in his presentation, the fact that the work is carried out on computers, copies of the papers circulated by email during review, etc., means a determined, skilled “adversary” would not be stopped by censorship. He said: “Any data that’s available on a computer can be hacked. Not most. Any.”

So redaction is no longer being considered. Keim explained the factors which influenced the NSABB’s decision to change their recommendation. He said: “The revised papers had more clarity on risks and benefits.” It seems Fouchier’s original paper was unclear about the lethality of the mutated virus. Although ferrets that had the virus physically implanted in their trachea died, those infected by aerosol transmission didn’t. The virus was also sensitive to the antiviral Tamiflu. In fact, both viruses were non-lethal through aerosol transmission, didn’t transmit as efficiently as the 2009 pandemic strain, and could be treated with existing drugs. So no killer superbugs were created here.

Another factor was the realisation that restricted dissemination was impractical, particularly in light of national export controls, so the choice became between publishing in full, or not at all. They were also influenced by the US government’s publication, days earlier, of a new policy for regulation of what it calls Dual-Use Research of Concern (DURC), which identifies the types of experiments that warrant concern and requires assessment from the point of funding decisions onwards, rather than waiting to catch problems at the publication stage.

But this kind of work will continue, so it’s reasonable to ask whether there are any “red lines” which we shouldn’t be willing to cross. Researchers will want to know why these viruses weren’t deadly. Should experiments aiming to increase the virulence of transmissible strains be conducted? What about engineering vaccine or antiviral resistance? Inglesby said: “Are there any red lines? Let’s decide on that before proceeding.”

And where do we go from here?

Looking beyond the fate of these two papers, this is a truly global issue. That’s what pandemic means. We’re all at risk, so we all need to cooperate to manage that risk. Garrett gave an insightful analysis of the global politics of public health. She pointed out H5N1 is already a public health crisis in countries like Bangladesh, where millions of chickens, often a family’s primary source of protein, have been culled, often without government compensation. She said:

“It is a perceived threat to humanity and that is why we ask of the poor people of the planet that they take the very serious measures that they take, imperilling their own economic survival, in order to control it for the rest of humanity.”

Indonesia declared “viral sovereignty” due to perceptions of conspiracy between rich nations and global organisations to “create false epidemics that would force poor countries to buy the pharmaceutical products of the wealthy world”. Egypt is currently struggling with H5N1. What would happen if the Arab world perceived a significantly different response to human infections cropping up in neighbouring Israel? How much extra risk will developing BSL3 labs in Pakistan create? Garrett said: “Surveillance is politics, and if you’re not embracing the politics then you’re not going to have a solution to anything.”

Researchers need access to samples from the poor countries where H5N1 is endemic. Those nations should have access to knowledge gained from research and any resulting vaccines and drugs. Not just because Indonesia provided Fouchier’s lab with their virus, but, to quote John Harris, “because they have a need”. Gordon Duff, of Sheffield University, who co-chaired the UK’s Scientific Advisory Group for Emergencies (SAGE) during the 2009 pandemic, said: “It’s a global matter – it’s about sharing resources, clinical isolates, animal isolates, results of research and products deriving from it to the benefit of all countries.” The WHO pandemic preparedness plan is based on this notion of “reciprocity”, and a number of speakers stressed the importance of sharing, cooperation, equity and trust.

We need to build more genuinely safe labs on the front line and train more scientists there in virology and biosafety. Ross Upshur of the University of Toronto said: “If global science wants to take this on, we need to make sure that we have highly trained, capable, qualified, virologists and virological labs, all around the world.”

This whole controversy has highlighted the need for this kind of research to be properly monitored and regulated. An international, independent, regulatory body needs to be constituted to do this. Upshur said: “An adequate assessment of potential social risks requires prospective review by an international body with a range of expertise, including, in this case, influenza biology and biosecurity.” How this authority should be constituted, structured and held accountable, and who should be involved, are questions still to be answered, but the scientific community needs to engage with this now if it wants to avoid excessive external regulation and government legislation.

[Image: Paul Berg of Stanford University, via videoconference link]
Finally, Nobel Prize-winner Paul Berg, via videoconference link from Stanford University, spoke about the importance of building public trust “about what one knows, what one doesn’t know, and what needs to be learned”. He said: “It is important that the public should understand the nature and magnitude of the health and/or security risks and they certainly deserve to know that all means are being taken to mitigate those concerns.”

This is all going to take time, but at least the conversation has begun.

If you want to know more, video coverage of the meeting is now online, or you can read more here and here.

Tuesday 15 November 2011

What's all this fuss about Libel Reform?

I went to the Houses of Parliament last Wednesday (9th Nov). I’d never been before (it’s worth the trip, but be aware it takes the best part of half an hour to clear security), and I don’t even live in London, so what prompted my sudden incursion into the halls of British democracy? I went to attend a public meeting on libel law reform, organised by the Libel Reform Campaign, a coalition of Index on Censorship, English PEN and Sense About Science. I’ve been following the campaign since its launch in 2009, and as of now, 57,292 people have signed their petition. So what’s all the fuss about?

You may have spotted the word Libel in the news once or twice over the past couple of years. You’ve probably heard the name Simon Singh. He’s a science writer who was sued by the British Chiropractic Association (BCA) for a piece published in The Guardian calling some of the treatments handed out by chiropractors “bogus”. Or maybe you know that Ben Goldacre had to withhold a chapter of his book, Bad Science, until a libel case against him was dropped? He had dared to criticise vitamin-pill entrepreneur Matthias Rath for campaigning against anti-retroviral drugs in South Africa and claiming that the real answer to the AIDS epidemic there was multivitamin pills.

Those cases are old news by now, but as Simon Singh said on the day, “We’re old stories, but there are people still being threatened.” Citizens Advice has been unable to fully publish a report on certain practices of high street stores which are contrary to consumer protection regulations, despite spending a year’s research and campaign contingency budget libel-proofing it. Nature is in the courts right now for a 2008 news article which was critical of a journal editor. And so on.

The meeting was chaired by Dr Evan Harris, the campaign’s policy advisor, sponsored by Julian Huppert, LibDem MP for Cambridge, and speakers included Singh, Justice Minister Lord McNally, Jonathan Heawood, director of English PEN, John Kampfner, chief exec of Index on Censorship, Tracey Brown, director of Sense About Science (SAS), Dr Peter Wilmshurst, and representatives of Citizens Advice, mumsnet, Nature, Global Witness, Which?, Facebook, the Publishers Association, Liberty and AOL. MPs present who responded were Andy Slaughter (Labour) and Tom Brake (LibDem). Turnout on the day was fantastic; not that I know how many people usually turn up to such things, but there were no seats left by the time I got there. We were rammed in like sardines.

What’s the problem?

Libel law in this country is hopelessly out of date. 

There’s no specific protection for scientists writing in peer-reviewed academic publications, for instance. The law has also failed to keep up with the internet, which potentially makes each and every one of us a published commentator. Have you ever worried about how legal your Facebook or Twitter posts are? Maybe you should start. Vaughan Jones is currently in court defending himself against a libel claim over a negative book review he wrote on Amazon. No, I’m not kidding.

Lisa Fitzgerald, senior counsel from AOL, talked about the legal quagmire of determining who is responsible for what content (ISPs, websites, forum hosts, third-party content providers, users, etc.), commenting that, “The current law discourages active involvement in your website and encourages a take-down culture.” Richard Allan, director of European policy for Facebook, cautioned that “If the law is not got right, spaces like ours and mumsnet simply get smaller”.

And as for bloggers, all I can say is I’m being very careful what I write here…

The current law stifles free speech, and is often counter to the public interest.

The law is so complex that cases can take years, and hundreds of thousands in costs, to resolve. Large organisations with deep pockets can exploit this to use libel threats to force retractions of material they don’t like, effectively censoring information which might be in the public interest. David Marshall of Which? magazine said that, “Exploiting the uncertainty and inequality […] in the current regime is well worth it in the name of reputation management.” He was referring to the practice of using libel law as a PR tool, which is possible mainly because claimants currently don’t have to prove damages to launch a claim; all they have to demonstrate is publication. This makes it very easy to issue libel threats.

British cardiologist, Dr Peter Wilmshurst, was pursued through the UK courts by an American medical supplies company, NMT, for the best part of four years. He made comments at a conference suggesting that the failure of a clinical trial he was involved in may have been due to the failure of one of NMT’s devices. He claimed the argument was a matter for scientific debate, not for the courts, but UK libel law did nothing to stop the case against him. It only ended when NMT went into liquidation earlier this year. Speaking at the meeting he said,

“The action against me prevented others with concerns about the safety of devices made by NMT from voicing their concerns, including making known life threatening problems with NMT’s devices. […] Patients have suffered because of [the law] being used to silence doctors with legitimate concerns about medical safety.”

Last March, the Ministry of Justice Working Group on Libel reported that debates in the public interest are being “chilled”. Judges in the BCA vs. Singh case came to similar conclusions.

There’s an imbalance in favour of rich and powerful claimants. 

The huge cost of libel cases gives an unfair advantage to rich and powerful individuals and organisations. Hardeep Singh’s lawyer stood up and told a tale of being taken down a "dark passage in the Royal Courts of Justice" by a claimant’s lawyer and told, “Our pockets are a lot deeper than yours. Give up.”

Awards are usually not that large (typically in the tens of thousands), but costs can run to several hundred thousand. So if, as an individual, you’re threatened with libel and you decide to defend yourself, if you lose, you’re ruined; if you win, you usually don’t recover all your costs, so you’re lucky to break even. Much better to settle and pay damages you know won’t break you.

As an analogy, there’s a concept used by serious poker players called “Expected Value”, which is just the net result of making the same decision in the same situation over and over again. If calling a £5 bet will win you £40 twenty percent of the time, but lose you £5 the rest of the time, the EV is +£4 (£40 x 0.2 = £8, minus £5 x 0.8 = £4, leaving +£4), so you have positive EV, so you call. You’ll usually lose, but in the long run, you gain. That’s a very common situation in poker, but typical libel defendant situations reverse that logic, because costs can run so high as to deter people even if they think they have a good chance of winning. If costs can run to several hundred thousand, but damages (and presumably the settlement figure) are just a few tens of thousands, then even if you think you have an 80 to 90% chance of winning… 

Well, I’ll let you do the maths, but it doesn’t add up to going to court. 
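
For the curious, here is that maths with invented but plausible round numbers; the probabilities and figures below are assumptions for illustration, not data from any real case.

    # Expected value of fighting a libel claim versus settling, using invented
    # round numbers consistent with the figures quoted above. Illustration only.
    def expected_value(p_win, win_outcome, lose_outcome):
        return p_win * win_outcome + (1 - p_win) * lose_outcome

    # The poker example: a £5 call that wins £40 twenty percent of the time.
    print(f"poker call EV: £{expected_value(0.2, 40, -5):.0f}")      # +£4

    # A libel defendant: assume an 85% chance of winning but £50k of costs you
    # never recover, versus £300k costs plus £30k damages if you lose, against
    # settling now for £30k.
    fight = expected_value(0.85, -50_000, -330_000)
    settle = -30_000
    print(f"fight EV: £{fight:,.0f}   settle: £{settle:,.0f}")

Even with a strong case, the expected cost of fighting dwarfs the cost of settling, which is exactly the chilling effect described above.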

Not that anybody wants more cases ending up in court. One solution to the cost issue is procedural reform, and the development of alternative dispute resolution systems to deal with claims before you get to the hugely expensive court system. But even so, surely a system where damages, not costs, constituted the bulk of the ultimate bill would stand a better chance of deterring genuinely malicious defamation, whilst encouraging people to speak up if they’re speaking the truth?

It’s internationally embarrassing.

Some concerns have been raised about cases of “libel tourism”. Dr Wilmshurst was sued (in the UK) by a US company, for comments made at a US conference, and posted on a US website by an American journalist. The journalist and website were not sued. Hardeep Singh was sued by an Indian "Holy man", who neither reads, writes, nor speaks English, and who had apparently never even been here.

Last year President Obama signed into law the SPEECH Act, designed to protect Americans from libel tourism. 

What’s the other side of the story?

In the early days of the campaign the clamour of voices calling for reform drew some criticism from the legal community. Two academic lawyers, Professor Alastair Mullis (UEA) and Dr Andrew Scott (LSE), published a rejoinder, which explained why some of the changes being called for might be very bad ideas. Reversing the burden of proof, for example, so the person claiming libel has to prove that what they were accused of was false, wouldn’t be such a good idea. If somebody accuses you of something, the onus should be on them to produce the evidence.

Gavin Phillipson was the law professor on the government's working group on libel and has been a moderating voice helping to rebalance the debate and debunk some of the wilder claims reported in the media. He welcomes debate on the issue and advocates some reform, but he isn’t impressed by allegations that English libel law is an international joke. There has been talk of restricting UK jurisdiction to cases where the publication in question has a sufficient proportion of its circulation here, but if the law is reformed along the lines being proposed, this becomes a moot point anyway.

Episodes of media hysteria like the MMR scandal (should) have taught us that just because an issue is grabbing headlines, that doesn’t mean it’s valid. But it’s hard to watch scientists, doctors, NGOs, consumer affairs magazines, internet companies, MPs, journalists, bloggers, publishers, mums, journals, and just plain citizens, stand up and say this is a problem without thinking that it may, in fact, be a problem. I know that isn’t very scientific, but if there are statistics showing that these are isolated, anomalous cases, and that actually the current law is doing the best possible job of balancing the right to reputation with the right to free speech, why isn’t anybody working harder to make those facts public? Where is the anti-reform campaign? The stats I did hear came from Victoria Lustigman of the Publishers Association, regarding a survey of their members conducted earlier this year:

“A hundred percent of respondents said they had modified content or language ahead of publication due to threats of libel actions, forty three percent had actually withdrawn publications, a third had refused work from authors for fear of libel suits, a third had avoided publication of particular subjects…”