Posted: February 12, 2013

More Spin than Science:
Risk Communication about the H5N1
Bioengineering Research Controversy

(presented via telephone at a conference on “Freedom in Biological Research: How to Consider Accidental or Intentional Risks for Populations,” Fondation Mérieux and Institut National de la Santé et de la Recherche Médicale, Veyrier-du-lac, France, February 8, 2013)

An MP3 of the actual presentation is also on this website.

The controversy over whether scientists should be allowed to bioengineer potentially pandemic bird flu viruses had pretty much died down by the time I was asked to speak at a February 2013 conference on the issue in France. Since I had criticized the controversy’s consistently miserable risk communication, I was delighted that at least one post mortem conference wanted a risk communication perspective. But I had prior commitments and couldn’t go. When the organizers invited me to present by telephone instead, I jumped at the chance.

These speech notes were distributed to conference participants. They are more extensive than I had time for in the actual presentation. On the other hand, the MP3 recording of the actual presentation includes about 25 minutes of Q&A.

My presentation was mostly borrowed from my previous articles and Guestbook entries on the controversy, all of which are listed and linked at the end of these notes.

Introduction

I want to apologize for needing to talk to you via telephone instead of being present in person. I’m especially regretful that I haven’t had a chance to hear the other speakers – so I won’t know when I’m repeating things that have already been said … or disagreeing with things that have already been said.

But I think most of what I have to say nobody else is going to say, because I come to your topic – how to reconcile the risks and benefits of biological research in general, and of H5N1 bioengineering research in particular – from an outsider’s perspective. I am not a virologist or a laboratory scientist of any sort; I’m not a lab safety expert or a bioethicist. I know very little about the actual risks and benefits of biological research.

I am a risk communication expert, and I want to talk to you today about a meta-topic: how the risks and benefits of biological research are communicated to the general public. In particular, I want to focus on public communications about the two H5N1 research papers by Dr. Kawaoka and Dr. Fouchier that launched the controversy, and on public communications about the voluntary moratorium on H5N1 bioengineering research that was initiated a little over a year ago and that ended in most countries just a few weeks ago.

You all know the dilemma posed by so-called dual-use research. On one side we have not just the potential benefits of such research – in the case at hand, benefits that might include helping to prevent or manage a potentially disastrous pandemic – but also the fundamental freedom of scientists to study what they want and publish what they find. On the other side we have a competing principle that perhaps scientific freedom should be constrained by other societal forces. And we have an array of potential harms – in the case at hand, harms that might include starting a potentially disastrous pandemic.

In their introduction to this conference, the convenors point out that “debates about what is known as the ‘dual-use dilemma’ should foster a dialogue between civil society players and the scientific community.” One goal of such a dialogue is to explore, and again I quote, “whether and how the public should play a role in decisions about potentially dangerous projects.” I want to comment on the quality of that dialogue so far.

If time permits, I hope to make FIVE points.

Point One: The vast majority of the public – of “civil society” – has no interest in any of this so far.

In early 2012 when the controversy was hot, I read a lot of casual references to how H5N1 bioengineering research was arousing high levels of public concern. Both sides said so. Opponents of the Fouchier and Kawaoka research suggested that the public’s concern was evidence that the research should stop. Supporters suggested that the public’s concern was evidence of the need for public education, and perhaps a moratorium to allow time for public education. No one suggested publicly that the public’s concern was evidence that the public is an idiot and that research of this sort should therefore be done quietly if not quite secretly – though of course that’s what many researchers quietly/secretly believe.

A quick look at Google Trends suggests that the controversy was of only brief and modest interest to the general public, as measured by search volume. H5N1 Google searches peaked in October 2005, as H5N1 spread across Europe, suddenly making many westerners think that a bird flu pandemic might be imminent. Then the Google searches fell off precipitously; the number remained fairly constant and fairly low between 2007 and 2009, and then got lower still. A tiny blip in early 2012 coincided with the research controversy; it’s barely detectable, and never approached even the level of 2007–2009, much less the level of 2005.

The Google Trends news coverage graph goes back only to 2008. The controversy generated more H5N1 news coverage in early 2012 than had been typical in the previous few years (though surely less than in 2005) – but since the controversy has abated the amount of H5N1 coverage is now lower than in the 2008–2011 baseline.

So scientists who wanted to engage civil society – who wanted to go beyond a small cohort of already interested stakeholders and enter into dialogue with a wider range of publics – would face a steep uphill climb. As I will argue later, there’s reason to doubt that the scientists who launched the 2012–2013 research moratorium truly wanted such a dialogue. For sure they didn’t get it.

Given the immense difficulty of involving uninterested publics in decisions, what sort of civic engagement is feasible? The field of risk communication suggests several answers, but by far the most fruitful one is to involve stakeholders instead of publics – people who already feel a stake in pending decisions, or who will feel such a stake when informed that those decisions are pending. The lowest-hanging fruit, of course, is involving people who are already clamoring to be involved … which usually means involving your critics. That’s doable. But for obvious reasons, it’s often not very appealing to those who claim to be interested in civic engagement.

Scientists who are accustomed to doing their work with zero public scrutiny may imagine that H5N1 bioengineering research has just endured a year of intensive public scrutiny. But by the standards of my corporate and government clients, accustomed to spending months in the spotlight of news headlines and social media commentary, this has been a very small kerfuffle indeed.

Point Two: Some aspects of this controversy are intrinsically more interesting than others, and therefore more likely to garner attention from the public, the media, and even the stakeholders. Unfortunately, the least interesting issue is probably the most important: the risk of a natural H5N1 pandemic.

It’s possible to imagine a big public controversy over H5N1 research … and it’s possible to imagine such a controversy broadening into a furor over scientific research more generally. If that happened, the main focus would not be the importance of scientific freedom and the threat of censorship. It would be the threat of out-of-control scientists arrogantly ignoring the dangers posed by their research.

And some of those dangers would get far more attention than others. In fact, in the small-scale debate we have actually seen, focused narrowly on H5N1 research itself, some of those dangers have gotten far more attention than others.

Let me list some of the ways H5N1 research might start a pandemic, because the public debate has focused far more on some of these ways than on others. I see six of importance, starting with the ones that have received the most attention:

1. A laboratory accident in spite of strict adherence to safety procedures.
2. An intentional release resulting from a terrorist attack on a research laboratory.


(These first two have received, I think, by far the most public attention … though they are arguably the least likely of the six.)

3. A laboratory accident resulting from careless or intentional violations of safety procedures.


(I am not qualified to have an opinion on the size of the risk of an H5N1 lab release, but everything I know about risk perception and risk communication tells me that the risk is greater than the experts believe it to be, and that the experts believe it to be greater than they are willing to admit publicly.

It’s worth noting that both public and internal debate over the possibility of an H5N1 laboratory accident have focused far more on ramping up safety precautions than on ramping up enforcement and sanctions against violations of those precautions.

In the transcript of the news conference accompanying the end of the moratorium, Dr. Kawaoka said: “We have always followed the rules in the past and will comply with whatever additional guidelines and measures our governments deem are necessary.” Dr. Kawaoka may have forgotten a well-documented 2007 controversy in which his team was found by NIH to have conducted Ebola research that required a BSL-4 lab in a lab that was only BSL-3. The infraction came to light after Dr. Kawaoka sought permission to do the work in a BSL-2 lab. Apparently he thought BSL-4 was excessive for the work he was doing, and thought that was reason enough to interpret the rule that way until NIH explicitly said he was wrong. Edward Hammond of the Sunshine Project, which unearthed the infraction, said in the press release: “It is dismaying but not surprising that NIH’s biodefense program was funding work that violates NIH’s safety rules. The guidelines have been an unenforced afterthought for years.”

Dr. Fouchier commented similarly when the moratorium’s end was announced. “I’ve been convinced from the beginning,” he said, “that this work can be done safely and is done safely by the laboratories who are doing it.” How much careful compliance with new safety rules can we expect from scientists who have been convinced from the beginning that everything is safe already … and are perfectly comfortable saying so?

Anyone who actually believes research scientists adhere scrupulously to safety rules they consider foolish has never spent much time with research scientists.)

4. An intentional release perpetrated by a laboratory employee – who went crazy, or who was recruited by malevolent forces, or who was bribed or blackmailed by those forces, or who felt mistreated by the laboratory itself and sought revenge by releasing a pathogen.


(Ever since the Bhopal disaster of 1984, widely believed to have been employee sabotage, my corporate clients have paid at least a little attention to the question: “How do you protect yourself from an employee who is muttering angrily over lunch about getting the bastards?” My public health clients seem to find this question insulting, and have virtually no answer … the anthrax experience of 2001 notwithstanding. They seem to have what I have elsewhere called “a blind spot for bad guys” – at least bad guys inside the biological research world.)

5. A laboratory accident at a facility not bound by safety procedures – perhaps in the garage of a hobbyist inspired and instructed by the published research of more legitimate scientists; perhaps in an unregulated weapons lab run by terrorists, or by rogue governments, or by secret agencies of our own governments, which similarly benefited from the published research of scientists like Dr. Kawaoka and Dr. Fouchier.


(Although a number of NSABB members were chiefly worried about this scenario – they wanted to restrict publication of the Fouchier and Kawaoka studies, not the research itself – it has received surprisingly little public attention. Nor do scientists themselves seem especially worried about it. This is in line with a longstanding scientific perspective that says scientists are not responsible for evil uses to which others put their research … rather like the contention of gun manufacturers that they are similarly not responsible for evil uses to which others put their products.)

6. An intentional release perpetrated by a terrorist or a government or an insane individual without any involvement of mainstream scientists, except the inspiration and knowledge gleaned from published research.


(This scenario too gets very little attention, at least in the United States.)

This demonstrates a crucial principle of risk communication and risk perception. People (and journalists – and even scientists) are enormously more interested in what I call the “outrage” component of risk – how upsetting something is – than they are in the “hazard” component – how dangerous it is.

Among the reasons for worrying about H5N1 research, the risk of terrorism has a lot more outrage potential than the risk of a laboratory accident. People are certainly capable of getting outraged after a devastating lab accident, with endless follow-up news stories after the disaster about inadequate safety rules inadequately enforced, about the long history of under-reporting of safety violations, etc. But we haven’t had a devastating lab accident involving flu viruses (at least not since a lab accident probably launched the 1977 “Russian flu” outbreak). Possible accidents and even near-misses are a lot less outrage-provoking than images of terrorists storming somebody’s lab.

More importantly, consider the outrage potential of a possible natural H5N1 pandemic versus the outrage potential of a possible terrorist attack – especially in the wake of the mild swine flu pandemic (widely misperceived in Europe as a false alarm). Natural pandemics are, well, natural; terrorism isn’t. The same potential risk from a technical perspective – in this case the possibility of efficient human-to-human transmission of H5N1 – is a far bigger source of outrage if it’s intentional or even accidental than if it’s natural. Look at people’s strong reactions to oil spills compared to their unconcern about oil seeps. Look at the difference between radon emitted by uranium-bearing rock under your house and radon emitted by a mining company’s waste pile near your house. Look at methane in your drinking water before versus after a gas company starts fracking nearby.

In addition, terrorism is more memorable and more dreaded than a naturally occurring event. And terrorism is morally relevant, it’s “evil” as well as “dangerous.” In a battle to arouse outrage, bioterrorism beats nature hands-down.

It isn’t impossible to arouse significant public concern about a natural risk like a pandemic. We did it about H5N1 in 2005, briefly – mostly by giving people the impression that a bird flu pandemic was imminent. We’re paying for that now – and for failing to acknowledge the mildness of the H1N1 pandemic a few years later – with vastly increased public resistance to pandemic warnings. A new terrorist threat strikes journalists and the public as a lot more newsworthy than a bunch of scientists and public health professionals saying yet again that they’ve done a study and they’re worried about a pandemic. Been there, done that.

Warning people about risks is famously difficult; we have enough things to worry about already. Success depends largely on taking advantage of “teachable moments” when something happens that temporarily captures the attention of the public and the media. If they’re skillful and lucky, risk communicators can use a teachable moment to achieve a lasting change in public opinion and public policy.

Maybe, just maybe, the Fouchier and Kawaoka papers offered such a teachable moment, a chance to overcome accumulated skepticism and alert people – or re-alert people – to the risk of an H5N1 pandemic. “Bird flu can spread through the air from one ferret to another! And it can kill ferrets! Will people be next!!??” That opportunity was squandered. Or rather it was hijacked … by a sideshow discussion of the pros and cons of unfettered research and publication.

Point Three: The scientific community never really sought public engagement on the H5N1 research controversy. Its preferred option would have been to ignore the public, but it was forced to retreat to its second choice: to “educate” the public. Listening to the public wasn’t really on the agenda.

I don’t mean to question the sincerity of the organizations sponsoring this conference and the individuals who have given their time to organize it and attend it. Quite the contrary. The sponsors of this conference have a track record for genuine stakeholder consultation on controversial biological research. In 2004, my wife Jody Lanard was helping the World Health Organization develop outbreak communication guidelines, after the SARS epidemic. Discussing the importance of public involvement, a senior WHO official told this story:

The National Institute of Infectious Diseases in Tokyo, in the late 90s, built a Level 4 lab but it is still operating as a Level 3 lab because the community will not accept it. There was no attempt before construction to sell the lab to the local populations – it will probably never operate as a Level 4 lab.

In Lyon when [Jean] Mérieux started planning his Level 4 lab, he consulted with WHO, and WHO told him the Tokyo incident. He immediately conducted a series of town meetings in which top WHO officials participated, the lab opened on time (despite its high visibility on pilings above the Institute Pasteur), and there has been no anti-sentiment for the first five years of its operation.

So I will take it as a given that the people I am speaking to today genuinely want to engage with civil society. But most scientists, certainly most H5N1 scientists, do not.

Their disinclination to engage with civil society does not distinguish scientists. They share that disinclination with corporations, governments, and virtually everybody else. The history of public consultation is mostly the history of forced consultation (outraged people successfully demanded to be heard) or of pro forma consultation (powerful institutions carefully arranged to appear to be listening). The exceptions become legendary – even debatable exceptions like the 1975 Asilomar conference.

Consider the agenda of an “international consultative workshop” on the H5N1 research controversy held December 17–18 in Bethesda, Maryland, under HHS sponsorship. The preliminary agenda listed an astonishing 73 speakers, moderators, and panelists over 16-1/2 hours. Allowing no time for meals or bathroom breaks, that’s less than 14 minutes apiece. The agenda also called for five opportunities for “moderated discussion” with the audience. I wasn’t there so I can’t tell you to what extent audience comment was allowed to cut into each speaker’s 14 minutes. A “moderated discussion” that lets some of the people in the room sound off for a minute or two each and then “that’s it, you had your turn, let’s hear from somebody else” isn’t a discussion at all. It virtually guarantees that nobody’s contribution will be followed up, explored, responded to, or even listened to.

The goal of the moratorium on H5N1 bioengineering research was explicitly to make time to “educate” the public – which (unfortunately and inappropriately, in my judgment) in context quite clearly meant to reassure the public that this research was sufficiently safe to allow it to continue. Hearing from the public was also mentioned from time to time, but was a distinctly secondary goal. And insofar as dialogue was a goal, it was part of the larger goal of reassurance; the hope was that worriers would worry less if they felt they had been heard.

As a way to address controversy, trying to “educate” one’s opponents out of their opposition never succeeds. In the risk communication literature and the planning literature, this strategy goes under the label “decide – announce – defend.” It is thoroughly discredited, because it doesn’t work. It certainly doesn’t work when serious risks are involved. “How safe is safe enough” is a values question for society, not a science question for experts who have a horse in the race.

What was most impressive to me when H5N1 scientists announced their voluntary moratorium in January 2012, and again when they announced its end a few weeks ago, was their nonchalant acknowledgment that the moratorium was intended simply to “educate” the public, not to listen or respond to its concerns. I might have expected some of the scientists involved to make a pretense of seeking a two-way dialogue … and, a year later, of having had one. With occasional exceptions, there was no such pretense. Most of the scientists involved in the moratorium were quite comfortable describing it as an opportunity to teach, not to learn. At least they were honest about that: They seem to have learned very little.

In the news conference announcing the end of the moratorium, Richard Webby captured this best. Asked “what sort of lesson should biomedicine research take from what we’ve just been through,” he answered:

[T]he communications of the benefits … is the major learning points from this and something that perhaps we don’t do particularly well, you know? … Certainly from our point of view, that’s their job to really highlight the benefits of this research…. We certainly can do better and for me [that] is … probably the biggest lesson to come out of this.

So did the moratorium succeed as a communication strategy?

  • It didn’t succeed in alerting lots of people to the risk of doing this research and the risk of not doing it and thus ceding the field to weapons researchers and amateurs – the most fundamental dilemma of dual-use research of concern (DURC). That was never the goal.
  • It didn’t succeed in making H5N1 research significantly safer – not even the segment of H5N1 research covered by the new NIH rules. There are some new rules, and if they’re enforced (a big if) they may help improve safety. But surely that, too, was never the goal of the moratorium, which sought if anything to deter the promulgation of new rules that scientists considered excessive.
  • It didn’t succeed in engaging critics in substantive dialogue. That was never the goal either. Pro forma dialogue was perhaps a goal, and there has been some. But the success of a stakeholder engagement is defined by the extent to which those who are engaged feel that the outcome showed they were heard. By that standard, there wasn’t much successful engagement.
  • It didn’t succeed in educating critics out of their criticisms. That was the articulated goal … and maybe it was the actual goal. But one of the most fundamental truths of risk communication is that you can’t “educate” outraged people to calm down. Monologue can sometimes arouse the apathetic, but only genuine dialogue stands a chance of calming the outraged.
  • But the moratorium might have succeeded in another goal, one that was never articulated: suppressing public debate – that is, in confining the debate to insiders and the most determined critics. As I have pointed out already, there wasn’t that much public debate even when the issue was at its hottest, but there was a little – and the moratorium might have helped to squelch that little. Given the scanty media coverage of the moratorium’s end – in the U.S. at least – I think it’s at least arguable that the moratorium deserves some of the credit (or blame) for killing the controversy.

The dangers of concocting a potentially deadly pandemic virus in the lab are obvious. The benefits of doing so are less obvious. (Phrases like “mad scientist” come easily to mind.) So the burden of proof is on those who wish to assert that this is a sensible thing to do. Before making their case, they must first “own” the burden of proof, listen respectfully to people’s concerns, and join in a collaborative search for a potential compromise. Arrogant and self-serving rants about “censorship” won’t help. H5N1 bioengineering researchers are essentially supplicants, asking everyone else for permission to carry out work with huge (but unquantifiable) potential risks and huge (but unquantifiable) potential benefits. That is how they should address public concerns – as a supplicant.

Some of my corporate clients use the term “social license to operate” to capture their hard-won realization that they can’t do what they want to do if the public doesn’t want them to (and that that’s how it should be). Science, too, needs a social license to operate. The first step in securing your social license is acknowledging that you need it: supplicant, not educator.

Point Four: H5N1 scientists did themselves no favors by addressing public and stakeholder concerns with one-sided advocacy.

Risk communication about the H5N1 research controversy was one-sided in the sense that there was too much monologue – “education” – and too little dialogue. But it was one-sided in another, more harmful way as well. In their effort to win over publics and stakeholders, scientists far too often did violence to the truth … and to the science.

Both sides have often sounded more like advocates than scientists. At a minimum they have been frequently one-sided as they made their case for or against publication of the Fouchier and Kawaoka papers, and for or against continuation of the papers’ research focus. At times they have gone beyond one-sided and been flat-out dishonest.

Without taking the time to parse individual statements that demonstrate this point, let me note the strange correlation of opinions on the core debate with opinions on such scientifically disparate questions as whether ferrets are a good animal model for human influenza response and whether there are likely to have been large numbers of asymptomatic or mild cases of human H5N1 that were missed. The majority of scientists who supported constraints on research and publication seemed to believe that ferrets are a good model and there were few undiagnosed cases; the majority of scientists who opposed such constraints seemed to believe that ferrets are an unreliable model and there were lots of undiagnosed cases. I can’t find many experts who said “Even though I agree that ferrets are a good animal model, here’s why I still support publication…” or “Even though I agree that ferrets are an unreliable animal model, here’s why I still oppose publication….” I can’t find many experts who said “You’re right that there haven’t been many mild cases, but I’m still pro-publication…” or “You’re right that the reported 59% case fatality rate is way too high, but I’m still anti-publication….”

This can’t be a coincidence. On issue after issue, I saw scientists choosing up sides and then marshaling their evidence. That’s how lawyers assess evidence: as ammunition they embrace or disdain depending on which side they’re on. It’s not supposed to be how scientists assess evidence. Scientists who use evidence to prove their hypotheses rather than to test them are being deceptive. If they don’t know it, then they’re being self-deceptive as well.

And when scientists communicate, they’re expected to bend over backwards to be fair. Even if all the facts deployed to advance a case are accurate, scientists aren’t supposed to leave out equally accurate facts that might lead the audience to question their conclusion. Instead of cherry-picking facts, scientists pride themselves on acknowledging the flaws in their case and the sound arguments of their adversaries.

Cherry-picking facts isn’t just bad science. It is also bad risk communication. It exacerbates mistrust and increases the outrage of opponents.

But I think the side that opposed constraints on research and publication generally did worse risk communication than the other side. In particular, in advocating unfettered science, that side far too often showed contempt for the public’s concerns, typically representing those concerns as “perceptions” or “fears.” As used by these scientists, “perceptions” are misperceptions, and “fears” are unjustified fears.

Technical experts tend to imagine that their own risk assessments are purely data-driven. They may realize (and may even acknowledge) that there’s a lot of uncertainty in the data. But they’re unlikely to realize how much their interpretations of the data are driven by things like values, professional biases, self-interest, and even outrage – not to mention all the cognitive biases and universal distortions that Daniel Kahneman and Amos Tversky dubbed “heuristics.”

The term “risk perception” is almost always used to refer to somebody else’s risk perception, especially when we think that the perception in question is mistaken. I “analyze” or “assess” a risk. You merely “perceive” it – which is to say, you misperceive it. Technical experts in particular talk a lot more about the general public’s “risk perceptions” than about their own.

But we are all stuck in our perceptions. I don’t know which path to a devastating H5N1 pandemic is likeliest – random mutation, intentional attack, or laboratory accident. I do know that opinions about H5N1 bioengineering research depend largely on which path you think is likeliest. And I know that which path you think is likeliest – and which path I think is likeliest, and which path the NSABB members think is likeliest, and which path flu experts think is likeliest – depends largely on risk perception factors that have very little to do with the evidence.

Even the Nature letter announcing the moratorium dripped with disdain. Consider this over-reassuring sentence:

Responsible research on influenza virus transmission using different animal models is conducted by multiple laboratories in the world using the highest international standards of biosafety and biosecurity practices that effectively prevent the release of transmissible viruses from the laboratory.

Nothing can go wrong … go wrong … go wrong….

Or consider this excerpt:

Despite the positive public-health benefits these studies sought to provide, a perceived fear that the ferret-transmissible H5 HA viruses may escape from the laboratories has generated intense public debate in the media on the benefits and potential harm of this type of research.

Note the extraordinary lack of parallelism. The proper parallel to “benefits” is “harms” – though it’s customary to contrast benefits with risks, that is, potential harms. That contrast already embeds an asymmetry, but it’s conventional so I’ll let it pass. Benefits versus risks – or if you prefer, potential benefits versus potential risks, or even perceived benefits versus perceived risks – these are all reasonably parallel formulations. But the moratorium announcement does not contrast its confidently asserted “positive public-health benefits” with risks … or with concerns about those risks … or even with fears about those risks … but with something much more ephemeral: a mere “perceived fear.”

Paradoxically, the contempt for public concerns too often demonstrated by supporters of unfettered H5N1 research might actually provoke stricter regulation of that research, and of science more generally. If scientists are nasty enough and myopic enough in their insistence that nobody but other scientists has a right to an opinion about the risks of what they do and what they publish, it is conceivable that society might rebel against unbridled scientific autonomy. Even that is a long shot. Most people have a very strong conviction that governments don’t know how to regulate scientists and we’re better off leaving them alone. That autonomy has nurtured a lot of scientific arrogance, but the arrogance hasn’t yet undermined the autonomy, and odds are it won’t this time either. But if there was a threat to scientific autonomy in the past year’s debate, it did not come from the NSABB’s recommendations. It came from the arrogant, scientifically dishonest, risk-insensitive way some scientists responded to those recommendations.

Point Five: The evolution of Ron Fouchier’s public descriptions of his own research provides a particularly stunning example of scientists’ disinclination to be candid with the public. The fact that scientists who were privately irate at Dr. Fouchier closed ranks to hide his miscommunications is even more stunning … and far more dangerous.

Ron Fouchier, the senior author of one of the two papers that launched the controversy, was particularly guilty of these scientific sins. For several months in late 2011 and early 2012, Fouchier appeared to be trying to arouse interest in his study. His messaging – much of it not journalists’ interpretations or paraphrases or even quotations but Fouchier’s own statements on the Erasmus Medical Center website – was all about how dangerous he considered the H5N1 virus and how terrifying (but incredibly useful) he considered his own soon-to-be-published study. But as the controversy over publication grew, Fouchier became less focused on arousing interest and more focused on allaying concern. And his messaging altered to match his new goal. The change seems to have been unveiled at a private World Health Organization meeting in Geneva on February 16–17. It went fully public on February 29, 2012, when he participated in a panel discussion sponsored by the American Society for Microbiology.

I don’t have time to trace the intricate history of Fouchier’s public miscommunications about his H5N1 bioengineering study; that history is detailed in an article on my website.

Here are Fouchier’s four key reversals from when he was trying to arouse concern to when he was trying to suppress controversy. (Again, Fouchier’s original position is based on his own statements on the website of his own institution; his revised position comes from the transcript and recording of his ASM presentation.)

  • H5N1 in the wild has a human case fatality rate of 59% or 60% versus the real rate is much lower.
  • The scenario of a catastrophic H5N1 pandemic is credible versus that scenario is extremely low-probability.
  • Fouchier’s mutated virus transmitted easily via aerosol in ferrets versus it transmitted only with difficulty.
  • The ferrets died versus they barely got sick.

For a few weeks after Fouchier’s February 29 ASM panel presentation, the tiny world of flu researchers and flubies was abuzz with rumors. Had Fouchier reconsidered his own data? Did he have new data? Had his original paper been unclear? If so, how could a paper that unclear have survived peer review? Might the paper have been “clear” but misleading, perhaps even dishonest? Was Fouchier communicating inconsistently or even irrationally, perhaps because of the pressure of controversy? Or had Fouchier simply been hyping his findings because he wanted to arouse attention, and then decided he’d better downplay his findings instead when all the attention looked like it might threaten publication?

What fascinated me even more than these questions about Fouchier’s about-face was the public reaction of the flu research community. Long-term supporters of publication wanted the NSABB to reconsider in light of Fouchier’s new messages that his virus was not lethal via aerosol, and only weakly transmissible. Opponents of publication said lethality and even efficiency of transmission had never been the issue; what really mattered was that the two studies had expanded the range of species in which H5N1 could transmit.

Neither side said in the mainstream media that they smelled a rat – though I certainly did, and I was convinced they did too. The dominant meme that arose wasn’t that Fouchier had misled everyone about his work. It was that the media and the public had misunderstood his work. To their discredit, scientists who had been equally misled mostly went along with that meme. At worst, some pointed out publicly that the original paper had been “unclear” or “confusing” and needed to be “clarified.” But few if any scientists publicly used the word “misleading,” and none came anywhere near the possibility of dishonesty.

I find it outrageous – though not really that surprising – that the flu science guild united in defense of the reputation of one of its own. This protective response may well have been augmented by the fact that Fouchier had become the poster child for unfettered scientific publication. Scientists who wanted to advocate on behalf of publishing Fouchier’s paper would have found it awkward to criticize discrepancies in how he had described the work. Scapegoating the media for misreporting and the public for misunderstanding is an easy cheap shot.

Several virologists (and two NSABB members) have told me privately that they and many of their peers are outraged at Fouchier. But unlike the freely expressed outrage of scientists at the threat of publication censorship, the outrage of scientists at Fouchier’s miscommunications has been almost entirely suppressed.

Conclusion

Is there a bottom-line conclusion to be drawn from these observations about risk communication regarding the H5N1 bioengineering research controversy? I want to draw three.

Conclusion One is about the distinction between monologue and dialogue. I named this presentation “More Spin than Science,” but I could just as well have named it “More Teaching than Learning.” Engaging the general public on a controversy is extremely difficult. I am reluctant to criticize the research community for not trying very hard to engage the public. But engaging critics is easy, if not pleasant. Despite recent self-congratulations by Tony Fauci and others, this hasn’t been done very well. Perhaps the most important task – both important and doable – is engaging stakeholders who are neither the apathetic public nor the dedicated opposition. This conference may be a belated baby step in the right direction. But so far, there has been very little engagement of this sort throughout the controversy.

Conclusion Two is implicit in the title I actually chose, “More Spin than Science.” Both sides in this controversy have acted more like lawyers than like scientists, briefing the case for their positions rather than exploring the holes in their hypotheses. But for three reasons, I am more concerned about the one-sidedness of the defenders of continuing the research.

1. They have been the worst offenders in this regard.
2. They’re on the right side, in my opinion – a point I have intentionally saved until my conclusion. On balance, I think, H5N1 bioengineering research does need to continue. So I am especially distressed to see the case for continuing it made so one-sidedly.
3. They’re on the reassuring side. In any risk controversy, the alarming side is allowed considerably more leeway to exaggerate than the reassuring side. This is just a special case of the conventional risk management criterion of conservativeness. If a smoke alarm fails to go off when there’s a fire, that’s a major disaster. If a smoke alarm goes off when there’s no fire, that’s a minor inconvenience. So we calibrate smoke alarms – and alarmists generally – to go off too much rather than too little. Thus exaggerated warnings about laboratory safety and terrorist attacks arouse far less outrage than exaggerated reassurances about those possibilities. Those who would reassure the public about H5N1 bioengineering research safety must be especially meticulous about acknowledging the other side’s valid arguments. I think they have failed badly on this standard.

Conclusion Three, my final conclusion, is self-serving, although I’m mostly retired so I’ll let myself make it: Risk communication can help. It is not a magic bullet. We’re still a very sloppy science, grounded more in observation and experience (our equivalent of “natural history”) than in methodologically sound hypothesis testing. Risk communication is a tiny field and a new field – but it is a field, and it upsets me to see organization after organization and meeting after meeting try to figure out how to talk to nonscientists about H5N1 bioengineering research without a risk communication professional onboard. That’s why I was so pleased to be invited to speak to you today.

Copyright © 2013 by Peter M. Sandman
