I am neither a supporter nor an opponent of H5N1 bioengineering research. I’m not qualified to assess the risk-benefit tradeoffs of continuing this research, banning it, or circumscribing it with new safety regulations. I’m a risk communication expert, and my assessment is confined to risk communication aspects of the controversy.
The goal of the moratorium on H5N1 bioengineering research was explicitly to make time to “educate” the public – which (unfortunately and inappropriately, in my judgment) in context quite clearly meant to reassure the public that this research was sufficiently safe to allow it to continue. Hearing from the public was also mentioned from time to time, but was a distinctly secondary goal. And insofar as dialogue was a goal, it was part of the larger goal of reassurance; the hope was that worriers would worry less if they felt they had been heard.
It was never a goal of the moratorium to educate people that there might be a significant risk of releasing a bioengineered H5N1 virus with pandemic potential from a laboratory working with such a virus. To the contrary, the moratorium aimed to buy time to persuade people that that risk was negligible.
I am not qualified to have an opinion on the size of the risk of an H5N1 lab release, but everything I know about risk perception and risk communication tells me that the risk is greater than the experts believe it to be, and that the experts believe it to be greater than they are willing to admit publicly.
Beyond that, I know the following:
- Laboratory accidents are more common than the public thinks and than officials acknowledge.
- Laboratory accidents are underreported and often kept secret.
- Near-misses aren’t reported or catalogued at all – a far cry from state-of-the-art safety management in most corporate settings.
The pretense is that the most significant risk is terrorism by outsiders. But the biggest actual risk is either an out-and-out accident or an insider who flouts the safety rules, acts out on an internal grievance, hooks up with a terrorist group, or goes crazy. There are surprisingly few precautions taken to prevent a laboratory scientist or technician from going rogue with a deadly vial. For decades I have asked my corporate clients what their companies do when an employee in a risk-sensitive job is acting squirrelly. Their answers were far from ideal, but I suspect academic research lab managers’ answers would be worse. Actually, I suspect they would be offended by the question.
Even more than corporate managers, public health officials and academic researchers have what Jody Lanard and I have called “A Blind Spot for Bad Guys.” The central example in our 2005 column with that title is the little-known story of how U.S. health officials responded when they learned that an infectious disease test company had mistakenly mailed influenza test kit samples of a potentially pandemic flu strain to labs all over the world. The College of American Pathologists (CAP), which had commissioned the test kits, sent a fax to all the labs telling them what had happened and asking them to destroy the dangerous sample – thus converting a small accident risk into a much larger terrorism risk.
When the H5N1 bioengineering research controversy surfaced in early 2012, the general public paid little attention – and that scant attention waned long ago. There was never much opportunity for “public involvement.” But there really could have been more meaningful dialogue with stakeholders, especially critical and skeptical stakeholders.
Such a dialogue was not a detectable goal of the research moratorium. Consider the agenda of an “international consultative workshop” on the H5N1 research controversy held December 17–18, 2012, in Bethesda, Maryland, under HHS sponsorship. The preliminary agenda listed an astonishing 73 speakers, moderators, and panelists over 16-1/2 hours (8:00 to 6:30 on the 17th and 9:00 to 3:00 on the 18th). Allowing no time for meals or bathroom breaks, that’s less than 14 minutes apiece.
The agenda also called for five opportunities for “moderated discussion” with the audience. I wasn’t there so I can’t tell you to what extent audience comment was allowed to cut into each speaker’s 14 minutes. A “moderated discussion” that lets some of the people in the room sound off for a minute or two each and then “that’s it, you had your turn, let’s hear from somebody else” isn’t a discussion at all. It virtually guarantees that nobody’s contribution will be followed up, explored, responded to, or even listened to.
The moratorium was a communication strategy. It was never a significant safety precaution. It didn’t affect anyone except those who signed onto it and those who received or sought public funding from U.S. agencies like NIH. Research by other scientists around the world could proceed. Research in secret military labs (in the U.S. and elsewhere) could proceed. Amateur research in garages could proceed. And research by moratorium signatories and NIH-affiliated scientists was bound to be restarted once the right opportunity had been crafted – and restarted without too much ratcheting up of safety precautions, which those scientists didn’t think needed ratcheting up in the first place.
So did the moratorium succeed as a communication strategy?
It didn’t succeed in alerting lots of people to the risk of doing this research and the risk of not doing it and thus ceding the field to weapons researchers and amateurs – the most fundamental dilemma of dual-use research of concern (DURC). That was never the goal.
It didn’t succeed in making H5N1 research significantly safer – not even the segment of H5N1 research covered by the new NIH rules. That was never the goal either.
It didn’t succeed in engaging critics in substantive dialogue. That was never the goal either.
It might have succeeded in suppressing public debate – that is, in confining the debate to insiders and the most determined critics. There wasn’t that much public debate even when the issue was at its hottest, but there was a little – and the moratorium might have helped to further reduce that little. That was very much the goal, I think. We’ll know better how well it worked when we see how much public debate there is when the moratorium is lifted. If it’s a big story, the moratorium will have failed. If it’s a nonstory for most of the media and most of the public, the moratorium might deserve some of the credit (or blame) for killing the controversy.
For more of my writing on this issue, see:
- Bird flu risk perception: bioterrorist attack, lab accident, natural pandemic (January 2012)
- The H5N1 Debate Needs Respectful Dialogue, Not “Education” or One-Sided Advocacy (February 2012)
- Talking to the Public about H5N1 Biotech Research (March 2012)
- Science versus Spin: How Ron Fouchier and Other Scientists Miscommunicated about the Bioengineered Bird Flu Controversy (June 2012)
- Does the public care about the H5N1 research controversy? How can officials involve the public? Do they really want to? (December 2012)
Copyright © 2013 by Peter M. Sandman