Posted: September 5, 2001
Article Summary: The term “sound science” is almost always a way of pulling rank, of harnessing the high stature of science on behalf of one side in a policy debate – usually the side less protective of health or the environment. This column exposes some of the pretenses that typically underlie the term: the pretense that your scientific support is stronger than it is; the pretense that your actions are grounded in science when they are grounded mostly in other considerations; and the pretense that your disputes with critics are about science when they are mostly about trans-scientific issues. The column is also about my clients’ tendency to believe their own pretenses – to forget that they are using or even misusing science to achieve their goals and imagine instead that they are science’s virtuous handmaidens.

Sound Science

A government agency client recently brought me an interesting problem. The agency had been charged with setting a cleanup standard for copper around a century-old smelter. It started with existing data – or at least existing standards in other jurisdictions – about how much copper inside the body constituted a health threat. Then it did some work on how much copper was likely to be absorbed by residents of the neighborhood in question. Much to the agency’s relief, it found that the likely uptake was way below the hazardous level. Knowing that even a very stringent body burden standard would lead to virtually no cleanup requirement, it articulated just such a stringent standard.

Then came the problem. There had been a clerical error in transcribing the absorption data; the numbers were off by several orders of magnitude. And so it turned out that implementing the standard the agency had announced would require digging up half of the town. My client’s problem wasn’t whether to admit the error; the agency had already done that. The problem was how to disclose that, now that the action implications of its new standard had been recalculated, the agency was abandoning that standard and substituting a much less conservative one.

Of course there is nothing unreasonable about deciding that it doesn’t make sense to excavate whole neighborhoods for the sake of a highly conservative health standard. But when the standard was first announced, nobody said it was based on practical criteria like how little it would cost to implement. The agency had simply said it was the right standard to protect people’s health. Suddenly the agency needed to say it was no longer the right standard, now that the cost of implementing it was clear.

I don’t know yet how much community outrage the revised standard will provoke. Maybe not too much – after all, people don’t really like seeing their gardens destroyed and their neighborhoods disrupted. If the outrage does get out of hand, the agency may well decide that outrage trumps cost and choose to stick with the original standard. No doubt that decision too will be explained as a scientific judgment – in the invariable alliteration, “sound science.”

This story illustrates the slippery relationship between science and policy. Of course you want the scientific input to a policy decision to be sound science rather than unsound science. You want to avoid clerical errors in data transcription, for example. But whether the science is sound or unsound, it is only one of the components of a policy decision.

My clients tend to deny this in two quite different ways. First, they pretend that their own decisions are based exclusively on scientific considerations, ignoring or disavowing their reliance on “trans-scientific” factors from cost to community outrage. The second pretense is even more harmful and more disingenuous. My clients typically pretend that their disagreements with critics are scientific disagreements rather than trans-scientific disagreements. They defend their policy preferences as “sound science,” and attack competing policy preferences as unsound science.

Of course unsound science does exist, and deserves to be exposed. There are marginal “experts” at both tails of the normal distribution, ready to claim either that vanishingly low concentrations of dimethylmeatloaf are the probable cause of all the cancers in the neighborhood or that terrifyingly high concentrations are probably good for you. Some so-called scientists are crackpots; others fall prey to the temptation of ideology (on the alarmist side) or money (on the reassuring side). But most scientists – even those whose views are influenced by ideology or money – have views that are within the range of scientific acceptability. In other words, they may be right.

Most risk controversies, moreover, are chiefly about values and interests, not science. Veterans of these sorts of controversies may recall the 1989 battle over Alar, a chemical that was sprayed on apples to hold them on the tree longer. When studies surfaced suggesting that Alar might have adverse health consequences, the U.S. Environmental Protection Agency launched a slow legal process to begin phasing it out of the food supply. The Natural Resources Defense Council, which had long advocated faster regulatory action on a wide range of pesticide-related issues, chose Alar as its poster child for regulatory reform – not because it was the most hazardous agricultural chemical around, but because it was a surefire media winner: Children consume most of the nation’s apples and apple juice. So EPA and NRDC got into a huge battle over how urgent it was to get the Alar off the apples.

NRDC and EPA did interpret the science differently. EPA’s estimate of the health risk of Alar was about an order of magnitude lower than NRDC’s estimate. Or maybe it was two orders of magnitude. I don’t remember, and it didn’t matter. I do remember asking an NRDC spokeswoman if the group would abandon its crusade to move faster on Alar if it discovered EPA was right on the numbers after all. No, she said. Then I asked an EPA spokesman if the agency would speed up to NRDC’s schedule if it accepted NRDC’s numbers. No, he said. Though the two organizations genuinely disagreed about the science, the disagreement was more the result of a policy difference than the cause of it. What they really disagreed about was how bad a risk needs to be to kick it out of the “routine” regulatory category and call it an emergency. If the Alar risk had been as bad as NRDC thought, EPA would still have considered it a routine problem; if it had been as mild as EPA thought, NRDC would still have considered it an emergency. Not to mention EPA’s stake in defending its past decisions, and NRDC’s stake in dramatizing the case for regulatory reform. Or perhaps EPA’s desire to go easy on the apple and chemical industries. Or perhaps NRDC’s need for a good fundraising issue. Lots of things were going on in this battle, and a scientific disagreement was far from the most important for either side.

On the whole, I think, the reassuring side in a risk controversy is more likely than the alarmist side to appropriate the mantle of science. Companies are particularly inclined to frame the dispute as “sound science” versus “junk science.” Activists are likelier to frame it as human health versus corporate greed. When government agencies are siding with companies, they ground their decisions in a lot of references to the data; when they’re siding with the activists, they talk about protectiveness. There are exceptions. In one RCRA cleanup I am involved in, both sides are claiming science as an ally. Supporters of a capping remedy see the problem as science versus public hysteria; supporters of hauling away the waste see it as science versus politics.

Of course even a company that cares more about getting its own way than about scientific merit may still be right on the science. And if the company’s science is sounder than the activists’ science, the company is entitled to say so – even if science ranks low among its motivations. Which leads me to the other typical characteristic of the science in risk controversies. It isn’t just comparatively unimportant to both sides. It is also flat-out unclear.

Put aside disagreements about the data, and look at error bands – that is, quantitative estimates of uncertainty in the data. Quite often, you’ll find an error band that’s three or four orders of magnitude wide. At that point it isn’t much of an exaggeration to conclude, “The data show the risk is somewhere between negligible and catastrophic.” Big help! And that’s just the quantifiable error, emerging from the research methodology itself. If you back up and look at the assumptions underlying the methodology, then rethink the findings with different (more or less conservative) assumptions, you can easily justify even wider error bands.
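To make the arithmetic concrete, here is a minimal Python sketch of how wide such a band really is. The numbers are hypothetical, chosen only for illustration; they are not drawn from any of the cases discussed in this column.

    # Hypothetical illustration of a risk estimate with an error band
    # four orders of magnitude wide. The numbers are made up; they are
    # not taken from any study mentioned in the column.
    central_estimate = 1e-5          # hypothetical lifetime excess risk
    band_orders_of_magnitude = 4     # band spanning four orders of magnitude

    half_band = 10 ** (band_orders_of_magnitude / 2)   # a factor of 100 each way
    low = central_estimate / half_band    # 1e-7: negligible by most regulatory yardsticks
    high = central_estimate * half_band   # 1e-3: well above typical regulatory concern

    print(f"risk somewhere between {low:.0e} and {high:.0e}")
    # -> risk somewhere between 1e-07 and 1e-03,
    #    i.e., somewhere between "negligible" and "demands action"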

So even before you get to the real trans-scientific issues (cost, outrage, ethics, values, politics, and the rest), you have two crucial trans-scientific issues about the science itself. The first question is how risky something has to be before you decide to act; the second question is how sure you have to be before you decide to act. The two get intertwined easily, but they are conceptually different: how safe is safe enough versus how sure is sure enough. The first question is the one NRDC and EPA were fighting about, but the second is the one that provokes the most non-scientific fights claiming to be about science.

Enter the Precautionary Principle. This is a legal term of art, especially in Europe, and its precise meaning is usually unclear on purpose. But the essence of the precautionary approach is clear enough: Don’t mess around with things you don’t entirely understand. (The political right tends to take a precautionary stance toward the sociosphere but not toward the ecosphere; half-understood chemicals are okay but half-understood social changes are dangerous. The political left makes the opposite judgments.) Taken to an extreme, the Precautionary Principle is a recipe for paralysis: Don’t do anything unless you can prove with certainty that it won’t do any harm. Since science can’t prove anything with certainty, and can’t prove a negative at all, this translates to don’t do anything, period. But this extreme version of the Precautionary Principle is more often wielded as a straw man by industry than it is seriously advanced by activists. A milder version is much more sensible: If you have some evidence that something might be harmful but you’re not sure yet, move slowly until you have gathered more data, especially if the harm would be irreversible.

All too often, industry’s definition of “sound science” is the extreme opposite of the Precautionary Principle: Unless you have definitive proof it’s dangerous, full speed ahead. Companies reach this untenable position in stages. They start by asserting that if you don’t have any reason to think it’s dangerous, you ought to assume it’s probably safe. Fair enough. Then they assert that anecdotal evidence isn’t really evidence. And that anomalies show up in every complex data set and shouldn’t be taken as evidence (or pursued to see if they’re evidence ... but that goes unsaid). And that most studies make such conservative assumptions that even a pattern of statistically significant findings still doesn’t really constitute good evidence. And so, little by little, the company comes to believe that anything short of conclusive proof of harm is a clean bill of health.

Activists and the public tend to suppose that companies are intentionally deceiving them. After a quarter-century of corporate consulting, I think the companies are more often deceiving themselves. Greed plays a role in the self-deception, of course. But it isn’t really profitable to persuade people that dimethylmeatloaf is safe when it’s going to turn out dangerous; that way lies the litigation abyss. My clients often assert that they are “confident” about some reassuring scientific conclusion despite gaping holes in the data or even contrary indications they have decided are misleading and therefore chosen to suppress. They are wrong, both ethically and scientifically. But they’re not lying. They really are confident.

The only way I have found to untangle the self-deception is to ask clients to consider writing enforceable guarantees with stipulated penalties. If dimethylmeatloaf turns out to be hazardous (according to specified criteria), big money will change hands. When the client indignantly refuses to write such a guarantee, I get to ask why. An enforceable promise that the impossible won’t happen shouldn’t be a deal-breaker. If it is, the company should reconsider its confidence.

Just in the past few months, a cell phone client told me “sound science” showed that cell phones posed no health risk (despite the existence of studies in both directions and the paucity of any studies about the newest generation of phones). And a biosolids client told me “sound science” showed it was safe to apply treated sludge to farmers’ fields (despite the existence of studies in both directions and the paucity of any studies about newly targeted contaminants and potential health outcomes). And a pharmaceutical client told me “sound science” showed that drugs flushed down toilets didn’t seriously contaminate the environment (despite ... but you get the point). Now it may well turn out that cell phones, biosolids, and pharmaceuticals in the environment are all trivial hazards. My point is that we don’t know enough yet to draw that conclusion with great confidence. In the meantime, it is certainly arguable that precautions are premature, or at least that expensive and inconvenient precautions are premature. It is also arguable that some precautions are wise, at least until we know more. But neither position is grounded in “sound science.” The argument is not about science at all.

The worst thing about pretending that industry’s innocent-until-proven-guilty approach is grounded in sound science is that the pretense slows the science. Three years ago the Environmental Defense Fund published a report showing that the health risk posed by many high production volume chemicals – chemicals produced in huge quantities throughout the developed world – could only be described as “unknown.” There just wasn’t enough health and safety information in the published literature to tell. In the U.S., the Chemical Manufacturers Association found the study’s conclusion so preposterous it replicated it ... and reached the same conclusion. (Both organizations have since rechristened themselves Environmental Defense and the American Chemistry Council, respectively.) Now Environmental Defense, the ACC, and the EPA are collaborating on a project to fill the data gaps. The gaps would have been filled a lot sooner if chemical companies hadn’t kept insisting that “sound science” showed the chemicals were benign.

“Sound science,” by the way, apparently means only sound physical and biological science. Sound social science is ignored altogether. Consider two of the best established principles in the social studies of science:

  • Researchers tend to find what their funders hope they will find – not because they lack integrity, but because they unconsciously resolve the many intangibles in the research process in ways conducive to their funders’ goals. A Greenpeace-funded study is likely to find worse environmental problems than an industry-funded study, even when both are careful and honest.
  • People’s reactions to health, safety, and environmental risks are not proportional to the magnitude and probability of those risks, but rather to a set of factors like control, fairness, responsiveness, dread, and trust. I call these the outrage factors. But even social scientists who don’t much like my “outrage” construct agree that these non-technical factors determine which risks people will shrug off and which they will worry about.

So sound science tells us not to trust sound science too much, at least not unless we share the values of its funders. And sound science tells us that it is the trans-scientific “outrage” half of risk, not the scientific “hazard” half, that determines people’s level of concern. Yet companies continue to insist that we ought to trust the studies they paid for that prove they’re right about how low the risk is. And companies continue to do things that needlessly exacerbate people’s outrage, then wonder aloud why those people are being so “irrational” about the risk when “sound science” proves they needn’t worry.

The cluster of errors I have addressed under the rubric of “sound science” is worth disentangling. There are four:

  • Pretending your scientific support is stronger than it is.  This includes ignoring or disparaging studies that go the other way, ignoring or hiding data gaps and anomalies, and neglecting to address remaining questions with further research.
  • Pretending your actions are grounded in science when they are grounded largely in other considerations.  The pretense doesn’t usually fool people, but it does add to your embarrassment when the science changes but your position doesn’t, or when something else forces your position to change even though the science didn’t.
  • Pretending your disputes with critics are about science when they are mostly about trans-scientific issues.  The trans-scientific issues that are closest to science are the two I have focused on: how safe is safe enough and how sure is sure enough. But even the more distant trans-scientific issues are often confused with science. The current furor over stem cell research, for example, is about morality, not science.
  • Believing your own pretenses.  This is in some ways the worst of the four, and probably the hardest to correct.

When companies misuse “sound science” in these four ways – especially the last – they don’t just hurt the rest of us. They hurt themselves as well. Companies do worse science than they should because they imagine they have better science than they have. And companies do worse “trans-science” than they should because they imagine that the fight is all about science when it isn’t. Too often my clients are deaf to their stakeholders’ trans-scientific concerns: deaf to outrage, deaf to values, deaf to emotion. Too often they are unresponsive to uncertainty; they claim confidence when they should be tentative, and demand the right to move fast when they should ask permission to go slow. In the name of “sound science,” in short, companies are often insensitive to all their critics’ claims, scientific and trans-scientific alike.

The damage companies do themselves by mishandling “sound science” isn’t limited to those occasions when they turn out wrong about the risk ... and end up wishing we had stopped them in time. A better measure of the damage they do themselves is the frequency with which they lose – we do stop them – even though they are probably right about the risk. And perhaps the best measure of the damage they do themselves is our society’s ever-declining confidence in science as a guide to decision-making about risky technologies. A less arrogant approach to “sound science” would facilitate greater public confidence in sound science.

Copyright © 2001 by Peter M. Sandman
