4. Share dilemmas.
Dilemma-sharing is to the future what acknowledging uncertainty is to the past and present. It’s acknowledging that you’re not sure what to do.
This is not the same as claiming to have no idea what to do, or claiming not to have considered what to do. Of course you’ve worked on the problem, and of course you have some ideas. But if you’re not sure, say you’re not sure – and say why. “We’re trying to decide between X and Y. X has these advantages and disadvantages. Y has those advantages and disadvantages. We ruled out Z easily for the following reasons – but the choice between X and Y is a tough choice.”
You can do this when you haven’t decided – and ask for guidance from your stakeholders. But dilemma-sharing is just as important when you had to make the decision and you did – to make it clear that it was a tough decision, that you know the choice you didn’t make has merit. This has several advantages: (a) Those who favor the losing choice feel respected; their preferred option got its due consideration. (b) Those who favor the losing choice can’t easily pretend that they are obviously right, when you’re saying it’s not obvious at all who’s right. (c) Those who want to depend entirely on your judgment now and blame you later if you were wrong are forced to recognize that you’re not God and not claiming to be God; that you’re not sure.
In terms of the seesaw, dilemma-sharing is a way of moving to the fulcrum.
Some dilemmas will always be dilemmas; ethical questions, for example, don’t ever get answered. But scientific dilemmas do eventually get answered. In these cases, sharing the dilemma can mean predicting that you will make some mistakes: “We’re going to do X rather than Y for the following reasons. We may turn out wrong.” Or it can mean refusing to decide until the evidence is better: “At this point X and Y look about equally justified. The data aren’t clear enough to choose between them. Patients should consult with their own physicians and make their own choices.” Dilemma-sharing goes down a bit easier in the first scenario than in the second. It is often possible to state a preference, even while explicitly insisting that the science isn’t there to justify it. Or you can offer an algorithm: “If people are more worried about A, they’ll probably want to do X, but those who are more worried about B will probably want to do Y.” The purest form of dilemma-sharing – describing the situation as a toss-up and providing no guidance whatever – is also the most painful.
Among the dilemmas CDC had to face during the 2001 anthrax crisis were these: deciding how strenuously to discourage the public from stockpiling antibiotics; deciding which individuals to test, which to medicate, when to stop; deciding which buildings to test, which to close, how to clean, when to reopen. In the early days, I think, CDC tended to come across as a little more confident than it should have been about these decisions. When the agency got around to dilemma-sharing, it sounded like a change in position – even when it wasn’t.
Here’s a nice example of dilemma-sharing from the November 6, 2001 New York Times. I don’t know who is quoted in this paragraph, but the subject is cleaning the Hart Building: “‘It’s a totally new paradigm and so we’re a bit panicked about it until we develop solutions,’ said a senior federal health official. Ultimately, the official said, the potential for such microbial assaults and subsequent spread of spores should decline.” The reassuring second sentence is all the more reassuring coming as it does from an official who is comfortable confessing that he’s “a bit panicked” trying to figure out how to get rid of the spores.
Dilemma-sharing is hard because it goes against everybody’s grain. Scientific sources usually would rather reach their best judgment, however tentative, and then claim confidence. And the audience usually would rather be told what to do by a confident scientific source. We all tend to get overdependent in a medical emergency, whether the crisis is personal or a matter of public health. Unfortunately, that doesn’t keep us from wreaking vengeance if an expert gives us overconfident advice that turns out badly. For the expert, therefore, the choice is clear: Irritate your audience now by acknowledging uncertainty and sharing the dilemma; or claim omniscience now and risk paying a far higher cost later in outrage and lost credibility.
A month or so after I presented these ideas at CDC, the agency faced a classic opportunity for dilemma-sharing – and took it in its purest and most difficult form. (As always, I have no idea how much of this was my influence; there were plenty of other factors to credit or blame.) The issue was what to do with the cohort of individuals who were just finishing their sixty-day course of antibiotics after possibly being exposed to anthrax. CDC had based the original sixty-day regimen on the only research it had – decades-old research on natural anthrax and healthy patients. But there was animal research suggesting that anthrax spores might conceivably survive in the lungs for longer than sixty days. And CDC scientists had learned to expect the unexpected. So CDC offered patients a choice among three options: (a) Stop at sixty days, and watch for the remote possibility of illness; if it happens, get to a doctor fast. (b) Take another forty days of antibiotics, trading the additional risk of antibiotic side-effects for the additional protection against any anthrax spores that might still be lurking. (c) Take the anthrax vaccine – which has possible side-effects of its own, and has never been tested on people already exposed to the disease. CDC made no recommendation. Any of the three options, it said, had low, uncertain, non-zero risk; given the sorry state of the science, any of the three might be best.
The reaction from the media, from politicians, and (at least as quoted in the media) from the patients themselves was uniformly negative. CDC’s clear statement about the unclear nature of the science was described as “muddled” and “confused”; CDC was repeatedly characterized as having “admitted” that it wasn’t sure what patients should do, as if being sure, or sounding sure, were its obvious responsibility. In a congratulations-and-condolences email to CDC officials, I wrote: “There is this consolation: The criticism that comes to an agency that refuses to give advice when it lacks a scientific basis is nothing compared to the criticism that comes to an agency that guesses wrong. And this further consolation: Once we get used to an agency that refuses to guess, we will be more tolerant of its uncertainty when it is uncertain, and much, much more trusting of its advice when it decides that advice is merited.”
5. Do anticipatory guidance.
Anticipatory guidance is a common strategy in clinical medicine: Tell the patient how the illness is likely to progress, how the medication is likely to feel, etc. It’s especially important with respect to negative future contingencies: “Some patients experience this or that side-effect. You may be tempted to stop taking the medicine. Instead, try….”
In a bioterrorism attack, there are many occasions to offer anticipatory guidance:
- The antibiotics will have side-effects. Some people will be tempted to go off them prematurely.
- There may be new cases after it looks like everything is over, resulting from the long incubation period.
- Many of the people on antibiotics may be taken off once we determine that their exposure was small enough that it’s safe to take them off – and they may understandably feel insufficiently protected when that happens. “We’re putting you on the meds just to be sure. We hope to establish that your exposure was minimal, so we can take you off ASAP.”
- We may never find some answers – the source of some people’s infections, for example.
- We’re recommending closing this building. These are the conditions under which we will recommend reopening it….
- Our interim standards will change as we learn more. People who don’t like the new standard may resent the changes, and so may people who were managed under the old standard.
The main benefit of anticipatory guidance about likely negative outcomes is that it reduces the dispiriting impact of those outcomes. Here is Michael Osterholm again, explaining in the October 28, 2001 New York Times how anticipatory guidance works:
The explanations have to include bad news along with the good, said Michael Osterholm…. Mr. Osterholm said he gained hard experience during a 1995 outbreak of meningitis in Mankato, Minn., where he oversaw vaccinations for 30,000 residents in just four days. At the outset he was careful to warn townspeople that one out of seven people who were infected would probably die. Less than a week into the outbreak, a patient died; the news, he said, was accepted without “fueling the fire,” because “people had anticipated it could happen.”
Anticipatory guidance has another benefit that can be even more important in a bioterrorism crisis. Telling people that they are likely to react in a particular way to some future event prepares them to overrule that reaction if appropriate. Assume, for example, that you want to discourage people from trying to get their own antibiotic supplies to have on hand for a possible attack. (I will leave aside for now my questions about the wisdom of this goal.) There are two schools of thought about how to proceed. The don’t-put-foolish-ideas-into-their-heads school advises that you ignore this possibility until people actually start calling their doctors; then you point out why they are wrong to do so. The anticipatory guidance school recommends that you tell people in advance that they may feel tempted to build their own antibiotic stockpiles, and that the temptation is natural but should be resisted, in your judgment, for the following reasons…. In favoring the latter approach (could you tell?), I am making three empirical claims: that many people will find it easier to resist the temptation when they have been forewarned to expect it; that few people will be lured into experiencing the temptation by the forewarning; and that both those who end up stockpiling and those who do not will feel a stronger alliance with the authorities if they have been forewarned than if the subject hasn’t been raised.
Like much of risk communication, anticipatory guidance requires that you have confidence in people’s ability to bear difficult situations. But it beats the alternative, which is surprising them with difficult situations.
6. Acknowledge the sins of the past.
I routinely advise my clients to acknowledge anything negative about their own performance that the audience already knows, or that critics know and will tell the audience when they see fit. In fact, I advise clients to “wallow” in the negatives until their stakeholders – not just the clients themselves – are ready to move on.
In public relations, as opposed to stakeholder relations, this is not sensible advice. At most, PR professionals recommend acknowledging negative information briefly before transitioning to something more positive. Why wallow in a bad piece of news that most of the audience hasn’t even found out about? But stakeholders are assumed to be interested enough, and persistent enough, that they are bound to learn the bad news anyway. So wallowing in it makes sense.
Try to imagine Exxon talking about its environmental record to a roomful of environmentalists without mentioning the Valdez spill. The audience is sitting there waiting to see if the spokesperson is going to mention Valdez – and until he or she does, we are only half-listening. A wise Exxon environmental communicator would therefore have an “acknowledgement macro” for the spill; push just one button and out it comes: “As the company responsible for the Valdez disaster….” In the early 1990s, by contrast, I visited the Exxon pavilion at Disney World’s Epcot Center and saw a wonderful show on Exxon’s record of environmental protection – with not a word about Valdez. It was a great icebreaker; perfect strangers were murmuring to each other about Exxon’s gall in ignoring Valdez. Nothing the company could have said about the accident would have been as damaging as saying nothing.
Whether to acknowledge negatives that nobody knows and nobody is likely to find out – to blow the whistle on your own dirty secrets – is a tougher call. Leaving aside questions of law and ethics, risk communicators estimate that bad news does about twenty times as much damage when you try to keep it secret and fail as when you own up to it forthrightly. It follows that secrecy pays for itself only if an organization can achieve a 95% success rate at keeping secrets. If you fall short of 95%, as I think most organizations do, then blowing the whistle on yourself is cost-effective. But while secrecy is usually a bad risk, it isn’t crazy. What’s crazy is to reveal the secret and then behave as if it were still a secret.
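The arithmetic behind that break-even figure is worth spelling out – a back-of-the-envelope sketch, using only the twenty-to-one damage estimate above. Let D be the damage from owning up to the bad news yourself, 20D the damage if a kept secret leaks, and p the probability the secret holds. A successful secret costs nothing, so the expected damage of secrecy is (1 − p) × 20D, and secrecy is the better bet only when that is less than D:

(1 − p) × 20D < D, which requires 1 − p < 1/20, i.e., p > 0.95.

In other words, secrecy wins only if you can keep the secret more than 95% of the time.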
In the 2001 anthrax attacks, CDC had one piece of negative information that was especially important to acknowledge: the fact that in the early days of the attack the agency was in error about whether anthrax spores could escape a sealed envelope to threaten … and kill … postal workers. This wasn’t a secret; the only question was how often CDC spokespeople chose to mention it. My advice to CDC: The more often you do so, the better. I think some CDC officials saw this advice as unfair or unfeeling. I didn’t mean it to be. I realize both how difficult it is to guess right about new risks and how painful it is to have guessed wrong. For a science-based organization like CDC, the need to act on incomplete data is itself painful – but in a bioterrorism crisis the need is unavoidable, and so error is unavoidable as well. There was discussion at CDC about whether it was appropriate to call this a mistake. I suggested that CDC neither call it a mistake nor object when others called it a mistake. What was essential, I said, was for CDC to refer to it often, so the rest of us didn’t feel compelled to do so ourselves.
I actually went further. When asked for its judgment about other matters where the science is unsettled, I suggested, CDC should remind us that its judgment had been fatally flawed before. As far as I know, nobody took this advice. And perhaps it went too far. Certainly in traditional public relations, a source that incessantly reminded reporters of prior errors might well provoke them to look for a more confident source. Nonetheless, I think the risk of dwelling too much on your past sins is a small risk, both in terms of its probability and in terms of its magnitude. The big risk is that you will mention them too seldom.
In addition to acknowledging the sins you have committed, it is important also to acknowledge the sins you have been accused of but have not committed – that is, to acknowledge what critics have said, and why it is understandable that they feel that way. (Defending yourself against mistaken charges works in proportion to how visibly you concede valid charges.) The claim that class or race underlay the difference in how anthrax was handled in Congressional buildings versus how it was handled in postal facilities was the sort of charge that needed to be acknowledged and sympathetically rebutted … not ignored. “Sympathetic rebuttal” isn’t easy, partly because it isn’t conventional (the very word “rebuttal” suggests a hard-edged debate) and partly because self-esteem gets in the way (it is much more appealing to cream the purveyors of false charges than to acknowledge that their error is an understandable one). Nor will it help to replace forthright anger with condescension. If sympathetic rebuttal is beyond you, at least you can manage irritated rebuttal; any acknowledgment is better than none at all. Even in public relations, “I wouldn’t dignify that with an answer” isn’t much of an answer. In stakeholder relations it’s a nonstarter.
Make a list of things that you have been, are being, or even might be criticized for. Talk about all of them. Acknowledge the germ of truth in as many as have a germ of truth; the more self-critical you are, the less critical we will be. If there isn’t even a germ of truth you can cop to, try as you might, at least acknowledge that people think or may think there is. And remember that this is what you do when you are dealing with stakeholders.
7. Be contrite or at least regretful, not defensive.
It’s not enough to acknowledge your prior misdeeds; you have to do so in a way that shows you know they are misdeeds. Exxon’s handling of the Valdez spill is a good example of what not to do. Corporate officials did in fact acknowledge what had gone wrong, but in a tone that oscillated between pride and defensiveness; they often sounded like they thought they were the victims of the spill rather than its perpetrators. No way were they going to show contrition. The company was ultimately assessed billions of dollars in punitive damages, chiefly as a result of its failure to show contrition.
Or consider the 1993 food poisoning outbreak at Jack-in-the-Box fast-food restaurants in the western United States. The hamburger patty processor had sent Jack-in-the-Box contaminated patties, which happens sometimes, and Jack-in-the-Box had cooked them insufficiently to kill the pathogens. Jack-in-the-Box managers did say they felt “responsible” for the safety of all customers, but they denied they were “responsible” for the outbreak, blaming it almost entirely on the supplier and inadequacies in the government meat inspection program. This did not feel like contrition to most consumers. The failure to show contrition, in turn, contributed significantly to the loss in sales.
The problem is what to say when you don’t feel contrite. Yes, there was a bad outcome that was in some sense your responsibility. Maybe you made a decision that in hindsight was unwise; or at least it turned out wrong. Maybe you relied on someone else (the ship captain had been drinking; the supplier sent you contaminated burgers). You wish you’d acted differently, but you don’t feel you did anything wrong – and your attorneys are at your elbow to make sure you don’t say anything that implies otherwise.
If contrite goes too far, aim for regretful. Unfortunately, the word “regret” no longer conveys regret; it sounds more lawyerly than apologetic. “Sorry” does the job well. Or “We feel terrible that….” Ride the seesaw of blame. Give us the information that shows you did your best, you couldn’t have helped it, it wasn’t really your fault, etc. But put this information in a subordinate clause, while in the main clause you regretfully blame yourself.
There are many real-world examples of the seesaw of blame. In the famous case of the Tylenol poisonings, several people died after someone added cyanide to random Tylenol capsules. The CEO of Johnson & Johnson held a video news conference in which he took moral responsibility for the poisonings, insisting that it was J&J’s job to have tamper-proof packaging. Millions of people who watched the clip on the news that night undoubtedly said to themselves, “It’s not his fault, it was some madman.” The Tylenol brand recovered.
But my favorite example is hypothetical. Let’s suppose your 12-year-old got into a fight at school and was sent to the principal’s office. Now your child has to tell you what happened. Consider two scenarios.
Scenario One:

Child: Mom, Dad, I really messed up in school today. I hit Johnny, and the teacher had to send me to the principal’s office.

You: Why did you hit Johnny?

Child: Well, he called me a dirty name, and I lost my temper and hit him. But I shouldn’t have. I should have kept my temper. It was my fault.
At this point you are very much on your child’s side, perhaps even a little proud. Probably there will be no further punishment at home.
Scenario Two:

Child: You won’t believe what that fool of a teacher did to me today. Johnny called me a dirty name, so of course I slugged him, and that jerky teacher had the nerve to send me to the principal’s office!
Your child has an attitude problem and you are likely to provide some at-home “attitude adjustment.”