Posted: March 21, 2024
Article Summary: The Heterodox Academy has the important mission of fostering respectful debate that includes unpopular opinions. So I was delighted to be invited to speak (via Zoom) at its February 23, 2024 day-long conference on “COVID and the Academy: What Have We Learned?” My title was “What Goes Into an Expert’s Expert Judgment Other Than That Expert’s Expertise (with COVID examples).” I covered eight points, all points I have written about before, but not all in the same place and not illustrated with COVID examples. Since I was assigned only 20 minutes (and graciously granted 31), the speech was sketchy, especially toward the end – so I am also posting my much-less-sketchy notes for the hour-long speech I wished I could give.

What Goes Into an Expert’s Expert Judgment Other Than That Expert’s Expertise (with COVID examples)

Notes for a presentation (via Zoom) at a conference on “COVID and the Academy: What Have We Learned?” Heterodox Academy Research Symposium, Stanford, California, February 23, 2024

(The presentation on YouTube includes a machine transcript.
See also the presentation audio and slide set.)

Introduction

I’m not just the only panelist speaking remotely – from a ship in the middle of the Atlantic Ocean, about halfway from Rio de Janeiro, Brazil to Cape Town, South Africa. I’m also the only one with no research to report.

I’m retired after 50 or so years of consulting in the field of risk communication. The fundamental goal of risk communication is to achieve a level of concern in some public that’s commensurate with the actual risk – that is, with what experts think the actual risk is. You want to scare unduly apathetic people into taking precautions against serious risks; calm unduly upset people into not taking (or demanding) precautions against trivial risks; and guide rightly upset people into choosing the right precautions against serious risks.

I did a lot of work for industry, a lot for governments (including local, national, and international public health agencies), and a fair amount for activist groups.

Much of the time my clients saw my job as helping them figure out how to persuade the public to believe what my clients’ experts said they ought to believe. So I spent a lot of time working with experts in one or another aspect of risk assessment – experts in figuring out how upset the public ought to be about some risk and what precautions if any it ought to take.

So I had my nose routinely rubbed in the reality that experts often disagree and not always because of the data. The experts my clients instructed me to listen to didn’t necessarily have the same risk assessment as other experts working with my clients’ opponents.

My expertise isn’t any aspect of risk assessment – it’s social psychology and communication. I didn’t have the expertise to agree or disagree with my clients’ experts’ judgments about how dangerous something is. But I had a lot of expertise relevant to assessing the various ways that my clients’ experts’ risk judgments might be biased.

And that’s what I want to talk about today – what goes into an expert’s expert judgment other than that expert’s expertise. Or to put the question more simply: What goes into an expert’s judgment other than expertise?

Given the focus of this panel, to the extent that time permits I will give examples from the past four years of COVID pandemic experience. But most of the points I propose to make are generic.

Again, I want to stress that I’m not giving you the conclusions of a body of empirical research. I think there is empirical research validating most of the points I plan to make, but it’s not my research and I’m not going to describe it or even reference the research today, much less assess how robust it is. I’m basing my presentation on 50 or so years of risk communication consulting, ending in four years of trying to help public health experts communicate about COVID. As they say, the plural of anecdote isn’t data, and all I’ve got for you today is anecdote.

One other introductory point: My comments today will be critical of the public health establishment – both in general and in its management of COVID in particular. So I need to stress that I’m on their side. They’re the good guys. Their goals are usually more altruistic than the goals of their critics, and their scientific claims usually turn out closer to truth than the scientific claims of their critics. (Usually, not always.)

But in large part because they are the good guys, and feel like the good guys, and feel like the public should damn well know they’re the good guys, they tend to be resistant to overcoming the biases I am about to talk about.

I’m likely to run over if you let me. Please tell me when I’ve got just three minutes left and I’ll start skipping the details.


    Expert opinion is mostly second-hand opinion. Outside their narrow specialties, most “experts” know only the conclusions that the handful of real experts have promoted.

Expert opinion isn’t what we think it is. A handful of people are actually experts on the specific question at hand – they produce the data and pay careful attention to each other’s data. The rest of the “experts” are second-hand experts – experts on what the real experts think. Generally they’re in the same field as the real experts, and they may be real experts themselves on some other narrow aspect of that field – but on the aspects their disciplinary colleagues have been studying for decades, they are merely second-hand experts.

They haven’t produced or even examined the data. At best they may have read key articles in top journals. More often they have read only summaries of the literature. And sometimes not even that: They have gotten the scuttlebutt from other second-hand experts on what “the science” says. Despite their second-handness, they are often promoted to the media as experts by their universities: “Dr. So-and-So is available for interviews with expert guidance on Whatever….”

This process works fine for questions where two conditions are met: the real experts all agree, and the real experts are right. On those relatively infrequent occasions when the expert consensus is wrong, of course, the process backfires. The process is also problematic on those not-so-infrequent occasions when there is no true expert consensus.

A secondhand expert is likely to interpret as consensus the school of thought that he or she has been exposed to and inculcated in.

Two examples:

  • For much of the first few years of the pandemic, the real expert consensus in the U.S. was that schools should be closed because of COVID – so U.S. second-hand experts may not have realized (or may not have cared) that the real expert consensus in many European countries was that schools should stay open.
  • Also for much of the first few years, the consensus among real expert epidemiologists was that COVID probably spreads mostly through close contact via droplets, which meant that social distancing was crucial and even nonmedical masks were very useful. Second-hand expert epidemiologists learned this consensus and then spread it. They may not have realized (or may not have cared) that there was a different consensus among real experts in respiratory protection, fluid dynamics, industrial hygiene, and engineering: that COVID probably spreads mostly through aerosols that float for a considerable distance, so ventilation is a lot more crucial than social distancing, and face coverings ought to be N95 respirators, not cloth masks.

An even bigger problem: What happens if the real experts who disagree with the majority aren’t pretty aggressive about saying so? Or what happens if the real experts who disagree have been excoriated as cranks? In such cases, second-hand experts are likely not to know there are any real experts who disagree with the majority. I’m going to return to this point in a minute.

In short, knowledge – including expert knowledge – is a lot more communal than we normally realize. Most of what we think we know is actually other people’s knowledge that we take on faith. It’s worth noting that we human beings – including experts – believe falsehoods for pretty much the same reason we believe truths: because sources we trust told us they’re true.

And we humans tend to lose track of this distinction between firsthand and secondhand knowledge. We absorb conclusions from sources we trust and imagine that we “know” why those conclusions are valid.

This last point is important. It’s not just that experts aren’t usually independently making up their own minds based on the data – they’re just telling us what the “real experts” believe … or rather, they’re telling us what they’ve been told the subset of real experts they trust believe. What’s also crucial is that they come to imagine that they’re telling us what they themselves believe based on the data.

They tell us to trust the science – but what they themselves are trusting isn’t the science. It’s merely some of the scientists. It’s their second-hand impression of what they think is the consensus of a subset of real experts.


    Expert “consensus” is often not a genuine consensus, just a majority opinion. This is true for both the first-hand experts and the much larger group of second-hand experts. Minority opinions often go underground.

I don’t want to overstate this point. Most scientific conclusions on long-studied questions are unanimous, or nearly unanimous; there are few if any expert outliers.

But what happens when scientific conclusions are not unanimous, when there are experts (whether first-hand “real experts” or second-hand sort-of-experts) who disagree with the majority opinion, or at least have doubts about it?

What happens more often than not: Minority opinions tend to go underground. I think there’s a tipping point phenomenon at work here. If expert opinion is 50–50, it’s likely to be clear to everybody that there’s no consensus. Somewhere around 75–25 – that is, a three-to-one imbalance of expert opinion – the minority expert opinion tends to disappear from public view.

I’m talking about “majorities” and “minorities” as if everyone’s vote counted equally. But of course that’s not really the case. Sometimes what I’m calling the “majority” is the leadership, and the “minority” is many or even most experts of lower professional stature.

Putting that aside, here’s what happens:

  • Experts with minority opinions who want to pursue the matter have trouble accumulating evidence. Their grant proposals are less likely to be funded. Their journal articles are less likely to be accepted for publication, or appear in less prestigious and less widely read journals. They’re less likely to be invited to give papers, less likely to get tenure, etc.
  • Experts who share these minority opinions but are less aggressively inclined and more self-protective of their careers see the handwriting on the wall and go silent.
  • Experts who refuse to go silent may be ostracized in the faculty lounge and publicly disparaged, redefined as not real experts at all. Think about the Great Barrington Declaration authors in this context.
  • Under some circumstances these outlier experts may be deplatformed – that is, censored. I suspect most of this Heterodox Academy audience knows a good deal about censorship of outlier expert opinion on COVID – some of it engineered by government officials at NIH, CDC, the White House, and elsewhere; some of it undertaken voluntarily by Facebook, Twitter, etc., without need for pressure or jawboning on the part of government.
  • All of these factors distort what second-hand experts “learn” about the distribution of real expert opinion. A 90–10 split among real experts or even a 75–25 split can come to look to the second-hand experts like there’s a consensus.
  • And of course the same pressures are exerted on the second-hand experts themselves – maybe worse pressures, since the controversy at issue isn’t their professional focus, so insisting on a minority viewpoint about that controversy isn’t the hill they want to die on. Even if they’re aware of the minority opinion, they are incentivized not to explore it, not even to mention it.
  • Imagine yourself an untenured scientist at a university School of Public Health in the early months of the pandemic. You’ve probably read at least a little about the Great Barrington Declaration. What you’ve read isn’t likely to motivate you to read further, and is profoundly unlikely to motivate you to mention it in a TV interview as an example of how the experts aren’t all on the same page about whether lockdown is a good idea.
  • The pressure to conform isn’t just overt and behavioral. It’s also subtle and cognitive. If I know the top people in my field nearly all believe X, I’m likely to shrug off any doubts I might have about certain aspects of X. A methodological defect in a study that concludes X is less likely to strike me as fatal. An anomaly in a dataset that points away from X is less likely to strike me as worth pursuing. And since I already “know” that X is true, because virtually all my colleagues say so, I’m a whole lot less likely to read the studies demonstrating that X is true carefully and critically – or, indeed, to read them at all.
  • So the general public gets an even more distorted impression of the distribution of expert opinion than the second-hand experts got. That 75–25 split comes to look like consensus. The outliers come to look like dissidents – worse than dissidents, crazies. To the extent laypeople run across outlier opinions at all, they’re likely to see them branded as misinformation.

Underlying much of what I’m saying here is the view that expert disagreement is normal. It doesn’t necessarily mean that one bunch of experts is incompetent, dishonest, or biased. It can just mean that the answer isn’t firmly known. But most experts seem to think otherwise. When experts are forced to acknowledge that the other side exists at all, they often assert explicitly that the other side is incompetent, dishonest, or biased.

It is shockingly rare for experts matter-of-factly to tell the public that other equally qualified experts disagree. And throughout the pandemic, it has been shocking to me how few mutually respectful debates academia fostered or even permitted on tough COVID controversies.

In fairness, there is a contrary trend – experts with minority or even outlier opinions whose reputations are built on their idiosyncratic position on some issue. The iconoclast expert is a niche in the expert ecosystem. But it’s a small niche, and aiming to occupy it isn’t generally considered a wise career move. (I say this with full realization that one key role of the Heterodox Academy is to nurture that contrarian niche.)

I should also concede that there’s a contrary trend to my argument that expert consensus is often manufactured. The contrary argument is all about manufactured dissensus and manufactured uncertainty. Interest groups advancing a cause that the vast majority of experts have rejected understandably try to make the question sound more open, more uncertain, more debatable than the mainstream experts believe it is. Among the obvious examples: climate change and vaccine safety.

On any given issue, I think, the extent of expert disagreement is likely to be more than the majority claims … and less than the minority claims. But since the majority is getting most of the public’s attention, it’s the former bias – understating expert disagreement – that dominates public opinion and policy formation.

When is public understanding of the range of expert opinion on a scientific question likeliest to get distorted in the ways I’ve been outlining? When there’s a relevant public policy controversy, of course. Expert disagreement is likeliest to be suppressed when the expert majority supports one side in a hot public policy debate:

  • COVID lockdown, for example.
  • Or whether everybody ought to get yet another COVID booster, or just high-risk people.
  • Or what level of infection justifies going back to remote learning.

I need to add that my disapproval of manufactured consensus is not shared by many other risk communication experts, whose mantra on the subject is “Speak with one voice.” (We don’t all speak with one voice on the wisdom of speaking with one voice.) I agree that real expert consensus is a wonderful thing, as long as it stays tentative and open to new evidence. Fake consensus that masks real disagreement is something else entirely – and I think we’ve seen a lot of that during COVID.


    Like a gas, expertise expands to fill the available space. If we let them, all experts will happily opine beyond their expertise. And we usually let them. We certainly let them vis-à-vis COVID.

Experts are not the most reliable judges of the limits of their own expertise. When talking to other professionals, experts tend to define their expertise quite narrowly, often deferring to another expert in the room whose expertise is more on-target, demurring that the question “isn’t really my field.” The same experts typically define their expertise much more broadly with a lay audience. As long as the audience knows even less about the question than they do, they often feel qualified to opine freely – in newspaper op-eds, for example.

Of course everybody is entitled to a non-expert opinion. The problem is that experts often offer “expert” opinions on topics well beyond their expertise.

I include myself in this indictment. As a risk communication consultant, I have acquired a fair amount of knowledge about a few areas of public health, most notably vaccination and emerging infectious diseases. When I write about these topics, I don’t always remember that I have learned just enough to get it wrong (sometimes) while sounding like I know what I’m talking about. And I don’t always remember to warn my audience that I’m an expert in risk communication, not public health. On the other hand, I do try hard to get the technical details right. Knowing how far outside my field I am, I try extra-hard.

But I find endless technical misstatements in the writing of public health professionals about vaccination and emerging infectious diseases. They’re far enough from their actual expertise to make mistakes, but not far enough to feel they’d better check before they write.

Many technical disciplines were central to understanding the COVID pandemic. They are different fields. Virologists often know surprisingly little epidemiology, and vice-versa. Neither necessarily knows much immunology. Or clinical medicine. Or industrial hygiene. Or mathematical modeling. Or toxicology. And on and on. Expertise in all these fields made essential contributions to our understanding of COVID. Experts in any one of these fields often felt qualified to opine on all of them.


    During the pandemic, we unwisely allowed technical experts (epidemiologists, virologists, and the rest) to dominate decisions that desperately needed social science expertise.

Epidemiologists often stray into virologists’ lane, and vice-versa – but at least they know there’s a lane there that they’re straying into. They’re not so sure social science is a lane at all.

Given the current replication crisis in social science, I grant that maybe they have a point. But this is a panel of social scientists, so I feel reasonably safe asserting that fields like psychology and sociology and political science and communication are actually fields. They’re squishier than physics to be sure. They’re probably squishier than epidemiology and virology too. But they’re fields.

Epidemiologists and virologists apparently don’t think so.

It’s not that they think the questions social scientists address aren’t real questions. They are, in fact, questions of central importance to managing a pandemic. Questions like these:

  • Of course mandates increase compliance with pandemic precautions. But how will the coercion affect people’s attitudes toward those precautions – and toward the government agencies behind them? Just about every social scientist is aware of reactance (in this case, oppositionality provoked by coercion); just about every COVID policy decision was made without this awareness.
  • To what extent should the FDA relax some of its safety and efficacy standards for pandemic vaccines, antivirals, and tests in order to get them out more quickly? Has it relaxed them too much already? Too little? How should it talk about this dilemma to the public?
  • How will closing schools for many months affect children? To what extent will remote learning mitigate the downsides?

There are two points here, both too obvious to belabor.

  1. COVID outcomes might have been significantly better if social scientists had been consulted more, both by policy-makers and by media.
  2. Epidemiologists and virologists are unlikely to make optimal social science decisions – especially if they are making those decisions without realizing there’s a there there, a field about which they know next to nothing.

This is just a special case of the broader point that experts aren’t very good at staying in their lane.


    A related and even bigger problem: Too often experts assume that their technical expertise gives them policy expertise as well. And policy questions are largely values questions. In large measure they are not scientific questions; they’re trans-scientific.

(Warning: This is going to be by far the longest section.)

Arguably policy is a social science, in which case this is just a restatement of the previous point.

But I think all science, even social science, is about “what’s true,” whereas policy is about “what should we do.” Social scientists study policy-making and policy-makers. There’s a jump from how policy decisions get made to how they ought to get made, and an even bigger jump from how policy decisions ought to get made to what ought to be the policy. Even social scientists, I think, should be humble about imagining that their expertise extends to “what should we do.”

But when COVID pandemic management decisions were under consideration, epidemiologists and virologists et al. had no trouble making the jump from their expertise to “what should we do.”

In other words, policy questions aren’t just social science questions. They are also values questions. What’s probably going to happen if we do X is a data question. Even if you’re guessing/modeling, at least you’re guessing/modeling what the data would show if you had data. Whether we should do X, on the other hand, is a values question.

That’s the core flaw in “follow the science.” Of course we can’t wisely decide whether to do X without considering what’s probably going to happen if we do X, a scientific question. But once we’ve figured out as best we can what’s going to happen if we do X, we still need to decide whether to do X. That’s a values decision. The experts on what’s going to happen if we do X have nothing further to offer. They are not experts on whether we should do X.

“Follow the science” too often turns out to mean “follow the scientists” – even when the scientists are offering trans-scientific recommendations grounded in values.

Of course a technical expert’s technical opinions may lead logically to certain policy positions – in which case the expert’s policy positions deserve a kind of “quasi-expert” credibility. Even then, we have to be careful. Other experts in different fields may have opposing policy positions that follow just as logically from their expertise.

But the bigger question is which came first, the expert’s technical opinions or the expert’s values and policy views. It’s far from rare for an expert to start with a deeply felt preference for a particular recommendation, grounded in that expert’s values or perhaps that expert’s employer’s values – then marshal the available evidence to support the preferred recommendation.

In that case the expert’s policy views are not expert opinions at all, not even quasi-expert opinions. Worse still, even the expert’s technical opinions aren’t purely technical, since they’re affected or even determined by his or her values and policy preferences.

Public health experts almost invariably conflate their scientific/technical judgments with their trans-scientific/value judgments – claiming and perhaps imagining that their policy preferences flow directly from their technical expertise, rather than the other way around.

Perhaps the most stunning COVID-related example of this is the debate over the origins of the pandemic virus. Dichotomize experts (including second-hand experts) based on whether, pre-pandemic, they were sanguine or skeptical about the safety of “gain of function” research on dangerous pathogens. Then check out their expert opinions on where SARS-CoV-2 (the COVID virus) originated, in a laboratory or in an animal market. Logically, these are independent questions. But in fact it’s hard to find any advocates for two of the four logically possible positions:

  • “I’m a long-term opponent of GoF research, but I think COVID originated in an animal market.”
  • “I’m a long-term supporter of GoF research, but I think COVID originated in a lab.”

Experts’ “expert opinions” on where COVID originated are reliably predictable based on their preexisting values regarding the safety of GoF research.

Another case in point is the debate about the first tranche of COVID vaccine boosters, back in August through October of 2021. The debate was never mostly about the scientific evidence. It focused instead on three trans-scientific questions:

  1. “We know the boosters will reduce the incidence of mild infections, but only temporarily. How important is that?” This is not a data question.
  2. “We don’t know yet to what extent or for how long the boosters will reduce the incidence of severe infections. Should we take a ‘better safe than sorry’ approach based on preliminary data, or should we wait for stronger evidence?” This is not a data question either.
  3. “The politicians want us to okay the boosters, in large measure because a growing segment of the public wants a booster now. Should we defer to this pressure, or should we insist on publicly advocating what we think is best?” This, too, is not a data question.

A lot of experts opposed the boosters. They thought mild infections weren’t worth preventing; they wanted to wait for better data regarding severe infections; they were outraged at the pressure from politicians. Other experts had the opposite opinions on these three questions. None of them is a technical question; there were no data that could resolve the disagreement.

But experts on both sides claimed their preferences were grounded in scientific evidence – and cherry-picked scientific evidence that favored their trans-scientific, values-based preferences.

Two vaccine tranches later, in late 2023, the CDC’s Advisory Committee on Immunization Practices was considering whether to recommend that nearly every American get another COVID booster. Some committee members argued that the evidence wasn’t all that strong for boosting healthy young adults. There was discussion of maybe recommending boosters only for the elderly and people whose comorbidities made them high-risk if they caught COVID. The decision went the other way in large part because the committee was worried about high-risk people who don’t know they’re high-risk or who don’t like considering themselves high-risk.

Are high-risk people likelier to take a precaution if it’s recommended for everybody or if it’s recommended specifically for them? That’s a social science question – one the ACIP was asking and answering with zero social science expertise in the room (as far as I know).

Among the values and policy questions are these two:

  • Is it ethical to give lower-risk people an inflated impression of the benefit of the booster to them in order to maximize the number of high-risk people who end up getting boosted?
  • Given that we have minimal data about the benefit of the booster to lower-risk people, should our default be no booster for them until we know its benefit, or a booster for them because it might be beneficial and does little if any harm?

Right now, the CDC is considering relaxing its recommendation for how long COVID-positive people should isolate. (It may have done so by the time this conference takes place.) The switch is clearly bowing to the practical reality that very few people except those who are high-risk or extremely cautious are following the current recommendation – and hanging onto a recommendation that nearly everyone is ignoring tends to undermine CDC’s credibility and thus its other recommendations. This is a totally defensible policy decision in my judgment. But it’s not something CDC is prepared to say out loud. When it finally makes the switch, the agency is sure to assert that it is following the very best scientific evidence, grounded in the fact that COVID is a lot less deadly than it used to be. CDC is unlikely to add that it is preserving credibility by bowing to reality.

One measure of the extent to which expert opinions on scientific controversies are grounded in values versus evidence is how resistant those opinions are to change based on new evidence. How often do experts say, “I was of X opinion, but this new study makes me think that maybe Y is closer to the truth”? On narrow technical questions, experts routinely change their minds or at least open their minds based on new data. But on whether COVID originated in a lab leak or an animal market? On whether mask mandates significantly reduced COVID transmission? Nearly all the experts cite data supporting their view, ignore or critique data rebutting their view, and stick to their view.

Several studies have documented this immunity to data by looking at how experts’ methodological judgments are affected by their pre-existing substantive opinions. Pairs of “research papers” are crafted that are methodologically identical but report diametrically opposed conclusions. Experts (mostly second-hand experts) are then asked to assess the methodology of just one paper. Almost invariably, experts on both sides of the question judge that the paper they agree with is methodologically pretty solid, while finding more serious methodological flaws in the methodologically identical paper whose conclusions they don’t like.

I’m obviously not objecting to the fact that experts have values. I’m not even objecting to the fact that their values inevitably influence their expert opinions, that trans-scientific factors contaminate their scientific judgments – though I do wish they would try harder to keep them separate. I object most fervently to the fact that experts so often deny that any of this is true, insisting that their policy preferences are grounded in nothing but objective, scientific truth, and that anybody with different policy preferences is either a bad scientist or a liar.


    Insofar as experts’ values are political – explicitly partisan or merely correlated with political leanings – they are far likelier to lean left than right.

I won’t go so far as to say that all fields of expertise lean left. Some don’t. But experts in fields related to public health tend to lean left – which makes sense, since public health is so thoroughly grounded in government intervention in people’s lives. There’s a lot of truth in the oversimplification that the political left cares more about societal welfare than individual liberty, while the political right cares more about individual liberty than societal welfare. (I’m talking about traditional conservatism here, not rightwing populism.) Public health is all about societal welfare, often at the expense of individual liberty. It is intrinsically left-leaning, and it attracts people whose values lean left.

And it will come as no surprise to a Heterodox Academy audience that the academy leans left. So academic experts in fields related to public health tend to have a double dose of left-leaning values. It follows that the sizable increase in public distrust of the public health profession and public health experts since COVID reached the U.S. in 2020 has taken place mostly on the political right.

Public health professionals and public health experts mostly blame that increase in distrust on their critics – on the antivax movement or, more broadly, on what they sometimes call the “anti-science movement.” I think this gets the causality backwards. No doubt some people started mistrusting what they were hearing from public health experts because they listened to antivax outliers and extremists. But far more people started listening to antivax outliers and extremists because they mistrusted what they were hearing from public health experts.

This is especially though not exclusively true of people whose politics lean rightward.

What did public health experts (and of course public health officials) do to earn right-leaning people’s mistrust? I could list literally dozens of COVID examples of earned mistrust, but here are a few that strike me as explicitly political – that were likely to arouse more mistrust from conservatives than from progressives:

  • Excoriating anti-lockdown demonstrations as superspreader events while giving the George Floyd demonstrations against police violence a free pass.
  • Proposing that young people of color were a higher-priority target for scarce vaccine doses than elderly white people.
  • Shutting down churches during lockdowns as inessential while keeping liquor stores open.
  • Delaying the vaccine rollout until after Election Day in part so that Donald Trump couldn’t take an October victory lap.
  • Deferring to teachers’ unions in decision-making about whether and when to reopen schools.

Two groups lagged most in COVID vaccine uptake in the early months of the vaccine rollout: urban blacks and conservative whites. Public health agencies worked hard to reach out to urban blacks, struggling with considerable success to ameliorate their mistrust. They did very little to ameliorate the mistrust of conservative whites. Mostly they blamed conservative whites for mistrusting them.


    Experts in fields related to health value health over other priorities. So insofar as the values of infectious disease experts determined COVID policies, those policies prioritized reducing infectious disease mortality and morbidity over economics, education, and even liberty.

Vaccination mandate controversies were fundamentally about health versus liberty. Controversies over the CDC’s eviction moratorium were fundamentally about health versus property rights. School closure controversies were fundamentally about health versus education. Lockdown controversies were fundamentally about health versus economics and psychological wellbeing.

In each instance public health experts and officials were entitled to make their case that health should prevail. But they weren’t entitled to pretend that there was nothing of value on the other side of the debate. At the very least, they should have practiced what I call “even-though risk communication,” explicitly acknowledging that there were non-health social goods that needed to be balanced against health considerations.

Public health experts and officials understandably prioritize health over other goals and values: liberty, property rights, education, economics, psychological wellbeing, convenience, quality of life, etc. Two choices make sense.

  • Either public health professionals stand tall for health considerations – in which case they can only be advisors, not decision-makers, even in a pandemic.
  • Or public health professionals take non-health considerations onboard when making pandemic management decisions.

For what it’s worth, I prefer the former option, but either is preferable to what public health professionals actually did. Throughout the pandemic, they insisted on being the decision-makers while ignoring or seeming to ignore considerations other than health. It was actually narrower than that; they pretty much ignored even health considerations other than infectious disease mortality and morbidity – mental health, for example.

This narrowmindedness greatly undermined public acceptance and public trust. It ultimately undermined even their efforts to minimize infectious disease mortality and morbidity.

Very belatedly, in January 2024, former NIH head Francis Collins acknowledged this critical mistake. He’s worth quoting at length: “If you’re a public health person and you’re trying to make a decision,” he said, “you have this very narrow view of what the right decision is, and that is something that will save a life. Doesn’t matter what else happens. So you attach infinite value to stopping the disease and saving a life. You attach zero value to whether this actually totally disrupts people’s lives, ruins the economy, and has many kids kept out of school in a way that they never recover from.”

Back when it mattered, neither Collins nor other public health leaders were prepared to acknowledge their narrowmindedness – much less to overcome it or share control with someone less narrow-minded. Far more typical was this title of a 2021 article in the American Journal of Public Health: “COVID-19 Mitigation: Individual Freedom Should Not Impede Public Health.”

Health versus freedom is a dilemma. Health versus economics is a dilemma. Health versus education is a dilemma. I am fine with public health professionals standing tall for the supreme importance of health, as long as the final decision is made by someone with a broader perspective. (In a democracy, that someone is normally an elected politician.) What’s disastrous is when public health professionals disappear the other horn of the dilemma, aiming to convince the rest of us that pandemic management is purely about preventing disease.


     During the pandemic, public health and infectious disease experts prioritized reducing infectious disease mortality and morbidity over telling the truth. Though academia is assumed to focus on truth-finding and truth-telling, experts often tell us what they think is good for us to know. Experts’ “noble lies” aren’t just a COVID phenomenon (consider childhood vaccination, climate change, and criminal justice).

Public health has a long history of deciding what to tell people based on what will get them to do the right thing to protect or improve their health.

Sometimes that means flat-out lying. The polio eradication campaign, for example, spent many years telling parents in the developing world that the oral polio vaccine can’t cause polio, hiding the (rare) reality of vaccine-associated paralytic polio (VAPP) and vaccine-derived poliovirus (VDPV) in order to encourage vaccine acceptance.

In 1981 I started doing communication work with the American Cancer Society. One of the big ACS activities, then as now, was corporate smoking cessation programs. In order to help sell these programs to companies, we commissioned an economist to do a study of the economic impact of employee smoking on companies. We expected to show a big cost due to medical expenses. Instead, the study came out showing that employees who smoked actually saved their company money (pension money and healthcare money) by dying more rapidly after retirement. It simply wasn’t in a company’s economic interests to support smoking cessation.

What do you think we did with the study results? We suppressed them, and continued to tell companies they would benefit economically from sponsoring ACS smoking cessation clinics for their employees. I argued for candor, or at least for dropping the false argument. I lost. I was told pretty explicitly that public health was a higher value than telling the truth.

But when they can, public health experts prefer to mislead without lying, cherry-picking data to emphasize the health-promoting portion of the truth. Consider the claim that flu vaccination was 70% to 90% effective, a claim (grounded in early studies of healthy, young soldiers) that public health continued to make long after everyone in the field knew or should have known that in most years for most vaccinees the flu vaccine doesn’t work nearly that well. In recent years the standard talking point has moderated to a much more defensible 40–60%, though it’s still conventional to avoid mentioning the flu vaccine’s significantly lower efficacy in the elderly.

COVID examples of public health professionals’ dishonesty in the service of health are plentiful. Perhaps the best known example is Anthony Fauci. Fauci has acknowledged telling people there was no reason to wear masks in part because he was worried about the mask shortage in healthcare settings. He has acknowledged making overly optimistic claims about COVID herd immunity because he thought the public wasn’t yet ready to hear what he really believed about that. With extraordinary lack of self-awareness, he continues to maintain that he has done nothing to undermine public trust, and that anyone who mistrusts his pronouncements is mistrusting science itself.

(Fauci was for decades a genuine public health hero. His contributions to our country and our world are undeniable. Sadly, his contributions to the widespread mistrust of public health messaging are also undeniable.)

I want to close with a very recent, rather trivial, totally typical example of public health COVID dishonesty. In mid-February 2024, as I was putting together this presentation, the CDC issued yet another statement promoting the benefits of the “updated” COVID vaccine. As usual, the announcement used the word “updated” again and again. It’s true that the monovalent vaccine now on offer is updated from the bivalent one we were offered previously. But the CDC’s data do not show that the updated vaccine is more effective. The data show only that getting a new vaccine dose now offers benefits over relying on an older vaccine dose you got a year or so ago. In other words, the data don’t assess whether the new vaccine works any better against currently circulating variants than a new dose of the old vaccine would have worked, if the old vaccine were still available. The data show only that the old vaccine’s value has waned and a shot of the new one will re-up your immunity for a while. A new shot of the old vaccine would also re-up your immunity for a while – maybe less effectively, but the CDC has no data to show that.

But the CDC believes, probably rightly, that “get another shot because your old shot has worn off (and the new one will soon too)” is a weaker pitch than “get an updated vaccine that works even better than the previous vaccine!” So it leans heavily into the word “updated.”

This is a minor sort of dishonesty. The vaccine is genuinely updated, and updating it to match newer circulating strains of the virus was probably worth doing. Still, a pharmaceutical company would be in trouble if it took a decongestant, say, off the market and then promoted its new “updated” decongestant as an improvement – without convincing evidence that the new product works any better than the old one would have worked if it were still available.

I have argued for decades that prioritizing health over truth risks undermining the credibility of the entire public health enterprise. I didn’t have a lot of examples. I had plenty of examples of public health’s dishonesty – but very few examples where that dishonesty backfired. Even when my public health clients reluctantly conceded that, yes, they do sometimes say not-quite-honest things in order to save lives, they invariably pointed out that their dishonesty did indeed save lives, lives they could document, whereas I had little evidence for my claim that they were eroding trust in the process.

Sadly, COVID has given me a lot of new ammunition.

Or maybe not so sadly. Maybe it’s a good thing that the COVID experience has undermined public trust in public health experts … and in public health more generally … and perhaps also in experts more generally.

For decades, I had pretty decent success in convincing corporate clients that their dishonesty was backfiring and that a more honest approach might also be a more profitable approach. I was singularly UNsuccessful in convincing my public interest clients to be more honest – not just public health agencies and organizations, but also environmental groups and others with altruistic goals. And maybe they were right. Maybe public trust in public health was so high – so unjustifiably high – that altruistic dishonesty (so-called “noble lies”) was actually the best policy.

But even if altruistic dishonesty made public health sense before COVID (which I doubt), it surely no longer makes sense.

In October 2009, I gave a keynote presentation to the National Public Health Information Coalition entitled “Trust the Public with More of the Truth: What I Learned in 40 Years in Risk Communication.” Now it’s February 2024. Maybe – just maybe – in the wake of the loss of trust that COVID produced, there is finally hope that we can convince public health experts to trust the public with more of the truth.

Copyright © 2024 by Peter M. Sandman

