THE MISINFORMATION AGE
Cailin O’Connor and James Weatherall
Reviewed by Erin J. Nash
The Misinformation Age: How False Beliefs Spread
Cailin O’Connor and James Owen Weatherall
New Haven, CT: Yale University Press, 2019, £18.99
ISBN 9780300234015
The presence and persistence of misinformation and false beliefs about scientific issues within my own social networks was what first led me to philosophy. I was disappointed, however, to find a dearth of philosophical literature on the topic when I entered the field in 2012. The Misinformation Age: How False Beliefs Spread by California-based philosophers of science Cailin O’Connor and James Owen Weatherall is the book I wish I had had back then. It will rightly become an important text for scholars working on social and political epistemology, and on related issues like the role of science and expertise in democratic societies. It is also a very entertaining read because it’s full of fascinating examples and stories—who knew that leading botanists once believed there was a fruit that contained tiny lambs! Known as the Vegetable Lamb of Tartary, this fruit was widely accepted as real for many centuries amongst the learned of Europe.
O’Connor and Weatherall set out to answer a series of inter-related empirical questions (p. 6):
- How do we form beliefs, especially false ones?
- How do they persist?
- Why do they spread?
- Why are false beliefs so intransigent, even in the face of overwhelming evidence to the contrary?
- What can we do to change them?
They assert that to focus on the psychology and intelligence of individuals ‘is to badly misdiagnose how false beliefs persist and spread’, and that doing so ultimately leads us ‘to the wrong remedies’ (pp. 7–8). Instead, O’Connor and Weatherall contend that misinformation and the false beliefs it gives rise to are largely social phenomena, as the very same social mechanisms that help us learn from others and form true beliefs make us vulnerable to accepting falsehoods.[1] I suspect that many others will also find the networked, social-epistemological orientation a welcome shift of emphasis in this debate. At times, however, the agency of individuals and their rights and responsibilities felt neglected, and this gave the analysis and its solutions a rather ‘managerial’ tone. I was left with an image of a battle—the ‘epistemic arms race’, as O’Connor and Weatherall put it—between ‘good’ and ‘evil’ elites for control of the levers of the ‘public mind’.
One distinctive feature of this book is that O’Connor and Weatherall answer their research questions by carrying out a series of experiments using computer simulations based on Bala–Goyal models. These social-learning simulations mimic how ideas arise and spread within social networks, enabling O’Connor and Weatherall to gain a better understanding of which specific features of networks aid and abet the flow of misinformation. The method also allows them to study the potential consequences of interventions, in terms of how likely each is to curtail or exacerbate the spread of certain beliefs. Although they do briefly mention the limitations of their methodology (p. 52), I would have liked to see more discussion of the risks and uncertainties associated with applying the results of these models to the real world.
But before going into the book in detail, just what are ‘truth’ and ‘misinformation’? Readers will be disappointed that O’Connor and Weatherall don’t define the latter anywhere in the book, especially given its title. The opening chapter does, however, provide a discussion of how O’Connor and Weatherall understand the notion of truth, and why they think adopting true beliefs, and avoiding false ones, is important. In an endnote they clarify: ‘we understand “true beliefs” to be beliefs that generally successfully guide action, and more important, we understand “false beliefs” to be ones that generally fail to reliably guide action’ (p. 188). Their understanding of truth thus has a ‘strong dose of pragmatism’, and they further specify that it is a ‘broadly deflationary attitude in the spirit of what is sometimes called “disquotationalism”’ (pp. 188–9).
While I accept that doing what works is a good description of why scientists do and should pursue hypotheses, or why we sometimes treat hypotheses as if they were true for practical purposes, it’s not clear to me why we should equate this with ‘scientific truth’. Once a definition of truth is tied to notions of ‘success’ and ‘reliability’, ‘truth’ then inescapably becomes bound up with partial non-epistemic value judgements.[2] The issue I see with O’Connor and Weatherall’s definition in the context of misinformation is that given reasonable value pluralism in democratic societies, there will oftentimes be competing claims to ‘scientific truth’ and it won’t be clear which (if any) should be labelled as ‘false beliefs’ or ‘misinformation’. For instance, O’Connor and Weatherall say that whether or not we support the reduction of public debt will depend on ‘what we believe about whether the debt will affect our future well-being’ (p. 6). But which beliefs formed in response to this question would count as ‘true’ or ‘false’? Does it even make sense to think of the answers to this question in these terms? I find it difficult to see how any theory that doesn’t give us the resources to distinguish between evaluative and non-evaluative claims can actually do the work O’Connor and Weatherall want in pushing back against propaganda. Moreover, adopting this kind of definition seems to risk encouraging people to paint too many things as ‘false’ beliefs, misinformation, and ‘alternative facts’, where disagreements are perhaps best understood as a product of legitimate value differences. I will say more on this towards the end of this review.
The book then unfolds by looking at how beliefs circulate in three key parts of our public knowledge system. Chapter 2—‘Polarization and Conformity’—examines scientific communities and asks: under what conditions do scientists converge on true or false beliefs? The aetiology of scientists’ beliefs is, like that of our own, strongly influenced by others’ actions and beliefs. The purpose of this chapter is to describe the social mechanisms that propagate truth and falsity through networks of scientists.
O’Connor and Weatherall’s modelling suggests that when scientists share evidence with one another, it becomes extremely likely that all of the scientists within that network will eventually form the same beliefs, whether those beliefs are actually true or false (p. 61). The agreement that emerges among scientists will generally be around a true belief, but under some conditions the network will instead converge to a false consensus. This happens in the models when some of the scientists in the network obtain and share misleading results. Since the agents in this version of the model are only responding to evidence—there is no psychology, no ideology, and so on here—O’Connor and Weatherall say that this goes to show that groups of people can form false beliefs even when they are all rational and reasonable truth-seekers.
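For readers unfamiliar with these models, a toy version may help to convey how this kind of convergence arises. The sketch below is my own illustration, not O’Connor and Weatherall’s code: a small group of agents choose between a familiar action A and a possibly better action B, test the action they currently favour, share their results across a complete network, and update by Bayes. The parameter values, the complete network, and the two-hypothesis set-up are all illustrative assumptions on my part.

```python
import math
import random

# Illustrative parameters (my own choices, not the book's).
EPS = 0.05        # the better action B succeeds with probability 0.5 + EPS
N_AGENTS = 10
N_TRIALS = 10     # experiments per agent per round
N_ROUNDS = 200

def log_likelihood_ratio(successes, trials):
    """log[ P(data | B is better) / P(data | B is worse) ] for binomial data."""
    p_hi, p_lo = 0.5 + EPS, 0.5 - EPS
    fails = trials - successes
    return (successes * math.log(p_hi) + fails * math.log(1 - p_hi)) \
         - (successes * math.log(p_lo) + fails * math.log(1 - p_lo))

def simulate():
    # Each agent's log-odds that action B is the better one.
    log_odds = [random.uniform(-1, 1) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        # Agents who currently favour B test it; results are shared with
        # everyone (a complete network, for simplicity).
        evidence = []
        for lo in log_odds:
            if lo > 0:
                successes = sum(random.random() < 0.5 + EPS
                                for _ in range(N_TRIALS))
                evidence.append((successes, N_TRIALS))
        # Everyone updates by Bayes on all of the shared evidence.
        for i in range(N_AGENTS):
            log_odds[i] += sum(log_likelihood_ratio(s, n) for s, n in evidence)
    return log_odds

if __name__ == "__main__":
    final = simulate()
    print("agents favouring the better action:",
          sum(lo > 0 for lo in final), "of", N_AGENTS)
```

On most runs all of the agents end up favouring the genuinely better action, but on some runs an early string of misleading results drags every agent’s credence below the threshold at which B is ever tested again, so the whole community settles on the false belief, which is exactly the kind of rational collective error described above.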
However, in the real world, the way scientists update their beliefs is much more complex, and they do not all treat evidence in the same way. For one thing, scientists make judgements about the reliability of their colleagues’ findings. O’Connor and Weatherall therefore run their models again using a different decision rule (Jeffrey’s rather than Bayes’s) to account for scientists’ uncertainty about the reliability of their colleagues’ evidence. They find that scientists regularly split into polarized groups with different beliefs. O’Connor and Weatherall’s models show that this sort of polarization can be very stable: in some cases, no amount of evidence from scientists with the correct beliefs will be able to move the credences of the scientists who are in error. When lower levels of mistrust are assumed, scientists still discount the evidence generated by those they disagree with, though they don’t ignore it completely. Under these conditions, some scientists who hold false beliefs will eventually start to form true beliefs, and a consensus will again emerge.
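To give a flavour of how this mistrust mechanism can be rendered, here is one very simple version in code. It is my own illustrative sketch, not the book’s exact Jeffrey-style rule: the linear trust function and the mixing of the full posterior with the prior are assumptions of mine.

```python
def mistrust_update(my_credence, sharer_credence, full_posterior, mistrust=2.0):
    """Discount a colleague's shared evidence in proportion to how far their
    credence lies from one's own. `full_posterior` is the credence one would
    adopt if the evidence were taken entirely at face value. The linear trust
    function and the mistrust parameter are illustrative assumptions, not the
    book's exact specification."""
    distance = abs(my_credence - sharer_credence)
    trust = max(0.0, 1.0 - mistrust * distance)  # trust falls off with distance
    # A Jeffrey-style compromise: move only part of the way towards the full
    # Bayesian posterior, and not at all once trust hits zero.
    return trust * full_posterior + (1.0 - trust) * my_credence
```

With a steep enough fall-off (here, any credence gap of 0.5 or more), agents simply stop learning from those they disagree with, which is the kind of stable polarization just described.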
Scientists are also not solely interested in truth when they are forming beliefs and credences; they’re also influenced by conscious or sub-conscious motivations, such as the desire to conform with their peers (p. 84). When O’Connor and Weatherall introduce this feature into their models, they find, unsurprisingly, that conformity leads to poorer performance in arriving at true beliefs. This is especially so when there is a greater pay-off for conforming than for having true beliefs. And while conformity can halt the spread of bad ideas and false beliefs too, O’Connor and Weatherall’s models suggest that, on average, tendencies to conform lead to a higher frequency of false beliefs than of true ones. Moreover, conformity within weakly connected groups can prevent the scientific community at large from arriving at a consensus around the true belief.
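Again, a small sketch may help to show how a conformity pay-off can swamp an epistemic one. The pay-off numbers and the conformity weight below are arbitrary choices of mine, not parameters from the book.

```python
def choose_action(credence_in_B, neighbour_actions, conformity_weight=0.3):
    """Pick action 'A' or 'B' given one's credence that B is the better action
    and the actions one's neighbours are taking. Pay-off values illustrative."""
    expected_payoff = {
        "A": 0.5,                                             # known baseline
        "B": credence_in_B * 0.55 + (1 - credence_in_B) * 0.45,
    }
    for action in ("A", "B"):
        share = neighbour_actions.count(action) / max(len(neighbour_actions), 1)
        expected_payoff[action] += conformity_weight * share  # social bonus
    return max(expected_payoff, key=expected_payoff.get)

# An agent fairly confident that B is better (credence 0.7) still picks A
# when three of its four neighbours are doing A.
print(choose_action(0.7, ["A", "A", "A", "B"]))   # prints 'A'
```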
That polarization within scientific communities can arise for different reasons makes it hard to evaluate possible interventions. O’Connor and Weatherall suggest that when polarization is driven predominantly by conformity, disrupting social networks will help to lead more individuals to true beliefs. But when mistrust is the bigger issue, they suggest this intervention is likely to fail and may even make matters worse. In real scientific communities, conformity and mistrust may well co-occur and be related, which makes it all the more difficult to select the right interventions.
What was missing in this chapter was an exploration of other explanations for the polarization that can occur in real-world scientific communities. The structure of their models doesn’t seem to fit that of many scientific disagreements. Take the current divisions among scientists over when the Anthropocene Epoch began (Maslin and Ellis [2016]). This disagreement is driven by differences in background beliefs and values, and many of those differences are unlikely to be resolved by the results of empirical testing. O’Connor and Weatherall’s framework seems to suggest that one group of scientists in this debate have ‘true beliefs’ about what year the Anthropocene started and that the other scientists are in error. But, as above, is there really just one ‘correct’ answer about when new geological eras commence? Or, for these kinds of cases at least, could many different ‘empirically adequate’ beliefs be held, and continue to co-exist, because each is useful for the different purposes to which scientists put them? I think more attention is needed to these sorts of questions, as well as to how truth and/or misinformation should be understood for different kinds of scientific claims.
I thought Chapter 3—‘The Evangelization of Peoples’—was the book’s best chapter, and where O’Connor and Weatherall make their most original and valuable contributions to the literature. This chapter illustrates how ideas and evidence generated within the scientific community come to influence the beliefs of policymakers and other ‘elites’, and how ideologically or financially motivated actors can manipulate this process for their own ends. O’Connor and Weatherall’s analysis here thus lends further support to the agnotology literature by offering a more detailed and robust explanation of just how propagandists can shape experts’ and non-experts’ beliefs alike.
Their modelling suggests that if scientists come to a consensus, then policymakers’ beliefs will generally track this consensus. Those policymakers who are only connected to a small number of scientists will arrive at true beliefs more slowly, but they get there eventually. Things change dramatically, however, when a propagandist is added to the network. O’Connor and Weatherall define a propagandist here as an actor who is also connected to policymakers, but who (unlike scientists) is not interested in identifying true claims; their motivation is to persuade policymakers to adopt beliefs that will best support the propagandist’s own ends. We might ask, though, whether propagandists always intentionally mislead others or hold insincere beliefs. Much of the time, those we take to be propagandising are sincere and are perhaps better characterized as making reckless or negligent speech acts.
According to O’Connor and Weatherall, propagandists use two key strategies. The first targets the first-order claims and evidence of scientists, deploying a series of tactics to bias the total body of evidence that scientists and policymakers use to update their beliefs. While propagandists can do this by embedding an agent within the scientific community to deliberately generate dodgy results, what’s really interesting is that O’Connor and Weatherall show that propagandists don’t need to use this risky strategy. Instead, they can manipulate people’s beliefs simply by biasing three things: the results that other scientists within the network are exposed to (‘biased production’); the distribution of scientists within the network (‘industrial selection’); and the way research results are shared with non-scientists, such as policymakers (‘selective sharing’). In many cases, propagandists can succeed merely by creating the appearance of disagreement and controversy within the scientific community. The second key strategy targets the higher-order evidence that non-experts use to evaluate the trustworthiness of experts. Propagandists can manipulate our dispositions to defer to scientists’ first-order claims by, for example, elevating or attacking particular scientists’ reputations.
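Of the three, ‘selective sharing’ is perhaps the easiest to see in miniature. The sketch below is my own illustrative rendering, not the book’s implementation: the propagandist runs perfectly genuine studies of the better action but forwards only those that happen to look unfavourable.

```python
import random

EPS = 0.05  # action B really is better: it succeeds with probability 0.5 + EPS

def selectively_shared_results(n_studies=20, n_trials=10):
    """Run genuine studies of B but forward only the spurious ones that make
    B look worse than the 0.5 baseline. Study counts are arbitrary."""
    shared = []
    for _ in range(n_studies):
        successes = sum(random.random() < 0.5 + EPS for _ in range(n_trials))
        if successes < n_trials / 2:          # a misleading but real result
            shared.append((successes, n_trials))
    return shared

# A policymaker who updates only on what the propagandist forwards is pushed,
# by entirely real evidence, towards the false belief that B is worse.
print(selectively_shared_results())
```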
O’Connor and Weatherall’s models demonstrate that policymakers can form a consensus around a false belief even as the scientific community comes to a consensus around the true belief. Worryingly, the modelling also suggests that policymakers will remain unconvinced as long as the propagandist remains active. This seems to provide support for those scholars, like Oreskes ([2017]) and Cook ([2017]), who continue to advocate for the importance of consensus messaging about climate science. Which way policymakers’ credences will move, and how quickly, depends on whether the scientists or the propagandists are winning the ‘tug-of-war’. O’Connor and Weatherall say that the more scientists the policymakers are connected to, the higher the chance that they will receive enough evidence to lead them towards the correct theory. Conversely, the fewer independent connections to scientists, the more vulnerable policymakers are to the influence of the propagandist.
What I think is really exciting about this research is that it could have enormous value if it is applied within real policymaking arenas to highlight practices that similarly bias the total evidence base upon which policymakers form their beliefs. For instance, O’Connor and Weatherall’s models could be used to better understand the likely outcomes of current practices of calling expert witnesses to testify in Congressional committee hearings in the US. They could also be used to test the effects of a range of alternative democratically developed procedures for calling experts and using scientific evidence in policymaking.
The final chapter—‘The Social Network’—looks at how (mis)information about science then spreads from elites to and amongst the broader public. O’Connor and Weatherall state that their models apply to these groups and individuals in the same way as they apply to scientists, as the social mechanisms they’ve identified operate in largely the same way for any group (p. 151). They point to an interesting parallel, showing that there need not be a propagandist (as defined by O’Connor and Weatherall) for people to arrive at false beliefs; all that is required is a mechanism by which evidence is selectively shared. Science journalism’s penchant for novel results is one such mechanism. The journalistic norm of ‘balance’ has long been recognized as another (Boykoff and Boykoff [2004]).
In the second half of this chapter, O’Connor and Weatherall turn to making suggestions for changes to our knowledge systems and governance structures in light of their results. While it’s good to see philosophers of science connecting their work to public policy and practice, and even though many of their recommendations seem reasonable, my general sense was that O’Connor and Weatherall proceeded much too quickly from their descriptive findings—especially given that they’re based on highly simplified and idealized models—to recommended solutions to the problem of misinformation. They make some very strong claims about freedom of speech and democratic governance, for instance, without sufficiently engaging with the literature on these topics from philosophy of science or science studies, let alone scholarship from relevant debates within political philosophy and democratic theory.[3] As just one example, they cite the existence of free speech restrictions on misleading advertising, hate speech, and libellous and defamatory speech, and then say that these legislative frameworks should be extended to cover the spread of scientific misinformation too (pp. 182–3). However, misinformation about science is disanalogous with these other types of speech in all sorts of ways. To make even a pro tanto case for misinformation about science to be regulated by the state requires much more careful normative work.
Similarly, to address the problems of ‘false balance’, O’Connor and Weatherall contend:
[…] the suggestion that it would be unfair not to report contrarian views—or at least, not to report them specifically as contrarian views—especially when the scientific community has otherwise reached consensus, is wrong, at least if what we care about is successful action. (p. 159)
But what about when a scientific consensus is itself problematic in some way? O’Connor and Weatherall consider this objection by raising the example of the consensus that formed among politicians, the media, and the public around the false belief that Saddam Hussein was developing weapons of mass destruction in the early 2000s. But they then attempt to dismiss the objection by distinguishing between journalism about ‘current or historical events’, such as the presence of WMDs in Iraq, and reporting about ‘science’, and they suggest journalists have a greater critical role to play in the former than in the latter (p. 160). But how different is the case of WMDs in Iraq from ‘science’? Both involve probabilistic claims about the empirical world made by technical experts, on the basis of evidence that the public can’t directly access or evaluate themselves.
More importantly, there are numerous examples of longstanding scientific consensuses that eventually came to be viewed as epistemically flawed and morally bankrupt—for example, the consensus among psychiatrists that homosexuality was a mental illness. Why shouldn’t it have been the role of journalists to critique this consensus, and to give certain types of coverage to the dissenting psychiatrists and the LGBTIQ activists attempting to excise homosexuality from medical diagnostic manuals? How would this debate have evolved if the minority had only been given proportional coverage, and their opinions had been labelled ‘contrarian’ or, worse, ‘misinformation’? I would have liked O’Connor and Weatherall to delve into these kinds of examples to test their recommendations.
Similarly, as Reiss ([2019]) has recently argued, we should also be suspicious of certain instances of contemporary consensus formation in some parts of science, or for some scientific claims, because there is no such thing as an uncontroversial social scientific fact. Yet throughout their book, O’Connor and Weatherall treat many factual claims as uncontroversial and misinformation as something relatively easy to identify. For instance, they say that the American public is divided on the issue of ‘whether free trade agreements ultimately improve the country’s economic conditions’, and that a significant part of this policy disagreement boils down to disagreements over ‘basic matters of fact’ (p. 151). But I suspect Reiss would disagree, suggesting instead that we need more public scrutiny of experts’ claims, not less, and that we would be making a grave mistake to label the claims of heterodox economists ‘misinformation’.
So although I think O’Connor and Weatherall’s book makes progress, there is still a long way to go in providing an understanding of scientific truth and misinformation, and in accurately identifying each, so that we don’t illegitimately suppress reasonable pluralism. I also think that more careful normative work needs to be done to understand and adequately justify proposed changes to democratic institutions and public knowledge systems. And if we’re going to ‘target’ misinformation (and I do think that there is some we should check), limited resources mean we’ll also have to make judgements about which misinformation matters most. Perhaps it’s just because I am Australian, but I found the book’s recurring emphasis on Russia’s role in propagating misinformation before the US election a bit bizarre. This focus seems to stem from O’Connor and Weatherall’s belief that these false beliefs in particular were causally efficacious in propelling Trump to the White House. But why focus on these false beliefs rather than others, including the ignorance and false beliefs of elites that probably contributed both to Trump’s election and to their shock at the outcome? The Russians seem to me to be a distraction from more significant, home-grown sources of ignorance that have been drip-drip-dripping through our public knowledge systems for many years, if not decades (Lears [2018]).
This book cites examples that ought to encourage epistemic humility in researchers, and perhaps my beliefs about the real threats to public knowledge and policymaking in the US are my own Vegetable Lamb of Tartary. What’s yours?
Erin J. Nash
School of Humanities and Languages
University of New South Wales (Sydney)
e.nash@unsw.edu.au
References
Boykoff, M. T. and Boykoff, J. M. [2004]: ‘Balance as Bias: Global Warming and the US Prestige Press’, Global Environmental Change, 14, pp. 125–36.
Cook, J. [2017]: ‘Response by Cook to “Beyond Counting Climate Consensus”’, Environmental Communication, 11, pp. 733–5.
de Melo-Martín, I. and Intemann, K. [2018]: The Fight against Doubt: How to Bridge the Gap between Scientists and the Public, Oxford: Oxford University Press.
Lears, J. [2018]: ‘What We Don’t Talk about When We Talk about Russian Hacking’, London Review of Books, 40, pp. 15–18.
Maslin, M. and Ellis, E. [2016]: ‘Scientists Still Don’t Understand the Anthropocene—and They’re Going about It the Wrong Way’, The Conversation.
Moore, A. [2017]: Critical Elitism: Deliberation, Democracy, and the Problem of Expertise, Cambridge: Cambridge University Press.
Nash, E. J. [2018]: ‘In Defense of “Targeting” Some Dissent about Science’, Perspectives on Science, 26, pp. 325–59.
Oreskes, N. [2017]: ‘Response by Oreskes to “Beyond Counting Climate Consensus”’, Environmental Communication, 11, pp. 731–2.
Reiss, J. [2019]: ‘Expertise, Agreement, and the Nature of Social Scientific Facts, or: Against Epistocracy’, Social Epistemology, 33, pp. 183–92.
Schauer, F. [2012]: ‘Social Epistemology, Holocaust Denial, and the Post-Millian Calculus’, in M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses, Cambridge: Cambridge University Press, pp. 129–43.
Notes
[1] A point also nicely articulated by legal philosopher Schauer ([2012]).
[2] Unlike many philosophers of science, I (unfashionably) do think that we can make a distinction between facts and values for some parts of science/some scientific claims. A definition of scientific truth or misinformation in my account would be value neutral—that is, it would apply to those claims, and only those claims, that one can accept as true without having to buy into a particular ideology.
[3] Recent books by de Melo-Martin and Intemann ([2018]) and Moore ([2017]) are good examples of the former and the latter, respectively.