COVID-19, INDUCTION, AND SOCIAL EPISTEMOLOGY
Igor Douven
A recent study by researchers from Johns Hopkins University found that people’s willingness to follow governmental measures to mitigate the spread of the novel coronavirus SARS-CoV-2 largely depends on their level of trust in science. More shockingly, they also found that almost half of the American population distrusts science. What to many of us looks like a concerted effort of vast numbers of experts to limit the damage of an (in our lifetime) unprecedented health crisis looks to some like a grand conspiracy aimed at taking away the livelihoods of hard-working citizens.
Might philosophers, with their ineradicable scepticism, have contributed to this distrust in scientific expertise? It is not unreasonable to think that the relativism and post-truth-ism propagated not only by continental philosophy but also by the strong programme and the related field of science and technology studies have harmed the status of science. Their practitioners, after all, tend to see science as ‘just one of the many stories we tell each other’. Why not, in our current situation, believe a story with less inconvenient consequences, one that, for instance, allows us to go about without masks?
Most readers, I’m sure, will have little sympathy for the aforementioned views and will instead be committed to solid, scientific philosophy. But has this philosophy not done its part as well, when it comes to eroding trust in science? I am thinking here of Hume’s argument that induction is unjustifiable. The doubters might ask, ‘Hasn’t one of your biggest stars shown that what scientists are constantly doing—predicting what things are going to look like, based on what they have looked like—lacks a good reason?’. The question has no easy answer. For indeed, according to Hume we cannot justify the practice of induction deductively, given that—as all agree—induction is not necessarily a reliable inference method. But we cannot justify it inductively either: what would a positive outcome prove if induction is in fact unreliable? (We might try to justify induction abductively, but then how do we justify abduction? Via induction? For the problems this raises, see Douven [forthcoming].) What may have seemed a boutique concern of one branch of philosophy thus emerges as being of central importance to addressing one of today’s main challenges.
There is, as noted, no easy answer to the question of why it is rational to rely on induction. But there is an answer, albeit one that is subtle and needs a bit of explaining. The answer has been developed by Gerhard Schurz ([2008], [2009], [2012], [2019]).
Schurz’s key point is that Hume and all later commentators have been wrong in assuming that rational reliance on induction requires an argument to the effect that induction is reliable. While the assumption appears reasonable—how could we rationally rely on induction in the absence of such an argument?—it is nonetheless wrong: to justify our reliance on induction, we only need to show that induction is optimal, that we cannot do better than to rely on it. The word ‘only’ might suggest that this is an easy task. It is not. It takes Schurz several hundred pages of at times highly technical argumentation to demonstrate the optimality of induction. But he does demonstrate it! That is a major achievement: classical philosophical problems tend to get clarified, reconceptualized, refined… but usually not solved.
Schurz’s approach is two-pronged. First, using formal results from the field of prediction with expert advice, he is able to justify analytically (what he calls) meta-induction, that is, induction over inductive methods. The proof consists in showing that, in every possible world, we can do no better than to rely on meta-induction (that is, to choose our inductive methods on inductive grounds). The second part of the demonstration then applies meta-induction to incontrovertible empirical findings about the past predictive accuracy of our actual inductive practices as compared with various non-inductive methods.
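To give a rough sense of what a meta-inductive strategy involves, here is a minimal sketch in Julia (the language I also used for the simulations described below). It implements a simple success-weighted predictor in the spirit of the prediction-with-expert-advice literature; the function names and the scoring rule are illustrative choices of mine, and the sketch glosses over the refinements that Schurz’s optimality results in fact require.

```julia
# A minimal, illustrative success-weighted meta-inductive predictor: each
# candidate method issues a prediction in [0, 1] at every round, and the
# meta-inductivist predicts a weighted average of those predictions, with
# weights given by each method's cumulative past success.

# Closeness of prediction p to event e, between 0 (worst) and 1 (best).
score(p, e) = 1 - abs(p - e)

function meta_induce(predictions::Matrix{Float64}, events::Vector{Float64})
    n_rounds, n_methods = size(predictions)
    successes = zeros(n_methods)      # cumulative success per candidate method
    meta_preds = zeros(n_rounds)
    for t in 1:n_rounds
        w = successes .+ eps()        # avoid an all-zero weight vector in round 1
        meta_preds[t] = sum(w .* predictions[t, :]) / sum(w)
        successes .+= score.(predictions[t, :], events[t])
    end
    return meta_preds
end
```

The key property of such a strategy, established by Schurz for a more refined weighting scheme, is that its long-run success cannot fall behind that of the best method in the pool, whatever world it happens to be deployed in.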
Will this silence those peddling conspiracy theories about COVID-19? We shouldn’t hold our breath. For note that Schurz’s point is a subtle one. We see on an almost daily basis footage from across the globe of citizens protesting mitigation measures. Perhaps I am misreading their facial expressions, their gestures, their seemingly threatening behaviour, but I find it difficult to escape the impression that subtle points are lost on the vast majority of these protesters.
In ‘Explaining the Success of Induction’, building on Schurz’s insights, I make a less subtle point, with the intention of further helping to restore confidence in induction (and thus in science). The point is that we should expect to be good at inductive reasoning, supposing our reasoning capacities are the result of an evolutionary process of selection and variation—as evolutionary epistemologists have long argued—and supposing social epistemologists are right that the pursuit of truth is an essentially collective endeavour. To show why this is reasonable to expect, I conducted a series of computer simulations modelling an evolutionary process in which epistemically interacting inductive reasoners are selected to reproduce on the basis of their success at getting at the truth accurately (how close do they get to the truth?) and quickly (how long does it take them to get close to the truth, if they get there at all?).
The Hegselmann–Krause model (Hegselmann and Krause [2002]) was used to codify the social aspect of inductive learning. This model lets agents update their opinions on the basis of, first, information they receive directly from the world and, second, the opinions of other agents in their community whom they regard as their epistemic peers.
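To make this concrete, here is a minimal sketch in Julia of one update step in a Hegselmann–Krause-style model with a signal from the world added. The parameter names (ε, α) and the exact way the evidence is mixed in are illustrative assumptions on my part, not necessarily the specification used in the paper.

```julia
# One bounded-confidence update step: each agent averages the opinions of
# those community members whose opinions lie within ε of its own (its
# "peers"), and mixes that average with a noisy signal about the truth τ.

function hk_update(opinions::Vector{Float64}, τ::Float64;
                   ε::Float64 = 0.1,      # bounded-confidence interval: who counts as a peer
                   α::Float64 = 0.5,      # weight of the social component
                   noise::Float64 = 0.05) # amplitude of the noise on the world signal
    new_opinions = similar(opinions)
    for (i, xᵢ) in enumerate(opinions)
        peers = [x for x in opinions if abs(x - xᵢ) ≤ ε]   # includes the agent itself
        social = sum(peers) / length(peers)                # average peer opinion
        evidence = τ + noise * (2 * rand() - 1)            # noisy information from the world
        new_opinions[i] = α * social + (1 - α) * evidence
    end
    return new_opinions
end
```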
The NSGA-II algorithm—an evolutionary algorithm widely used for optimization purposes in science and engineering—served to model the evolutionary part. To make the simulations specifically about inductive reasoning, the agents populating the Hegselmann–Krause model and being subjected to evolutionary pressures processed the information from the world via some Carnapian λ rule (a particular type of probabilistic learning rule). This meant that each agent could be characterized by three parameters determining how they learned: two determining their level of social engagement in learning (how liberal they are in considering others their peers, and what weight they give to the opinions of those they do not consider their peers), and one determining their learning rate (how willing they are to let new evidence impact their opinions).
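The λ rule itself is easy to state: for k possible outcome types, an agent that has made n observations, nᵢ of which were of type i, assigns probability (nᵢ + λ/k)/(n + λ) to the next observation being of type i, with smaller values of λ making the agent more responsive to the evidence. Here is a minimal sketch in Julia; how the rule is wired into the agents of the simulations is not shown here.

```julia
# Carnapian λ rule: estimated probabilities for each of k outcome types,
# given the observed counts and the parameter λ (λ = 0 gives the "straight
# rule"; large λ keeps the estimates close to the uniform prior).

function carnap_lambda(counts::Vector{Int}, λ::Float64)
    k = length(counts)    # number of outcome types
    n = sum(counts)       # total number of observations so far
    return [(nᵢ + λ / k) / (n + λ) for nᵢ in counts]
end

# Example: three types observed 6, 3, and 1 times, with λ = 2:
# carnap_lambda([6, 3, 1], 2.0) ≈ [0.556, 0.306, 0.139]
```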
The outcomes of the simulations showed that a ‘fine-tuning’ of all three parameters took place during the evolutionary process: later generations tended to have values for those parameters that, in combination, made the agents better at getting close to the truth quickly than their predecessors. In fact, the evolutionary process led, on average, to a threefold increase in accuracy (last-generation agents deviated from the truth only a third as much as first-generation agents, on average) and to a sixfold increase in speed of convergence (on average, it made agents six times faster at getting close to the truth).
It does not follow from these findings that evolution made us successful inductive reasoners. But at least they show how we may have become such reasoners. Will the doubters now start paying more attention to what scientists have to say about how we ought to navigate our way through the current crisis? You may think the point is still too subtle to sway the demonstrators. You may be right. However, research by cognitive psychologists has shown that people are more likely to accept that something is so when it is pointed out to them why it is so or how it came to be (see especially the work of Lombrozo [2006]). So I have at least some hope that providing a possible explanation of how we have become good at inductive reasoning may help to convince the doubters that we are indeed good at it. (It may be objected that here I am relying on the assumption that we inhabit a stable environment, and that by making that assumption, I am relying on induction. However, that I can justifiably do so follows precisely from Schurz’s work. Admittedly, this is a subtle point again. But anyone able to raise the objection will also be able to understand the response.)
The simulations not only helped to demonstrate the power of the process of variation and selection to shape the reasoning capacities of social learners; working on them also provided one of the best illustrations of the importance of social learning that I have come across. The simulations were coded in Julia, a new high-performance language for scientific computing developed at MIT by Alan Edelman and some of his post-docs and PhD students (Bezanson et al. [2017]). The simulations required code for the so-called non-dominated sorting function, which is part of the NSGA-II algorithm mentioned above. My first attempted implementation of this function took just over 10 ms for one partial ranking of the agents on the basis of the scores that were relevant in the simulations (basically, scores for accuracy and speed, in the senses previously explained). That made it about four times slower than the non-dominated sorting function predefined in a dedicated package for the statistical computing language R.
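For readers unfamiliar with it, non-dominated sorting partitions the agents into successive Pareto fronts: an agent ends up in the first front if no other agent dominates it, that is, if no other agent is at least as good on every objective and strictly better on at least one, and so on for the later fronts. The following naive Julia sketch is only meant to illustrate what the function computes; it is neither my original implementation nor the much faster version that emerged from the forum discussion reported below.

```julia
# Naive non-dominated sorting for objectives that are all to be maximized
# (say, scores for accuracy and for speed). Runs in quadratic time per
# front; real implementations are considerably faster.

# a dominates b iff a is at least as good on every objective and strictly
# better on at least one.
dominates(a, b) = all(a .>= b) && any(a .> b)

function nondominated_sort(scores::Vector{<:AbstractVector})
    remaining = collect(eachindex(scores))
    fronts = Vector{Vector{Int}}()
    while !isempty(remaining)
        front = [i for i in remaining if
                 !any(dominates(scores[j], scores[i]) for j in remaining if j != i)]
        push!(fronts, front)
        remaining = setdiff(remaining, front)
    end
    return fronts    # fronts[1] is the Pareto-optimal set, fronts[2] the next-best, and so on
end

# Example: nondominated_sort([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
# returns [[1, 2], [3]]: the first two agents are mutually non-dominated,
# the third is dominated by both.
```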
I posted the initial function on the main message board for Julia developers, hoping to receive some advice about how to make it run faster. Thanks to the contributions of various members of the forum, a more than 200-fold speedup was obtained within a day—from just over 10 ms to just under 45 μs (see the thread on that board). That (and some of the tricks I learned from the same thread) helped to reduce total computation time for the simulations by close to forty percent. On my own, I would not have been able to achieve this, not even after months. I’m grateful that I could rely on a community of interacting Julia experts, much like I’m grateful that we can presently rely on a community of tightly collaborating virologists, epidemiologists, and other health experts.
FULL ARTICLE
Douven, I. [2023]: ‘Explaining the Success of Induction’, British Journal for the Philosophy of Science, 74, doi: 10.1086/714796.
Igor Douven
Université Paris-Sorbonne
igor.douven@gmail.com
References
Bezanson, J., Edelman, A., Karpinski, S. and Shah, V. [2017]: ‘Julia: A Fresh Approach to Numerical Computing’, SIAM Review, 59, pp. 65–98.
Douven, I. [forthcoming]: The Art of Abduction, Cambridge, MA: MIT Press.
Hegselmann, R. and Krause, U. [2002]: ‘Opinion Dynamics and Bounded Confidence: Models, Analysis, and Simulation’, Journal of Artificial Societies and Social Simulation, 5.
Lombrozo, T. [2006]: ‘The Structure and Function of Explanations’, Trends in Cognitive Sciences, 10, pp. 464–70.
Schurz, G. [2008]: ‘The Meta-inductivist’s Winning Strategy in the Prediction Game: A New Approach to Hume’s Problem’, Philosophy of Science, 75, pp. 278–305.
Schurz, G. [2009]: ‘Meta-induction and Social Epistemology’, Episteme, 6, pp. 200–20.
Schurz, G. [2012]: ‘Meta-induction in Epistemic Networks and the Social Spread of Knowledge’, Episteme, 9, pp. 151–70.
Schurz, G. [2019]: Hume’s Problem Solved: The Optimality of Meta-induction, Cambridge, MA: MIT Press.
© The Author (2021)