Daniel Greco
IDEALIZATION IN EPISTEMOLOGY
Reviewed by Joe Roussos
Idealization in Epistemology: A Modest Modeling Approach
Daniel Greco
Oxford: Oxford University Press, 2023, £60.00
ISBN 9780198860556
Cite as:
Roussos, J. [2024]: ‘Daniel Greco’s Idealization in Epistemology’, BJPS Review of Books, 2024
In his new book, Daniel Greco argues that epistemology is inherently idealized and ought to be seen as a discipline engaged in building models. The book is part meta-philosophical discussion of the methods of epistemology and part intervention in several linked debates in first-order epistemology. The case studies serve both to advance the overall argument and as self-contained developments of their respective debates. The book is lively, interesting, and well worth a read.
At the meta-level, two positions emerge as through lines of the book. The first is a view of the central concepts of epistemology (belief, knowledge, confidence) as emergent properties. As such they are non-fundamental, but feature in highly useful and tractable models. Epistemology is thus intrinsically idealized; in a basic way, attributing propositional attitudes to agents always involves abstraction and distortion. The second through line is that this undermines hope for a unified account of the epistemic domain. Instead, the best we can do is to build models that succeed at limited purposes within parts of that domain.
An epistemologist who accepts these views is a modest modeller and ‘is content to work with a collection of models, each partial and less than fully accurate, without holding out hope for a grand unification on the horizon’ (p. 20). This is contrasted with the ambitious modeller, who seeks models of increasing scope and dreams of the perfect model. They might use similar models but will have different views about which research programmes are promising and which are the deep problems in epistemology. One of Greco’s aims is to argue that several puzzles, which look important from an ambitious standpoint, need not worry us much once we accept modest modelling.
Chapters 1 and 2 introduce idealization and modest modelling, respectively. Chapter 3 is about possible worlds models, and centres on a defence of fragmentation: far from being ad hoc, it is a natural consequence of the two through lines. Chapters 6 and 7 cover higher-order attitudes and inter-level coherence. In chapter 6, Greco defends the use of models that succeed at specific purposes, but fail to exhibit inter-level coherence or don’t even acknowledge the significance of attitudes at other levels. Chapter 7 concerns common knowledge, putting some nice pressure on Lederman’s ([2018]) recent pessimistic take on the possibility of common knowledge. Chapter 8 closes the book with a reflection on the distinction between ideal and non-ideal epistemology.
To showcase some of the book’s features, I’ll discuss chapters 4 and 5 in more detail. In chapter 4, Greco argues that models that seem to capture uncertainty well (such as Bayesian models with Jeffrey conditionalization) nevertheless struggle to capture undercutting defeat. Rather than despair, we should move away from seeing these models as steps on the road to a perfect model of uncertain reasoning. The discussion concludes with one of the book’s clearest presentations of modesty:
[…] if our Bayesian is modest about model selection [then,] given a decision or learning problem described in natural language, she can tailor her formal model of the problem to make sure that whatever must be uncertain, is uncertain, and whatever must be vulnerable to undercutting defeat, is vulnerable to undercutting defeat. She only gets into trouble if she takes on further, more ambitious commitments, to the effect that her model can adequately capture not only the problem as it was initially posed, but also arbitrary elaborations of that problem. (pp. 89–90)
Greco pulls off this manoeuvre several times in the book: starting with a story about a particular issue in epistemology—certainty and undercutting—and turning it into a story about how we approach epistemology. Often in seminars and response papers, we act as though a model proposed as an account of a particular situation must also work in a different situation. Thus, it is a reasonable criticism to offer a variation of a case and say, ‘look, your model gets it wrong here!’. Greco thinks that this game has no winners, because of the deeply idealized nature of epistemology. What we can win is the game described in the quoted passage, in which we produce locally successful models. If this sounds too small an ambition, you might take comfort in the fact that models are built from frameworks—for example, particular Bayesian models are generated from a common set of ingredients and approaches. A successful framework may generate models for many situations. But modesty applies to frameworks too! One shouldn’t hope for a universal framework, nor an ‘uber-framework’ for framework selection.
Chapter 5 is about belief and credence, and is interesting because Greco does not take the pluralist stance one might expect from a modest modeller. Instead, he rebuts two arguments against credence-first views. In one case, this is because they try to force Bayesians to be ambitious modellers. In the other, the dialectic is quite different: Greco argues that neither credence- nor belief-based accounts fit all legal cases involving high probability. He then argues that we should prefer credence-based accounts as they have greater simplicity and generality—because they are embedded in orthodox decision theory. While I think the argument won’t satisfy defenders of qualitative attitudes in epistemology, it does show that Greco means it when he says that modest modelling isn’t an ‘anything goes’ stance. We can still criticize models and have meaningful discussions about model selection and model proliferation.
I agree with this, and have argued something similar (Roussos [2025]), but I think it sits uneasily in the book. In other places, Greco takes provocatively permissive stances. For example, his model-contextualist version of fragmentation insists that David Lewis ([1982])—with his belief that Nassau Street runs east–west, that the railroad runs north–south, and that the two are roughly parallel—has a single, consistent belief state; it is just that which state it is depends on the purposes of the modeller. A sceptical reader might well wonder why modesty is, on the one hand, so permissive that it leads to a kind of anti-realism about mental states in fragmentation cases, and yet strong enough that it results in something like a credence-only view. The answer, I suspect, is that we’re getting both Greco’s particular views about debates in epistemology and an introduction to his modest modelling view, which could be taken in quite different directions in other hands.
Greco’s target audience is, I think, epistemologists and this means that the book is perhaps a bit light on philosophy of science for readers of BJPS. That is not to say that the philosophy of science is ignored—Greco uses it to introduce key ideas about models, emergence, and theoretical reduction early on. What’s missing, to my taste, is building and applying a toolkit from the philosophy of scientific models. There are some exceptions, such as in chapter 8’s discussion of Rice’s ([2018], [2019]) notion of holistic distortion. But there are a few places in the book where I was left with questions—some of which I’ll discuss below—that could have been answered with more engagement with the bustling literature on models in science.
As a case in point, Greco says surprisingly little about the meaning of ‘idealization’ for a book with this title. He leads with examples rather than definitions, and only in chapter 7 do we get some conceptual machinery from philosophy of science to help grapple with it. This left me wanting more of a framework for thinking about idealization as I read. For example, in chapter 3, Greco discusses the logical omniscience of agents whose beliefs are modelled using possible worlds. One might say that possible worlds models are idealized, and so this property isn’t intended either to describe or prescribe anything about belief. But, says Greco, this concedes too much: it invites ‘the thought that, whatever the fruitfulness of the possible worlds framework as an approach to modeling ideal agents, we should be on the lookout for a different approach to modeling the beliefs of non-ideal agents like us’ (p. 45). Why would this be the case?
As far as I can tell, this passage uses ‘ideal’ to mean a normative ideal: an agent whose beliefs fit perfectly with the demands of rationality. But this, as Greco acknowledges, is not the meaning of ‘ideal’ in ‘idealization’ qua scientific practice. There, the term roughly refers to the practice of simplifying or distorting an object when representing it in a model (but see Frigg and Hartmann [2020]). So there’s something odd in Greco’s pushback against the hunt for models of ‘non-ideal agents like us’. For if ‘idealized’ simply has its usual meaning in philosophy of science, then a possible worlds model can be a model of an ideal agent and a model of agents like us. It is a ‘model of agents like us’ in that we are the target system of the model: the system under investigation, which the model is a representation of. It is a ‘model of ideal agents’ in Elgin’s ([1983]) sense of representation-as. The model represents us (real agents) as having consistent, deductively closed beliefs, although we don’t. It is idealized even if there is nothing ideal about having consistent beliefs.
This is important to clarify since we risk talking past one another. The person who wants to model ‘the beliefs of non-ideal agents like us’, meaning this normatively, may want a representation that captures normative failures because they want to study the impact that these have on our epistemic lives. For example, one may wish to study failures of logical reasoning, while thinking that logic is normative. Replying that epistemological models are inevitably idealized, meaning that belief is an emergent category, misses the point because it trades on the ambiguity in the term ‘ideal’.
Despite phrases like that quoted above, Greco acknowledges this ambiguity. He distinguishes between normative and descriptive idealizations in an early footnote, and then more substantially in the final chapter on non-ideal theory. This highlights that there are two ‘opponents’ whom Greco is addressing in the book, and who are sometimes unhelpfully elided. Greco focuses mostly on the ambitious modeller who seeks a unified general account of everything. But he also argues against the non-ideal modeller who is interested in the messy reality that is often left out of epistemological models. One can make these two opponents seem like the same figure by describing them in sparse ways, for example, as wanting to do away with idealization or as covering more cases than are standardly considered. But this would be to ignore actual communities of practice: the people who talk about Bayesian superbabies are not usually the same people who talk about cognitive limitations and satisficing.
Interestingly, when talking about normative and descriptive idealization in the conclusion, Greco asserts that ‘eschewing purely normative idealizations is typically easier than eschewing idealizations that play a descriptive role’ (p. 172). This is because he thinks that descriptive idealization involves holistic distortion (Rice [2018]), while normative idealization typically does not. I wasn’t entirely convinced of this, and we get only a ‘proof of concept’ involving the transitivity of preference. The point of the story is that to a revealed preference theorist, questions about the behaviour of an agent with intransitive preferences would seem ill posed, since preferences are summaries of choice behaviour that build in transitivity constitutively. Meanwhile, to a philosopher, transitivity is a normative axiom, which one can take or leave. The problem is that the example doesn’t generalize. There is no necessary connection between descriptive decision theory and revealed preference theory. Empirical decision theorists regard Savage’s P1 axiom (which requires, among other things, that preferences be transitive) as a descriptive idealization, while philosophers use the same model but regard that axiom as normative. Whether or not the idealization is holistic, there’s no difference between the normative and the descriptive here.
The thing I took away from this is the fascinating observation that one person’s normative idealization is another’s descriptive idealization. This tantalizing idea isn’t explored further, although, on Greco’s own account, it interacts with his claim that ‘there isn’t a feasible, general project of “de-idealizing” the field’ (p. 175). It may be true that ‘we’re all ideal theorists’ (p. 175), but how much modesty is forced on us as a result remains unclear.
Joe Roussos
Institute for Future Studies
joe.roussos@iffs.se
References
Elgin, C. Z. [1983]: With Reference to Reference, Indianapolis, IN: Hackett.
Frigg, R. and Hartmann, S. [2020]: ‘Models in Science’, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, available at <plato.stanford.edu/archives/spr2020/entries/models-science/>.
Lederman, H. [2018]: ‘Two Paradoxes of Common Knowledge: Coordinated Attack and Electronic Mail’, Noûs, 52, pp. 921–45.
Lewis, D. K. [1982]: ‘Logic for Equivocators’, Noûs, 16, pp. 431–41.
Rice, C. [2018]: ‘Idealized Models, Holistic Distortions, and Universality’, Synthese, 195, pp. 2795–819.
Rice, C. [2019]: ‘Models Don’t Decompose That Way: A Holistic View of Idealized Models’, British Journal for the Philosophy of Science, 70, pp. 179–208.
Roussos, J. [2025]: ‘Normative Formal Epistemology as Modelling’, British Journal for the Philosophy of Science, 76, available at <doi.org/10.1086/718493>.