AMBIGUOUS DECISIONS IN BAYESIANISM AND IMPRECISE PROBABILITY
Mantas Radzvilas, William Peden & Francesco De Pretis
When we judge how strongly to believe a hypothesis, we might have more or less evidence. Imagine that you are about to play a game involving an unfamiliar object, shaped somewhat like a gömböc, whose physical dynamics are uncertain. Suppose that the object’s sides are marked A and B. How strongly should you believe that the object will land on a side marked A on its next throw?
According to what we shall call ‘standard Bayesianism’, it should be possible to model your degrees of belief (‘credences’) as quantities measurable by a probability function that assigns a precise probability to each proposition that you have considered. These probabilities provide a scale from zero to one, representing how strongly you believe each proposition. For instance, standard Bayesians argue that we can (or should) have a credence of 50% in the hypothesis that the gömböc-like object will land on A. This credence has the appeal of being neutral between A and B. Such neutrality is arguably natural, because (by assumption) you know nothing about the object’s dynamics.
But what if a trustworthy friend told you that she had thrown the object in 500 previous games and that it landed on A in 50% of them? On a standard Bayesian analysis, given some common assumptions, your credence should be unchanged. However, something meaningful has changed: your relevant evidence about the gömböc-like object has increased. This change is also called a reduction in ‘ambiguity’ (or ‘ignorance’).
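To see why the credence is unchanged, here is a minimal sketch in Python (not the code from our article), assuming one such common assumption: a flat beta prior. The predictive probability of A is 50% both before and after your friend’s report.

```python
# A minimal sketch (not the code from our article): updating a flat
# Beta(1, 1) prior. The predictive probability that the next throw lands
# on A, after a throws on A and b throws on B, is (1 + a) / (2 + a + b).
def predictive_a(a_count, b_count, alpha=1.0, beta=1.0):
    return (alpha + a_count) / (alpha + beta + a_count + b_count)

print(predictive_a(0, 0))      # 0.5 before any evidence
print(predictive_a(250, 250))  # exactly 0.5 after 500 throws, 250 on A
```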
The standard Bayesian model of belief has long been criticized for its representation of ambiguity (Keynes [1921]; Popper [1958]; Ellsberg [1961]; Kyburg [1968]; Norton [2011]; Reiss [2014]). These challenges are part of wider debates between standard Bayesians and their rivals.
One reaction by some Bayesians to the challenge of representing ambiguity has been to develop an ‘imprecise’ Bayesian approach, where beliefs are represented using a set of probability functions, rather than a single probability function (Levi [1974]; Walley [1991]; Brady [1993]; Joyce [2005]; Benétreau-Dupin [2015]). An initial state of high ambiguity can be modelled by a set of probability functions assigning more diverse (‘divergent’) probabilities to propositions. As more evidence accumulates regarding a hypothesis, the probabilities in the set will often converge towards a common value, reflecting the fall in ambiguity.
Let’s turn back to our game. An imprecise Bayesian could represent the initial ambiguous situation (where you are very uncertain about the object’s bias or fairness) via a wide range of probabilities for the hypothesis that the next toss of the object will land on the side marked A. In contrast, after you learn that the object landed on A about 50% of the time for your friend, the probabilities will all move towards 50%, given a few assumptions about the probability functions modelling your beliefs. Thus, using an imprecise Bayesian representation of your beliefs, the reduction in ambiguity with respect to the object toss can be reflected in the reduction in the divergence of the probabilities.
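Here is a minimal sketch of this narrowing, with an illustrative prior set of our own choosing (not the prior sets from our article):

```python
# A sketch of an imprecise Bayesian's shrinking predictive interval.
# The prior set (A-biased, neutral, B-biased) is illustrative only.
PRIOR_SET = [(9, 1), (5, 5), (1, 9)]  # (alpha, beta) of each Beta prior

def predictive(alpha, beta, a_count, b_count):
    return (alpha + a_count) / (alpha + beta + a_count + b_count)

before = [predictive(a, b, 0, 0) for a, b in PRIOR_SET]
after = [predictive(a, b, 250, 250) for a, b in PRIOR_SET]
print(min(before), max(before))  # wide interval: 0.1 to 0.9
print(min(after), max(after))    # narrow: about 0.49 to 0.51
```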
There are many different debates about the comparative virtues of standard and imprecise Bayesianism (Levi [1974]; Joyce [2005], [2010]; Al-Najjar and Weinstein [2009]; White [2010]; Bradley [2017]; Bradley [2019]). Not all standard Bayesians accept that their imprecise siblings have a better representation of ambiguity and similar aspects of reasoning. However, even if imprecise Bayesianism is better in this respect, that does not answer the question, ‘which approach leads to better decisions?’. Both standard and imprecise Bayesianism are alleged to provide good guidance for our decisions. Is one identifiably better than the other in this respect, at least for particular sorts of application?
Both types of Bayesianism generally converge on the same behaviour in the long run—the point at which further evidence of the same type makes only a marginal difference to decision-making performance. Consequently, long-run performance is not a very useful criterion for comparison. Instead, we focused on the short run, because this period is when any differences in performance would be apparent.
Imprecise Bayesians normally follow a rule for revising credences that applies standard Bayesian revision methods to each probability function in their set. This rule is known to face decision-theoretic problems when representing extremely ambiguous contexts. In particular, if the initial set of probabilities is maximally divergent (or contains probability functions that are too stubborn in response to evidence), then an imprecise Bayesian reasoner can become ‘stuck’ and not revise their beliefs as they acquire new evidence (Vallinder [2018]; Peden [2024]). This phenomenon is called ‘inertia’. From a decision theory perspective, the danger is that imprecise Bayesians can fail to use reliable and relevant information.
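A small sketch illustrates a mild form of this problem, using deliberately stubborn beta priors (the parameters are illustrative; a maximally divergent set, which contains arbitrarily extreme priors, behaves even worse):

```python
# A sketch of 'stubborn' priors: the predictive interval stays wide even
# after 100 tosses at 50% heads. Parameter values are illustrative.
STUBBORN_SET = [(1000, 1), (1, 1000)]  # (alpha, beta) of each Beta prior

def predictive(alpha, beta, heads, tails):
    return (alpha + heads) / (alpha + beta + heads + tails)

interval = [predictive(a, b, 50, 50) for a, b in STUBBORN_SET]
print(min(interval), max(interval))  # still roughly 0.05 and 0.95
```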
In such scenarios, imprecise Bayesians may reasonably compromise their representation of ambiguity for the sake of successful decision-making (Joyce [2010], pp. 291–92). However, we found that the tension between these objectives goes far beyond inertia. We call this tension the ‘ambiguity dilemma’.
Following the lead of others working in this area (Kyburg and Teng [1999]; Radzvilas et al. [2021], [2023]), in our article we developed a comparative study: we first coded algorithms (‘players’) based on the two Bayesianisms, and then compared the short-run performances of these players in a classic decision-making problem that features ambiguity. The decision problem involves a series of binomial trials: events with just two possible outcomes, where each outcome’s probability is the same in every trial. (The two outcomes themselves may have different probabilities.) We used the language of coin tosses, but with the important proviso that players only know that the tosses are binomial. They have no relevant background knowledge to help them estimate the bias or fairness of the coin tosses.
The decision problem consists of a series of ‘games’ in which a player can bet on the last toss, after four new observations of coin tosses. A parameter δ (which can take any value between zero and one) is randomly generated for each game. Players can, first, bet on heads, winning (1 − δ) if the toss lands heads and losing δ if it does not; second, bet on tails, winning δ if the toss lands tails and losing (1 − δ) if it does not; third, abstain, for a guaranteed payoff of zero.
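In code, the payoff structure looks like this (a sketch; the function name and labels are ours):

```python
# A sketch of one game's payoffs, as described above.
def game_payoff(action, outcome, delta):
    """Payoff of betting 'heads' or 'tails', or abstaining, on one toss."""
    if action == 'abstain':
        return 0.0
    if action == 'heads':
        return 1 - delta if outcome == 'heads' else -delta
    if action == 'tails':
        return delta if outcome == 'tails' else -(1 - delta)
    raise ValueError(f'unknown action: {action}')

print(game_payoff('heads', 'tails', 0.3))  # -0.3: a lost bet on heads
```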
Players accumulate observations as they play these games. We created ‘tests’ of players, each consisting of 1000 games. We investigated coin biases towards heads of 0.9, 0.7, 0.5, 0.3, and 0.1. The sequences of coin tosses were randomly generated but common across all players, as were the sequences of δ values. Players made their decisions separately.
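A rough reconstruction of this test harness (our sketch, which assumes that δ is drawn uniformly and that each game comprises four observed tosses plus the toss that is bet on):

```python
import random

# A sketch of the test harness: 1000 games per test, with toss and delta
# sequences shared by all players. We assume delta is uniform on (0, 1)
# and each game has five tosses: four observed, plus the toss bet on.
BIASES = [0.9, 0.7, 0.5, 0.3, 0.1]  # probability of heads
GAMES_PER_TEST = 1000
TOSSES_PER_GAME = 5

def make_test_sequences(bias, seed=0):
    rng = random.Random(seed)
    tosses = [['heads' if rng.random() < bias else 'tails'
               for _ in range(TOSSES_PER_GAME)]
              for _ in range(GAMES_PER_TEST)]
    deltas = [rng.random() for _ in range(GAMES_PER_TEST)]
    return tosses, deltas
```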
Each player has a decision rule, in addition to their standard or imprecise Bayesian learning rule. Our standard Bayesian player tries to maximize their expected winnings (‘payoffs’) in each game: they choose an action whose probability-weighted sum of payoffs across the possible outcomes is at least as high as that of any other action. There is no comparably popular decision rule for imprecise Bayesianism, but we implemented nine different approaches that an imprecise Bayesian might use in our decision problem.
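Here is a sketch of the expected-payoff rule (the function name is ours; the simplified expectations p − δ and δ − p follow from the payoffs above):

```python
# Expected-payoff maximization: with credence p in heads, the expectations
# reduce to p - delta (heads), delta - p (tails), and 0 (abstain).
def expected_payoff_choice(p, delta):
    expectations = {
        'heads': p * (1 - delta) + (1 - p) * (-delta),  # = p - delta
        'tails': (1 - p) * delta + p * (-(1 - delta)),  # = delta - p
        'abstain': 0.0,
    }
    return max(expectations, key=expectations.get)

print(expected_payoff_choice(0.7, 0.5))  # 'heads'
print(expected_payoff_choice(0.2, 0.6))  # 'tails'
```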
For the prior (credences at the start of the tests) of our standard Bayesian player—let’s call this algorithm ‘Stan’—we used a flat beta prior, which is the prior most Bayesian statisticians would use for our decision problem. A flat beta prior initially assigns equal probabilities to heads and tails. It also means that, as Stan observes tosses, its credence in heads (or tails) rapidly approximates the frequency of heads (or tails) among its observations. For example, if Stan has observed 100 tosses and 90 of them landed heads, then Stan’s credence that the next toss will land heads will be very close to 0.9. Our imprecise Bayesian players use a set of priors, ranging from biased towards heads to biased towards tails, to represent (at least partly) the high initial ambiguity in the decision problem.
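In code, Stan’s credence is just the posterior predictive probability of a flat Beta(1, 1) prior (a sketch):

```python
# Stan's credence in heads under a flat Beta(1, 1) prior.
def stan_credence_heads(heads, tails):
    return (1 + heads) / (2 + heads + tails)

print(stan_credence_heads(90, 10))  # ~0.892, close to the sample's 0.9
```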
Our main result was that Stan did significantly better than the imprecise Bayesian players in terms of average profits across games and coin biases. When the bias was strongly towards or against heads, Stan consistently outperformed the imprecise Bayesians. Some players could match Stan’s performance given other biases, but they never did better.
The cause of the difference in performance was that, in binomial trials, even small samples can provide practically significant information for guiding decisions. Stan was highly successful at identifying and using this information. By contrast, the imprecise Bayesians’ divergent sets of probabilities were slower to change, so they more often bet incorrectly (losing money) or abstained (missing winnings that Stan was able to obtain).
Did this performance by Stan come at the cost of early net losses? A net loss occurs when a player’s losses exceed their winnings. The appeal of ‘cautious’ decision-making is often used to argue for imprecise Bayesian decision rules. Yet, surprisingly, Stan did better at avoiding net losses than all the imprecise Bayesian players. The reason is that a net loss is determined by both losses and winnings: some losses are unavoidable in this decision problem, but Stan was better at using early information to minimize losses and maximize winnings.
Suspending judgement and refusing to bet either way in ambiguous situations like our decision problem has been regarded as a strength of the imprecise Bayesian approach (Walley [1991]; Bradley [2019]). However, for those imprecise Bayesian players who abstained in our tests, it proved costly in terms of comparative performance. This is perhaps counterintuitive: abstaining in a particular game has no ‘cost’ in the sense of lost money. But if one could have bet reliably, then abstention tends to cost potential winnings. Stan was able to make successful bets in cases where some imprecise Bayesian players abstained, and these successful bets tended to more than compensate for Stan’s losses. In contrast, when the imprecise Bayesian players who sometimes abstained did make losses, they lacked the cushion of earlier winnings, and hence made net losses.
An unknown bias is the type of situation that is often used to motivate imprecise Bayesian representations of ambiguity (Walley [1991], p. 222). Yet doing so comes at a decision-making cost. None of the rules that we investigated avoids this tough choice between representing ambiguity and comparative decision-making performance. Hence, the ambiguity dilemma is something that imprecise Bayesians must confront in some types of decision problem.
Imprecise Bayesians are usually pragmatic about representing ambiguity. Thus, it is consistent with their approach to use a less divergent set, which would improve performance relative to Stan. However, this would compromise their representation of the ambiguity in the decision problem. This sacrifice would be proportional: the less divergent the set, the poorer the representation of the ambiguity.
Managing this ambiguity dilemma will presumably depend on the context. Consider artificial intelligence (AI): Suppose that imprecise Bayesianism is closer than standard Bayesianism to how humans reason under ambiguity, as some of its advocates claim (Levi [1974]; Benétreau-Dupin [2015]; Colombo et al. [2021]). If we want an AI that reasons like human beings, then we might favour more divergent probabilities in decision problems analogous to those in our article. If we want an AI to make successful decisions in practical applications, then we might forgo the ambiguity representation tools of imprecise Bayesianism. Many applications of Bayesian reasoning will fall between these extremes.
Therefore, the ambiguity dilemma is not, in itself, a reason for Bayesians to abandon the imprecise approach. We expect debates between standard and imprecise Bayesians to continue for many years to come. However, our results show how the costs of representing ambiguity using the imprecise Bayesians’ tools are far more widespread than previously discussed. This is not only significant for formal epistemology—it should also be considered when making choices about whether to employ imprecise Bayesianism for practical decisions (Bradley and Steele [2015]). Since imprecise Bayesianism is supposed to guide decisions as well as model ambiguity, the ambiguity dilemma is something that imprecise Bayesians must take seriously.
Mantas Radzvilas
University of Konstanz
mantas.radzvilas@uni-konstanz.de
William Peden
Johannes Kepler University, Linz
william.peden@jku.at
Francesco De Pretis
Indiana University Bloomington
and
University of Modena and Reggio Emilia
francesco.depretis@unimore.it
References
Al-Najjar, N. I. and Weinstein, J. [2009]: ‘The Ambiguity Aversion Literature: A Critical Assessment’, Economics and Philosophy, 25, pp. 249–84.
Benétreau-Dupin, Y. [2015]: ‘The Bayesian Who Knew Too Much’, Synthese, 192, pp. 1527–42.
Bradley, R. [2017]: Decision Theory with a Human Face, Cambridge: Cambridge University Press.
Bradley, S. [2019]: ‘Imprecise Probabilities’, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
Bradley, S. and Steele, K. [2015]: ‘Making Climate Decisions’, Philosophy Compass, 10, pp. 799–810.
Brady, M. [1993]: ‘The Bayesian Who Knew Too Much’, British Journal for the Philosophy of Science, 44, pp. 357–76.
Colombo, M., Elkin, L. and Hartmann, S. [2021]: ‘Being Realist about Bayes, and the Predictive Processing Theory of Mind’, British Journal for the Philosophy of Science, 72, pp. 185–220.
Ellsberg, D. [1961]: ‘Risk, Ambiguity, and the Savage Axioms’, The Quarterly Journal of Economics, 75, pp. 643–69.
Joyce, J. M. [2005]: ‘How Probabilities Reflect Evidence’, Philosophical Perspectives, 19, pp. 153–78.
Joyce, J. M. [2010]: ‘In Defence of Imprecise Credences’, Philosophical Perspectives, 24, pp. 281–323.
Keynes, J. M. [1921]: A Treatise on Probability, London: Macmillan.
Kyburg, H. E. [1968]: ‘Bets and Beliefs’, American Philosophical Quarterly, 5, pp. 54–63.
Kyburg, H. E. and Teng, C. M. [1999]: ‘Choosing among Interpretations of Probability’, in K. B. Laskey and H. Prade (eds), Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, San Francisco CA: Morgan Kaufmann, pp. 359–65.
Levi, I. [1974]: ‘On Indeterminate Probabilities’, Journal of Philosophy, 71, pp. 391–418.
Norton, J. D. [2011]: ‘Challenges to Bayesian Confirmation Theory’, in P. S. Bandyopadhyay and M. R. Forster (eds), Philosophy of Statistics, Oxford: Elsevier, pp. 391–439.
Peden, W. [2024]: ‘Evidentialism, Inertia, and Imprecise Probability’, British Journal for the Philosophy of Science, 75.
Popper, K. R. [1958]: ‘A Third Note on Degree of Corroboration or Confirmation’, British Journal for the Philosophy of Science, 8, pp. 294–302.
Radzvilas, M., Peden, W. and De Pretis, F. [2021]: ‘A Battle in the Statistics Wars: A Simulation-Based Comparison of Bayesian, Frequentist and Williamsonian Methodologies’, Synthese, 199, pp. 13689–748.
Radzvilas, M., Peden, W. and De Pretis, F. [2023]: ‘Making Decisions with Evidential Probability and Objective Bayesian Calibration Inductive Logics’, International Journal of Approximate Reasoning, 162.
Reiss, J. [2014]: ‘What’s Wrong with Our Theories of Evidence?’, Theoria, 29, pp. 283–306.
Vallinder, A. [2018]: ‘Imprecise Bayesianism and Global Belief Inertia’, British Journal for the Philosophy of Science, 69, pp. 1205–30.
Walley, P. [1991]: Statistical Reasoning with Imprecise Probabilities, London: Chapman and Hall.
White, R. [2010]: ‘Evidential Symmetry and Mushy Credence’, Oxford Studies in Epistemology, 3, pp. 161–86.
FULL ARTICLE
Radzvilas, M., Peden, W. and De Pretis, F. [2027]: ‘The Ambiguity Dilemma for Imprecise Bayesians’, British Journal for the Philosophy of Science, 78, <doi.org/10.1086/729618>.
© The Author (2024)