Other Philosophy Papers

A Bayesian analysis of self-undermining arguments in physics

Analysis 83 (2023) 295-298.

Some theories in physics seem to be ‘self-undermining’: that is, if they are correct, we are probably mistaken about the evidence that apparently supports them. For instance, certain cosmological theories have the apparent consequence that most observers are so-called ‘Boltzmann brains’, which exist only momentarily and whose apparent experiences and memories are not veridical. I provide a Bayesian analysis to show why theories of this kind are not, after all, supported by the apparent evidence in their favor, drawing on the distinction between ‘primary evidence’, which directly supports a theory, and ‘proximal evidence’, which is our evidence (largely records and testimony) for the primary evidence.
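As a rough illustration of the structure of that argument, here is a minimal toy computation; it is not the paper's own model, and every probability in it is a made-up illustrative value chosen only to exhibit the pattern.

```python
# Toy Bayesian illustration of a self-undermining theory.
# T = a 'self-undermining' cosmological theory (most observers are Boltzmann brains)
# N = a 'normal' rival theory
# E = the primary evidence (the cosmological facts that T predicts)
# R = the proximal evidence (our records and memories apparently showing E)
# All numbers below are hypothetical.

prior = {"T": 0.5, "N": 0.5}

# Likelihood of the primary evidence under each theory.
p_E_given = {"T": 0.9, "N": 0.1}

# Likelihood of the records given theory and primary evidence.
# Under N, records track the facts; under T, records are mostly random
# fluctuations and barely depend on whether E actually obtains.
p_R_given = {("N", True): 0.95, ("N", False): 0.05,
             ("T", True): 0.01, ("T", False): 0.01}

def p_R(theory):
    """P(R | theory), marginalising over whether the primary evidence holds."""
    pE = p_E_given[theory]
    return pE * p_R_given[(theory, True)] + (1 - pE) * p_R_given[(theory, False)]

joint = {t: prior[t] * p_R(t) for t in prior}
total = sum(joint.values())
posterior = {t: joint[t] / total for t in joint}

print(posterior)  # roughly {'T': 0.067, 'N': 0.933}
```

In this toy setup the self-undermining theory assigns its predicted primary evidence high probability, but because it also undermines the reliability of our records, the proximal evidence ends up telling against it.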


Diachronic Rationality and Prediction-Based Games (2010)

Proceedings of the Aristotelian Society 110 (2010) 243-266.

I explore the debate between causal and evidential decision theory, and its recent developments in the work of Andy Egan, by means of some simple games based on agents' predictions of one another's actions. My main focus is on the requirement that rational agents act in a way that is consistent over time, and on its implications for such games and their more realistic cousins.
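For readers unfamiliar with the territory, the following sketch shows one standard prediction-based game, Newcomb's problem, and how causal and evidential decision theory come apart on it; this is only an illustrative example, not one of the specific games discussed in the paper, and the predictor's accuracy is a hypothetical figure.

```python
# Newcomb's problem: a predictor puts $1,000,000 in an opaque box iff it
# predicts you will take only that box; a transparent box always holds $1,000.
# Hypothetical accuracy for the predictor.
ACCURACY = 0.9

# Evidential decision theory: treat your own act as evidence about the prediction.
edt_one_box = ACCURACY * 1_000_000
edt_two_box = (1 - ACCURACY) * 1_000_000 + 1_000

# Causal decision theory: the prediction is already fixed, so your act cannot
# cause the money to be there; with P(money already in box) = p, two-boxing
# dominates for every p.
def cdt(p_money_already_there):
    one_box = p_money_already_there * 1_000_000
    two_box = p_money_already_there * 1_000_000 + 1_000
    return one_box, two_box

print(edt_one_box, edt_two_box)  # 900000.0 101000.0 -> EDT recommends one-boxing
print(cdt(0.5))                  # (500000.0, 501000.0) -> CDT recommends two-boxing
```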


Justifying conditionalisation: conditionalisation maximises expected epistemic utility (2005)

(Hilary Greaves and DW) Mind 115 (2006) 607-632.

According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality: whence the normative force of the injunction to conditionalize?

There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent's epistemic utility is to depend both upon the actual state of the world and on the agent's credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
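To illustrate both the updating rule and the maximization claim numerically, here is a small sketch; it is not the paper's proof, it assumes the logarithmic score as the epistemic utility function, and the worlds, prior, and rival updating rules are all hypothetical. For each candidate new credence adopted upon learning X, it computes the expected epistemic utility from the perspective of the prior conditional on X.

```python
import math

# Three worlds; the evidence X is the proposition that the true world is w1 or w2.
prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
X = {"w1", "w2"}  # the evidence actually learned

def conditionalize(p, X):
    """The Bayesian rule: p_new(w) = p_old(w | X)."""
    pX = sum(p[w] for w in X)
    return {w: (p[w] / pX if w in X else 0.0) for w in p}

def log_score(credence, true_world):
    """A strictly proper epistemic utility: log of the credence in the truth."""
    return math.log(credence[true_world]) if credence[true_world] > 0 else float("-inf")

def expected_utility(new_credence, X):
    """Expected epistemic utility of adopting new_credence upon learning X,
    evaluated relative to the prior conditional on X."""
    cond = conditionalize(prior, X)
    return sum(cond[w] * log_score(new_credence, w) for w in X)

candidates = {
    "conditionalization": conditionalize(prior, X),
    "stick with prior":   dict(prior),
    "jump to certainty":  {"w1": 1.0, "w2": 0.0, "w3": 0.0},
    "uniform on X":       {"w1": 0.5, "w2": 0.5, "w3": 0.0},
}

for name, credence in candidates.items():
    print(name, expected_utility(credence, X))
# Conditionalization (0.4, 0.6, 0.0) comes out highest (about -0.673, versus
# -0.693 for the uniform rule and worse for the others), as the propriety of
# the log score guarantees.
```

The general theorem is of course stronger than this toy comparison: under the stated conditions, conditionalization beats every rival updating rule, not just a handful of alternatives.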