
56th Annual Lecture Series, 2015-16


Crime, Punishment and 'Specific Evidence'
Katie Steele
London School of Economics
Friday, 11 September 2015, 3:30 pm
817R Cathedral of Learning

Abstract: Various real and imagined criminal law cases provoke the intuition that there is something wanting with statistical evidence in the courtroom (i.e. the appeal to the frequency of properties in a sample population whose members are similar to the person of interest). But in the cases in question, the probabilities of guilt or culpability are apparently very high—high enough to meet the relevant standard of evidence. This problem is known as the 'proof paradox'. An oft-expressed position is that legal verdicts should be based on 'specific' rather than 'general' evidence of guilt. But despite considerable academic debate, it remains unclear whether a position along these lines can be defended, and if so, whether the problem with general evidence is ultimately moral or epistemic in character. We argue that, all other things being equal, moral considerations should not influence the import of legal evidence. Moreover, the most promising epistemic distinction between specific and general evidence does not warrant downgrading the latter in a legal trial. Finally, we argue that, in the problem cases, the probabilistic inferences are, after all, wanting. We explain this in terms of 'meta-uncertainty', and suggest a rethinking of the roles for, and relationship between, specific and general evidence. (This is joint work with Mark Colyvan.)
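A stock example from this literature, L. Jonathan Cohen's "gatecrasher" case, makes the tension concrete; the numbers below are the usual illustrative ones, not figures from the talk:

```python
# Cohen's gatecrasher case: 1000 people are in the stands at a rodeo,
# but only 10 bought tickets. A randomly chosen spectator is sued.
attendees = 1000
ticket_holders = 10

# Purely statistical ("general") evidence of liability:
p_gatecrashed = (attendees - ticket_holders) / attendees

print(p_gatecrashed)  # 0.99
```

The probability of liability exceeds any plausible probabilistic standard of proof, yet intuitively this bare statistic seems insufficient for a verdict against this particular person; that gap is the proof paradox.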

What Kind of Cancer Did You Have? (And other Unanswerable Questions)
Anya Plutynski
Washington University in St. Louis
Friday, 6 November 2015, 3:30 pm
817R Cathedral of Learning

Abstract: How ought we to classify cancers? Is cancer one, or many kinds of thing? If many, how many? What would count as a "natural" classification of cancers? In this talk, I argue against a hierarchical classification of cancer. Rather, I contend that there are several equally legitimate, cross-cutting modes of classification of cancer; this is because cancer is a family of processes with variable natural histories, whose causal bases are multiply realized. Moreover, there are several equally legitimate scientific purposes that cancer classifications serve. Some have defended the view that cancer is a homeostatic property cluster kind, or kinds (Williams, 2011; Khalidi, 2013). While there are some common causal mechanisms “for” cancer, they operate in different ways in different cancers, and these mechanisms are not, by and large, “homeostatic” in character. I thus take issue both with Khalidi (2013), who argues that cancer is a “homeostatic property cluster” kind, and with Lange (2007), who argues that cancer is a 'hodgepodge' of a kind and that molecular medicine is rendering diseases "obsolete" as natural kinds. Despite the fact that they arrive at opposing conclusions, both make similar arguments, drawing upon similar observations, namely: that cancers share many “hallmarks,” each driven by distinct molecular and genetic features of cancer cells. Where they both err, in my view, is in taking the molecular genetic characterization of cancer as casting the deciding vote on natural classification. While genetic and molecular features of cancer cells may be essential to their status as kinds of cells (though I'm skeptical), we should be extremely wary of extrapolating from cellular properties to properties of cancers as a whole. I consider several case studies that suggest why, and speak briefly about the implications of this view for Obama's recent launch of the Precision Medicine Initiative.

Kant on Mathematical Force Laws
Daniel Warren
University of California, Berkeley
Friday, 29 January 2016, 3:30 pm
817R Cathedral of Learning

Abstract: In the Metaphysical Foundations of Natural Science, Kant is clearly committed to the idea that the laws of the fundamental forces of matter can only be known empirically. In the General Remark to the Dynamics chapter of that work, he says, “no law of either attractive or repulsive force may be risked on a priori conjectures. Rather, everything, even universal attraction as the cause of weight, must be inferred, together with its laws, from data of experience.” (MF 4:534)
And yet, Kant also presents what seem like a priori arguments for the inverse-square character of the law of gravitational attraction. Such passages appear in the Prolegomena (§32), in the pre-Critical Physical Monadology and, again, in the Dynamics chapter of the MF. These arguments, which are, in part, explicitly geometrical in character, concern the properties of spheres, specifically the fact that the area of a sphere varies with the square of the radius. Moreover, there are closely parallel arguments presented in the same sections of the MF, as well as the Physical Monadology, purporting to establish some kind of inverse-cube law for matter’s original repulsive force, and to do so by appeal to the fact that the volume of a sphere varies with the cube of the radius. In this paper I want to address two questions. What is the character of the seemingly a priori arguments for these fundamental force laws? And do these aprioristic considerations leave any room for the “data of experience” in grounding these laws? With the first question, I will be concerned, in particular, with the relative roles of mathematical and more metaphysical (perhaps “transcendental”) aprioristic lines of thought in Kant’s treatment of these force laws.
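The geometric core of the two parallel arguments can be sketched in modern notation (a reconstruction, not Kant's own formulation): a fixed quantity of attractive force diffusing from a point is spread over concentric spherical surfaces, while the original repulsive force fills spherical volumes, so each must thin out in inverse proportion to the region it occupies:

```latex
% Attraction spreads over spherical surfaces:
A(r) = 4\pi r^{2}
  \quad\Longrightarrow\quad
  F_{\text{attr}}(r) \propto \frac{1}{r^{2}}

% Repulsion fills spherical volumes:
V(r) = \tfrac{4}{3}\pi r^{3}
  \quad\Longrightarrow\quad
  F_{\text{rep}}(r) \propto \frac{1}{r^{3}}
```

The dilution step — that the same total force distributed over a region growing as r² (or r³) must weaken as 1/r² (or 1/r³) — is precisely what the geometry is meant to secure a priori.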

Measurement and Empirical Content
Chris Smeenk
University of Western Ontario
Friday, 12 February 2016, 3:30 pm
817R Cathedral of Learning

Abstract: I will defend the view that structure on the space of theoretical models is needed to understand how theories represent nature. Many other views, by contrast, locate empirical content within a single model. There are several ways in which the need for structure on the space of models can come to light, but I will focus on measurement. Physical theories provide us with an account of what systems can be used to reliably measure some fundamental quantity introduced by the theory, and over what domains they can be successfully applied. Assessing the reliability of measurements characterized in this way requires claims that extend beyond a single model, since these implicitly consider a range of counterfactual circumstances. Capturing this modal dimension of measurement requires an appeal to structures defined on the space of models. Philosophers have been far too willing to regard assessments of instrumental reliability as part of the messy details of scientific work that can be neglected in considering the structure of scientific theories. On my alternative view, putting these questions front and center leads to a strikingly different account of empirical content, with implications for underdetermination and continuity through theory change. I will sketch the view, consider several objections to it, and discuss some of these implications.

Using Robots to Study the Evolutionary Transition from Body to Brain to Mind
Josh Bongard
University of Vermont
Friday, 18 March 2016, 3:30 pm
817R Cathedral of Learning

Abstract: In this talk I will outline one of the long-term goals of evolutionary robotics that has philosophical appeal: how can the abstract aspects of cognition (such as categorization, self-awareness, or language) ultimately be grounded in the tangible dynamics of brain-body-environment interaction?

In the first third of my talk I will explain the standard methodology of evolutionary robotics, and place it in the larger context of embodied cognition research. In the middle third, I will describe a series of experiments that ground some of these cognitive building blocks in perception and action. I will extrapolate these works to argue that evolutionary robotics is a good candidate methodology for creating a smooth gradient along which we can engineer incrementally more cognitive machines. In the final third, I will propose how this approach may provide a solution to a major growing ethical concern, which is how to create superintelligent yet human-friendly machines.

Reasonable Doubt: Epistemological Reflections on Jurors' Decision-making
Marion Vorms
Birkbeck College, University of London/Paris 1
Friday, 22 April 2016, 3:30 pm
817R Cathedral of Learning

Abstract: The goal of this paper is to examine whether and how the notion of ‘reasonable doubt’ might be a useful concept for studying belief dynamics and decision-making in general — jurors being taken as a model for everyday reasoners and decision-makers.

In Common Law systems, jurors in criminal trials are instructed to return a verdict of guilty if and only if they estimate that the evidence presented in court does not leave any reasonable doubt as to whether the defendant is guilty. Although there is no consensus about the meaning of this standard of proof in legal theory and practice, interpreting it in terms of a probabilistic threshold seems rather natural and sensible.

Taken more generally, the notion of reasonable doubt seems to be prima facie easily definable in decision-theoretic terms: whether doubt about a given hypothesis is reasonable or not (and hence whether action based on it is rational or not) seems to depend on the degree of confirmation of this hypothesis, and on a probabilistic threshold that itself depends on the decisional context (utilities). But is that all we mean when we say that a given agent’s doubting this or that hypothesis is (un)reasonable? Isn’t there more to “reasonable doubt” than the complement of “reasonable belief” — more than a probabilistic threshold for action? And don’t we need something more than degrees of belief (“outright belief”, or “acceptance”) to account for our intuitions regarding whether doubt is reasonable or not in different situations? In other words, by approaching belief revision and decision-making through the lens of reasonable doubt, are we forced to amend or complement the Bayesian framework with another level of analysis? The goal of this talk is to tackle these issues through the analysis of different examples drawn from judicial, scientific, and everyday contexts.
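The decision-theoretic reading mentioned above can be sketched in a few lines. The threshold formula is the standard expected-utility one; the cost values are illustrative assumptions, not figures from the talk:

```python
def conviction_threshold(cost_convict_innocent, cost_acquit_guilty):
    """Probability of guilt above which convicting maximizes expected
    utility, obtained by equating the expected costs of the two errors:
    (1 - p) * cost_convict_innocent = p * cost_acquit_guilty."""
    return cost_convict_innocent / (cost_convict_innocent + cost_acquit_guilty)

# Blackstone-style asymmetry: convicting an innocent person is taken
# to be ten times as bad as acquitting a guilty one (an assumption).
threshold = conviction_threshold(10, 1)
print(round(threshold, 3))  # 0.909
```

On this picture, "reasonable doubt" is simply a guilt probability falling short of the utility-determined threshold; the question pressed in the talk is whether that exhausts the notion.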



The Annual Lecture Series is hosted by the Center for Philosophy of Science.

Generous financial support for this lecture series has been provided by
the Harvey & Leslie Wagner Endowment.      

Revised 3/30/16 - Copyright 2012