50th Annual Lecture Series, 2009-10

Epidemiological Method, Causal Inference, and Non-Randomized Statistics: The Case of Three Mile Island
Kristin Shrader-Frechette
University of Notre Dame, Departments of Philosophy and Biological Sciences
Friday, 18 September 2009, 3:30 pm

Abstract: This paper uses recent epidemiological studies of the Three Mile Island (TMI) nuclear accident to argue for four claims. These are (i) that the dominant scientific position on TMI health effects (that the increased TMI-area cancers were probably caused by accident-related stress, not radiation) is arguably wrong; (ii) that an alternative scientific conclusion is more likely correct (radiation probably caused the increased health effects); (iii) that two methodological errors likely contributed to the erroneous TMI conclusions; and (iv) that avoiding these errors in the future requires a fundamental shift in epidemiological method. The two methodological errors are misunderstanding the randomization conditions necessary for the use of classical statistics, and misunderstanding the constraints on causal inference in observational, non-experimental studies. To avoid these errors, the paper argues, epidemiologists will likely need to make fundamental changes to their methods, including much greater use of inference to the best explanation, especially contrastive explanation, and avoiding an overemphasis on black-box, or risk-factor, epidemiology to the exclusion of eco-epidemiology.

A Case for Scientific Pluralism
Hasok Chang
University College London, Department of Science and Technology Studies
Friday, 13 November 2009, 3:30 pm

Abstract: I outline various arguments for normative scientific pluralism, by which I mean the doctrine that it is beneficial to have multiple systems of knowledge in each area of science. I provide a different set of arguments for each of the various possible views about the aims of science. If the main aim of science is taken to be Truth, the chief argument for pluralism is based on the unpredictability of scientific development: since we do not know which line of inquiry will ultimately be successful, it makes sense to cultivate various lines. If the main aim of science is empirical adequacy or understanding, there are further arguments for pluralism, because different systems of knowledge can contribute to the aim in different ways. If we consider that science has various aims simultaneously, then there are still further pluralist arguments. I close by indicating how history and philosophy of science can help put scientific pluralism into practice by assisting with the proliferation of systems of knowledge.

Science, Supposition and Reference: The New Program
Robert Rynasiewicz
Johns Hopkins University, Department of Philosophy
Friday, 4 December 2009, 3:30 pm

Abstract: ‘Supposition’ is taken here in a strictly non-epistemic sense, as, e.g., in supposing for the sake of argument. Suppositions may involve what I call ‘objects of supposition’, i.e., entities whose existence is granted only courtesy of the supposition, e.g., a fictional character or object that exists only according to the story. Supposition and objects of supposition have a rich and characteristic linguistic phenomenology common to discourse about fictions, mathematical entities, and discarded hypothetical entities from the history of science. Indeed, barring magical theories of reference, all hypothetical entities in science should be regarded as objects of supposition. This raises a puzzle, viz., how it is possible to "discover" a hypothetically postulated entity.

After we resolve this apparent puzzle, we are left with a view of science that transcends the traditional realism-antirealism debates. The key distinction is not between the observable and the unobservable, but between the referring and the non-referring. This diachronically shifting distinction is necessarily cumulative, and it puts into new light both the traditional argument from instrumental success and the pessimistic meta-induction. There is also a corollary underwriting various applications of Ockham's razor.

Understanding, Formal Verification, and the Philosophy of Mathematics
Jeremy Avigad
Carnegie Mellon University, Department of Philosophy
Friday, 5 February 2010, 3:30 pm

Abstract: The philosophy of mathematics has long been focused on determining the methods that are appropriate for justifying claims of mathematical knowledge, and the metaphysical considerations that make them so. Of late, however, a number of philosophers have noted that a much broader range of normative judgments arises in ordinary mathematical practice; for example, questions can be natural, theorems important, proofs explanatory, concepts powerful, and so on. Such judgments are often viewed as providing assessments of mathematical understanding, something more complicated and mysterious than mathematical knowledge.

Meanwhile, in a branch of computer science known as "formal verification," interactive proof systems have been developed to support the construction of complex formal axiomatic proofs. Such efforts require one to develop models of mathematical language, inference, and proof that are more elaborate than the simple foundational models of the last century. In this talk, I will explain how these models illuminate various aspects of mathematical understanding, and discuss ways that such work can inform, and be informed by, a more robust philosophy of mathematics.

Discovering Mechanisms in Cognitive Neurobiology: An Experimentalist's Perspective
Edda Thiels
University of Pittsburgh, Department of Neurobiology
Friday, 26 February 2010, 3:30 pm

Abstract: Discovering the biological underpinnings of learning and memory has been a long-standing goal in the field of neuroscience. During the past three to four decades, cellular and molecular neurobiologists have contributed to, if not greatly determined, the experimental approaches toward that goal and, as a consequence, its conceptualization and realization. Using as examples research from my own laboratory and that of others, I will discuss cellular and molecular entities, the activities ascribed to them, and the ongoing development of mechanistic schemata put forth in an effort to identify the neurobiological mechanisms of learning and memory. As I will show, these schemata can have explanatory value at the synaptic, i.e., cellular, level. However, it is an open question whether they ought to, and can, carry explanatory power at the cognitive, i.e., behavioral-phenomenological, level. I hope to end with an open discussion of the value of mechanistic schemata at levels of operation multiple tiers removed from the cognitive-behavioral level for advancing our 'understanding' of learning and memory.

Somewhat Antirealist Bottom-up Realism
C. Kenneth Waters
University of Minnesota, Center for Philosophy of Science
Friday, 26 March 2010, 3:30 pm

Abstract: I will begin by reviewing the two chief arguments for and against realism: the miracle argument and the pessimistic meta-induction. I will argue that the success of biological science does merit explanation, but that the success to be explained is the success of its practice, not the success of its theoretical explanations. The pessimistic meta-induction, like the miracle argument, is onto something significant. Biologists' claims about the fundamental nature of the processes they have investigated have turned out to be wrong, time after time; this is as true of biology as of other sciences. But I will argue that in the case of many biological sciences (at least), the mistaken claims are those associated with theorizing about the fundamentals. Situated claims, such as claims made in the context of manipulating processes and explaining experimental results, turn out to have staying power. This suggests that the antirealist division in many biological sciences (at least) ought to be drawn between fundamental claims generated by theorizing about the fundamentals and situated claims grounded in experimental practices. I will argue that in the case of many biological sciences (at least), it is the truth of situated claims, not the truth of claims about supposed fundamental structures and processes, that explains the success of scientific practices.

The Paradoxes of Perception: Cartesian Sources -- and Solutions
Catherine Wilson
University of Aberdeen, Department of Philosophy
Friday, 9 April 2010, 3:30 pm

Abstract: Sensory perception is commonly and intuitively considered to involve two things: an object, scene, event, or quality that is real, and someone's experience (what Descartes calls an "idea") of that thing, which matches it or fails to. There are notorious problems with this "two things" conceptualization, including scepticism (the "veil of ideas") and the evident relativity of objects, scenes, events, and qualities to various perceptual systems in nature. I try to show how Descartes brought these difficulties upon us, but also how thinking more rigorously and consistently along Cartesian lines can help to remove them.

The Annual Lecture Series is hosted by the Center for Philosophy of Science.

Generous financial support for this lecture series has been provided by
the Harvey & Leslie Wagner Endowment.