University of Pittsburgh · School of Arts and Sciences
Past Graduates
Holly Andersen (2009)
Philosophy of Science,
Philosophy of Psychology & Cognitive Science,
Philosophy of Mind,
Epistemology / Metaphysics
Currently at Simon Fraser University (assistant professor)
Dissertation:
The Causal Structure of Conscious Agency
I examine the way implicit causal assumptions about features of agency and action affect the philosophical conclusions we reach from neuroscientific results, and I provide a positive account of how to incorporate scientific experiments on various features of agency into philosophical frameworks of volition, using tools from interventionist causal analysis and research on human automatism. I also provide new, general arguments for the autonomy of any higher-level causes, including but not limited to features of conscious agency (see the description on my webpage for more detail).
Peter Gildenhuys (2009)
Philosophy of Biology, Philosophy of Science, Biomedical Ethics, Virtue Ethics, Causal Reasoning, Philosophy of Language
Currently at Lafayette College (assistant professor)
Dissertation:
A Causal Interpretation of Selection Theory
My dissertation is an inferentialist account of classical population genetics. I present the theory as a definite body of interconnected inferential rules for generating mathematical models of population dynamics. To state those rules, I use the notion of causation as a primitive. First, I put forward a rule stating the circumstances of application of the theory, one that uses causal language to pick out the types of entities over which the theory may be deployed. Next, I offer a rule for grouping such entities into populations based on their competitive causal relationships. Then I offer a general algorithm for generating classical population genetics models suitable for such populations on the basis of information about what causal influences operate within them.
Dynamical models in population genetics are designed to demystify natural phenomena, chiefly to show how adaptation, altruism, and genetic polymorphism can be explained in terms of natural rather than supernatural processes. In order for the theory to serve this purpose, it must be possible to understand, in a principled fashion, when and how to deploy the theory. By presenting the theory as a system of ordered inferential rules that takes causal information as its critical input and yields dynamical models as its outputs, I show explicitly how classical population genetics functions as a non-circular theoretical apparatus for generating explanations.
Though focused on the foundations of population genetics, my dissertation has implications for a number of important philosophical disputes. Foremost, a causal interpretation of classical population genetics, if successful, would settle the hotly debated issue of whether that theory is causally interpretable. But further, the algorithm I developed for generating population genetics equations also shows how such notions as group selection, fitness, drift, and even natural selection itself do not serve as critical inputs in the construction of classical population genetics models. The meanings of these terms are contested in part because they need not be given a definite role to play in generating classical population genetics models.
While many writers have aimed to generalize the theory of natural selection, I show how the theory is applicable to systems that are picked out and distinguished because of very general features of their causal structure, rather than their more narrow biological ones, and hence I show how the theory is deployable over more than just organisms bearing genetic variations. Even the various different sorts of groupings one finds in population genetics can be distinguished using causal language. Indeed, my explicit understanding of these last two aspects of selection theory has prepared me to demonstrate that inheritance is not a requirement for selection, as well as to show that population genetics models featuring diploid organisms are not instances of models of “group selection” that feature sub-groups of genes. Other corollaries of my work are an account of the causes that produce drift and a novel stance on why selection theory is a stochastic theory.
Finally, because selection theory must function as a principled theory, it must be possible to say explicitly under what conditions equations can be used to make inferences about system dynamics when the theory is applied to natural populations. Accordingly, I show how applications of classical population genetics equations do not hold ceteris paribus; rather, they can be coupled with a different proviso, one with a definite meaning that makes explicit what conditions must hold for the equations to function as tools of inference. The alternative to the vacuous proviso ceteris paribus that I offer for population genetics equations should be generalizable, with suitable modification, for use in the special sciences more generally. Lastly, my dissertation has implications for the relationship between models and laws, at least in the domain of classical population genetics, wherein laws are typically of narrow applicability and models are generalizations of them, the opposite of the pattern seen in physical theory.
Julie Zahle (2009)
Philosophy of the Social Sciences, Philosophy of Science, Philosophy of Mind
Currently at University of Copenhagen (assistant professor)
Dissertation:
Practices, Perception, and Normative States
Theories of practice are widespread within the humanities and the social sciences. They reflect the view that the study of, and theorizing about, social practices hold the key to a proper understanding of social life or aspects thereof. An important subset of theories of practice is ability theories of practice. These theories focus on the manner in which individuals draw on their abilities, skills, know-how, or practical knowledge when participating in social practices.
In this dissertation, I concentrate on ability theories of practice as advanced within the social sciences and the philosophy of the social sciences. Ability theorists within these two fields stress individuals’ ability to act appropriately in situations of social interaction. But how, more precisely, is this ability to be understood? The thesis I develop and defend provides a partial answer to this important question: In situations of social interaction, individuals’ ability to act appropriately sometimes depends on their exercise of the ability directly to perceive normative states specified as the appropriateness of actions.
In the first part of the dissertation, I introduce and motivate this thesis. I provide an overview of ability theories of practice and, against that background, present my thesis. Though the thesis is generally unexplored, influential ability theorists have toyed with it, or their theories invite an extension in this direction. For this reason, I argue, the thesis constitutes a natural way to develop their approach further.
In the second part of the dissertation, I develop and defend my thesis. First, I present a plausible way in which to make ontological sense of the claim that normative states are sometimes directly perceptible. Next, I offer an account of perception and argue that, by its lights, individuals sometimes have the ability directly to perceive normative states. Finally, I briefly show that individuals’ ability to act appropriately sometimes depends on their exercise of this ability directly to perceive normative states. From both a practical and a theoretical perspective, the development and defense of this thesis constitutes a valuable elaboration of the basic approach associated with ability theories of practice.
Zvi Biener (2007)
Metaphysics and Epistemology in the Early-Modern Period, History of Philosophy
Currently at Western Michigan University (assistant professor)
Dissertation:
The Unity and Structure of Knowledge: Subalternation, Demonstration, and the Geometrical
Manner in Scholastic-Aristotelianism and Descartes
The project of constructing a complete system of knowledge---a system capable of integrating
all that is and could possibly be known---was common to many early-modern philosophers and was
championed with particular alacrity by René Descartes. The inspiration for this project often
came from mathematics in general and from geometry in particular: Just as propositions were
ordered in a geometrical demonstration, the argument went, so should propositions be ordered
in an overall system of knowledge. Science, it was thought, had to proceed more geometrico.
In this dissertation, I offer a new interpretation of `science more geometrico' based on an extended
analysis of the explanatory and argumentative forms used in certain branches of geometry. These branches
were optics, astronomy, and mechanics: the so-called subalternate, subordinate, or mixed-mathematical
sciences. In Part I, I investigate the nature of the mixed-mathematical sciences according to Aristotle
and early-modern scholastic-Aristotelians. In Part II, the heart of the work, I analyze the metaphysics
and physics of Descartes' Principles of Philosophy (1644, 1647) in light of the findings of Part I and
an example from Galileo. I conclude by arguing that we must broaden our understanding of the early-modern
conception of `science more geometrico' to include exemplars taken from the mixed-mathematical sciences.
These render the concept more flexible than previously thought.
Brian Hepburn (2007)
History and Philosophy of Science, Philosophy of Physics, History of Science
Currently at the University of British Columbia (post-doc)
Dissertation:
Equilibrium and Explanation in 18th Century Mechanics
The received view of the Scientific Revolution is that it was completed with the publication of Isaac Newton's (1642-1727) Philosophiae Naturalis Principia Mathematica in 1687. The century following was relegated to working out the mathematical details of Newton's program and its expression in analytic form. I show that the mechanics of Leonhard Euler (1707-1783) and Joseph-Louis Lagrange (1736-1813) did not begin with Newton's Three Laws. They provided their own beginning principles and interpretations of the relation between mathematical description and nature. Functional relations among the quantified properties of bodies were interpreted as basic mechanical connections between those bodies. Equilibrium played an important role in explaining the behavior of physical systems understood mechanically. Some behavior was revealed to be an equilibrium condition; other behavior was understood as a variation from equilibrium. Implications for scientific explanation are then drawn from these historical considerations, specifically an alternative to reducing explanation to unification. Trying to cast mechanical explanations (of the kind considered here) as Kitcher-style argument schemata fails to distinguish legitimate from spurious explanations; consideration of the mechanical analogies lying behind the schemata is required.
Jackie Sullivan (2007)
Philosophy of Science, Philosophy of Neuroscience, Philosophy of Mind
Currently at the University of Alabama at Birmingham (assistant professor)
Dissertation:
Reliability and Validity of Experiment in the Neurobiology of Learning
and Memory
The concept of reliability has been defined traditionally by philosophers of science as a feature that an experiment has when it can be used to arrive at true descriptive or explanatory claims about phenomena. In contrast, philosophers of science typically take the concept of validity to correspond roughly to that of generalizability, which is defined as a feature that a descriptive or explanatory claim has when it is based on laboratory data but is applicable to phenomena beyond those effects under study in the laboratory. Philosophical accounts of experiment typically treat the reliability of scientific experiment and the validity of descriptive or explanatory claims independently. On my account of experiment, however, these two issues are intimately linked. I show by appeal to case studies from the contemporary
neurobiology of learning and memory that measures taken to guarantee the reliability of experiment often result in a decrease in the validity of those scientific claims that are made on the basis of such experiments and, furthermore, that strategies employed to increase validity often decrease reliability. Yet, since reliability and validity are both desirable goals of scientific experiments, and, on my account, competing aims, a tension ensues. I focus on two types of neurobiological experiments as case studies to illustrate this tension: (1) organism-level learning experiments and (2) synaptic-level plasticity experiments. I argue that the express commitment to the reliability of experimental processes in neurobiology has resulted in the invalidity of mechanistic claims about learning and plasticity made on the basis of data obtained from such experiments. The positive component of the dissertation consists in specific proposals that I offer as guidelines for resolving this tension in the context of experimental design.
Jim Tabery (2007)
Philosophy of Science, Philosophy of Biology, Bioethics, History of Biology
Currently at the University of Utah (assistant professor)
Dissertation:
Causation in the Nature-Nurture Debate: The Case of Genotype-Environment Interaction
In the dissertation I attempt to resolve an aspect of the perennial nature-nurture debate. Despite the widely endorsed “interactionist credo”, the nature-nurture debate remains a quagmire of epistemological and methodological disputes over causation, explanation, and the concepts employed therein. Consider a typical nature-nurture question: Why do some individuals develop a complex trait such as depression, while others do not? This question incorporates an etiological query about the causal mechanisms responsible for the individual development of depression; it also incorporates an etiological query about the causes of variation responsible for individual differences in the occurrence of depression. Scientists in the developmental research tradition of biology investigate the causal mechanisms responsible for the individual development of traits; scientists in the biometric research tradition of biology investigate the causes of variation responsible for individual differences in traits. So what is the relationship between causal mechanisms and causes of variation, between individual development and individual differences, and between the developmental and biometric traditions?
I answer this question by looking at disputes over genotype-environment interaction (or G×E). G×E refers to cases where different genotypes phenotypically respond differently to the same array of environments. Scientists in the developmental tradition argue that G×E is a developmental phenomenon fundamentally important for investigating individual development and its relation to variation. Scientists in the biometric tradition argue that G×E is simply a population-level, statistical measure that can generally be ignored or eliminated. In this way, an isolationist pluralism has emerged between the research traditions.
In contrast to this isolationist solution, I offer an integrative model. The developmental and biometric research traditions are united in their joint effort to elucidate what I call difference mechanisms. Difference mechanisms are regular causal mechanisms made up of difference-making variables that take different values in the natural world. On this model, individual differences are the effect of difference-makers in development that take different values in the natural world. And the difference-making variables in the regular causal mechanisms responsible for individual development simultaneously are the causes of variation when the difference-making variables naturally take different values.
I then use this general integrative framework to resolve the disputes over G×E. I first show that there have been two, competing concepts of G×E throughout the history of the nature-nurture debate: what I call the biometric concept (or G×EB) and what I call the developmental concept (or G×ED). On the integrative model, however, these concepts can also be related: G×E results from differences in unique, developmental combinations of genotype and environment when both variables are difference-makers in development that naturally take different values and the difference that each variable makes is itself dependent upon the difference made by the other variable; and this interdependence may be measured with population-level, statistical methodologies.
Ingo Brigandt (2006)
Philosophy of biology, philosophy of mind, philosophy of language
Currently at the University of Alberta (post-doc from 2006-2008, then assistant professor)
Dissertation:
A Theory of Conceptual Advance: Explaining Conceptual Change in Evolutionary,
Molecular, and Evolutionary Developmental Biology
The theory of concepts advanced in the dissertation aims at accounting
for a) how a concept makes successful practice possible, and b) how a
scientific concept can be subject to rational change in the course of
history. Traditional accounts in the philosophy of science have usually
studied concepts in terms only of their reference; their concern is to
establish a stability of reference in order to address the incommensurability
problem. My discussion, in contrast, suggests that each scientific concept
consists of three components of content: 1) reference, 2) inferential
role, and 3) the epistemic goal pursued with a concept’s use. I
argue that in the course of history a concept can change in any of these
three components, and that change in one component—including change
of reference—can be accounted for as being rational relative to
other components, in particular a concept’s epistemic goal.
This semantic framework is applied to two cases from the history of biology:
the homology concept as used in 19th and 20th century biology, and the
gene concept as used in different parts of the 20th century. The homology
case study argues that the advent of Darwinian evolutionary theory, despite
introducing a new definition of homology, did not bring about a new homology
concept (distinct from the pre-Darwinian concept) in the 19th century.
Nowadays, however, distinct homology concepts are used in systematics/evolutionary
biology, in evolutionary developmental biology, and in molecular biology.
The emergence of these different homology concepts is explained as occurring
in a rational fashion. The gene case study argues that conceptual progress
occurred with the transition from the classical to the molecular gene
concept, despite a change in reference. In the last two decades, change
occurred internal to the molecular gene concept, so that nowadays this
concept’s usage and reference varies from context to context. I
argue that this situation emerged rationally and that the current variation
in usage and reference is conducive to biological practice.
The dissertation uses ideas and methodological tools from the philosophy
of mind and language, the philosophy of science, the history of science,
and the psychology of concepts.
Francesca DiPoppa (2006)
History of early modern philosophy
Currently at Texas Tech University, Lubbock TX (assistant professor)
Dissertation:
"God acts through the laws of his nature alone": From the Nihil ex Nihilo
axiom to causation as expression in Spinoza's metaphysics.
One of the most important concepts in Spinoza's metaphysics is that of
causation. Much of the expansive scholarship on Spinoza, however, either
takes causation for granted, or ascribes to Spinoza a model of causation
that, for one reason or another, fails to account for specific instances
of causation, such as the concept of cause of itself (causa sui).
This work will offer a new interpretation of Spinoza's concept of causation.
Starting from the "nothing comes from nothing" axiom and its
consequences, the containment principle and the similarity principle (basically,
the idea that what is in the effect must have been contained in the cause,
and that the cause and the effect must have something in common) I will
argue that Spinoza adopts what I call the expression-containment model
of causation, a model that describes all causal interactions at the vertical
and horizontal level (including causa sui, or self-cause). The model adopts
the core notion of Neoplatonic emanationism, i.e. the idea that the effect
is a necessary outpouring of the cause; however, Spinoza famously rejects
transcendence and the possibility of created substances. God, the First
Cause, causes immanently: everything that is caused is caused in God,
as a mode of God.
Starting from a discussion of the problems that Spinoza found in Cartesian
philosophy, and of the Scholastic and Jewish positions on horizontal and
vertical causation, my dissertation will follow the development of Spinoza's
model of causation from his earliest work to his more mature Ethics. My
work will also examine the relationship between Spinoza's elaboration
of monism, the development of his model of causation, and his novel concept
of essence (which for Spinoza coincides with a thing's causal power).
Abel Franco (2006)
History of early modern philosophy
Currently at California State University, Northridge (assistant professor)
Dissertation:
Descartes' theory of passions
Descartes not only had a theory of passions, but one that deserves a place among
contemporary debates on emotions. The structure of this dissertation attempts to make
explicit the unity of that theory. The study of the passions by the physicien (who
not only studies matter and motion but also human nature) [Chapter 2] appears to be
the “foundations” (as he tells Chanut) of morals [Chapters 1 and 4] insofar as their
main function [Chapter 3] is to dispose us to act in ways which directly affect our
natural happiness. In other words, Descartes is in the Passions of the Soul (1649)
climbing the very tree of philosophy he presented two years earlier in the Preface
to the French Edition of the Principles of Philosophy: the trunk (in this case a section
of it: our nature) leads us to the highest of the three branches (morals) when we
study human passions. Human passions constitute the only function of the mind-body
union that can guide us in the pursuit of our (natural) happiness. They do this (1) by
informing the soul about the current state of perfection both of the body and, most
importantly, of the mind-body union; (2) by discriminating what is relevant in the
world regarding our perfection; and (3) by proposing (to the will) possible ways
of action (i.e. by disposing us to act). The virtuous (the generous) are those who
have achieved “contentment” not by impeding the arousal of their passions but by
living them according to reason, that is, by following freely the dispositions to act
(brought about by them) which can increase our perfection—i.e. the disposition to join
true goods and to avoid true evils. Regarding current debates on emotions [Chapter 5],
Descartes’ perceptual model not only provides a satisfactory answer to the major challenges
faced today both by feeling theories (intentionality) and judgment theories (feelings and
the passivity of emotions) but it can also help advance those debates by, on one hand,
bringing into them new or neglected ideas, and, on the other, providing a solid overall
framework to think about passions.
Doreen Fraser (2006)
Philosophy of physics, philosophy of science, history of science
Currently at Waterloo (assistant professor)
Dissertation:
Haag's theorem and the interpretation of quantum field theories with interactions
Quantum field theory (QFT) is the physical framework that integrates quantum mechanics and
the special theory of relativity; it is the basis of many of our best physical theories.
QFTs for interacting systems have yielded extraordinarily accurate predictions. Yet, in
spite of unquestionable empirical success, the treatment of interactions in QFT raises
serious issues for the foundations and interpretation of the theory. This dissertation takes
Haag’s theorem as a starting point for investigating these issues. It begins with a detailed
exposition and analysis of different versions of Haag’s theorem. The theorem is cast as a
reductio ad absurdum of canonical QFT prior to renormalization. It is possible to adopt different
strategies in response to this reductio: (1) renormalizing the canonical framework; (2) introducing
a volume (i.e., long-distance) cutoff into the canonical framework; or (3) abandoning another
assumption common to the canonical framework and Haag’s theorem, which is the approach adopted
by axiomatic and constructive field theorists. Haag’s theorem does not entail that it is impossible
to formulate a mathematically well-defined Hilbert space model for an interacting system on infinite,
continuous space. Furthermore, Haag’s theorem does not undermine the predictions of renormalized
canonical QFT; canonical QFT with cutoffs and existing mathematically rigorous models for
interactions are empirically equivalent to renormalized canonical QFT. The final two chapters
explore the consequences of Haag’s theorem for the interpretation of QFT with interactions. I
argue that no mathematically rigorous model of QFT on infinite, continuous space admits an
interpretation in terms of quanta (i.e., quantum particles). Furthermore, I contend that extant
mathematically rigorous models for physically unrealistic interactions serve as a better guide
to the ontology of QFT than either of the other two formulations of QFT. Consequently, according
to QFT, quanta do not belong in our ontology of fundamental entities.
Greg Frost-Arnold (2006)
History of analytic philosophy, philosophical logic, philosophy of science
Currently at the University of Nevada, Las Vegas (assistant professor)
blog: Obscure and Confused Ideas
Dissertation:
Carnap, Tarski, and Quine's Year Together: Logic, Science and Mathematics
During the academic year 1940-1941, several giants of analytic philosophy congregated at Harvard:
Russell, Tarski, Carnap, Quine, Hempel, and Goodman were all in residence. This group held both regular public meetings and private conversations. Carnap took detailed dictation notes that give us an extensive record of the discussions at Harvard that year. Surprisingly, the most prominent
question in these discussions is: if the number of physical items in the universe is finite (or possibly
finite), what form should the logic and mathematics in science take? This question is closely connected
to an abiding philosophical problem, one that is of central philosophical importance to the logical
empiricists: what is the relationship between the logico-mathematical realm and the natural, material
realm? This problem continues to be central to analytic philosophy of logic, mathematics, and science.
My dissertation focuses on three issues connected with this problem that dominate the Harvard discussions:
nominalism, the unity of science, and analyticity. I both reconstruct the lines of argument represented in the Harvard discussions and relate them to contemporary treatments of these issues.
Francis Longworth (2006)
Philosophy of science, metaphysics
Currently at Ohio University (assistant professor)
Dissertation:
Causation, Counterfactual Dependence and Pluralism
The principal concern of this dissertation is whether or not a conceptual analysis of our ordinary
concept of causation can be provided. In chapters two and three I show that two of the most promising
univocal accounts (the counterfactual theories of Hitchcock and Yablo) are subject to numerous
counterexamples. In chapter four, I show that Hall's pluralistic theory of causation, according to
which there are two concepts of causation, also faces a number of counterexamples. In chapter five, I
sketch an alternative, broadly pluralistic theory of token causation, according to which causation is a
cluster concept with a prototypical structure. This theory is able to evade the counterexamples that beset
other theories and, in addition, offers an explanation of interesting features of the concept, such as the existence of borderline cases and the fact that some instances of causation seem to be better examples of the concept than others.
David Miller (2006)
History of early modern philosophy, history of science
Currently at Yale (Andrew W. Mellon Postdoctoral Fellow, Whitney Humanities Center)
Dissertation:
Representations of Space in Seventeenth Century Physics
The changing understanding of the universe that characterized the birth of modern science included a fundamental shift in the prevailing representation of space - the presupposed conceptual structure that allows one to intelligibly describe the spatial properties of physical phenomena.
At the beginning of the seventeenth century, the prevailing representation of space was spherical. Natural philosophers first assumed a spatial center, then specified meanings with reference to that center. Directions, for example, were described in relation to the center, and locations were specified by distance from the center. Through a series of attempts to solve problems first raised by the work of Copernicus, this Aristotelian, spherical framework was replaced by a rectilinear representation of space.
By the end of the seventeenth century, descriptions were understood by reference to linear orientations, as parallel or oblique to a presupposed line, and locations were identified without reference to a privileged central point. This move to rectilinear representations of space enabled Gilbert, Kepler, Galileo, Descartes, and Newton to describe and explain the behavior of the physical world in the novel ways for which these men are justly famous, including their theories of gravitational attraction and inertia. In other words, the shift towards a rectilinear representation of space was essential to the fundamental reconception of the universe that gave rise to both modern physical theory and, at the same time, the linear way of experiencing the world essential to modern science.
Christian Wüthrich (2006)
Philosophy of physics, philosophy of science, metaphysics
Currently at UC-San Diego (assistant professor)
Dissertation:
Approaching the Planck Scale from a Generally Relativistic Point of View: A
Philosophical Appraisal of Loop Quantum Gravity
My dissertation studies the foundations of loop quantum gravity, a candidate for a quantum theory of gravity based on classical general relativity. After an evaluation of the motivations for seeking a quantum theory of gravity, I embark upon an investigation of how loop quantum gravity codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level, while its retrieval at the quantum level is an open question. Next, I scrutinize pronouncements claiming that the "big-bang" singularity of classical cosmological models vanishes in quantum cosmology based on loop quantum gravity and conclude that these claims must be severely qualified. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
Erik Angner (2005)
History and Philosophy of Social Science, Social and Political Philosophy
Currently at the University of Alabama at Birmingham
Dissertation:
Subjective Measures of Well-Being: A philosophical examination
Over the last couple of decades, as part of the rise of positive psychology,
psychologists have given increasing amounts of attention to so-called
subjective measures of well-being. These measures, which are supposed
to represent the well-being of individuals and groups, are often presented
as alternatives to more traditional economic ones for purposes of the
articulation, implementation and evaluation of public policy. Unlike economic
measures, which are typically based on data about income, market transactions
and the like, subjective measures are based on answers to questions like:
“Taking things all together, how would you say things are these days - would you say you’re very happy, pretty happy, or not too happy these days?” The aim of this dissertation is to explore issues in
the philosophical foundations of subjective measures of well-being, with
special emphasis on the manner in which the philosophical foundations
of subjective measures differ from those of traditional economic measures.
Moreover, the goal is to examine some arguments for and against these
measures, and, in particular, arguments that purport to demonstrate the
superiority of economic measures for purposes of public policy. My main
thesis is that subjective measures of well-being cannot be shown to be inferior to economic measures quite as easily as some have
suggested, but that they nevertheless are associated with serious problems,
and that questions about the relative advantage of subjective and economic
measures for purposes of public policy will depend on some fundamentally
philosophical judgments, e.g. about the nature of well-being and the legitimate
goals for public policy.
Megan Delehanty (2005)
Currently at the University of Calgary
Dissertation:
Empiricism and the Epistemic Status of Imaging Technologies
The starting point for this project was the question of how to understand the epistemic status of
mathematized imaging technologies such as positron emission tomography (PET) and confocal microscopy.
These sorts of instruments play an increasingly important role in virtually all areas of biology and
medicine. Some of these technologies have been widely celebrated as having revolutionized various
fields of study, while others have been the target of substantial criticism. Thus, it is essential
that we be able to assess these sorts of technologies as methods of producing evidence. They differ
from one another in many respects, but one feature they all have in common is the use of multiple
layers of statistical and mathematical processing that are essential to data production. This feature
alone means that they do not fit neatly into any standard empiricist account of evidence. Yet this
failure to be accommodated by philosophical accounts of good evidence does not indicate a general
inadequacy on their part since, by many measures, they very often produce very high quality evidence.
In order to understand how they can do so, we must look more closely at old philosophical questions
concerning the role of experience and observation in acquiring knowledge about the external world.
Doing so leads us to a new, grounded version of empiricism. After distinguishing between a weaker and
a stronger, anthropocentric version of empiricism, I argue that most contemporary accounts of observation
are what I call benchmark strategies that, implicitly or explicitly, rely on the stronger version according
to which human sense experience holds a place of unique privilege. They attempt to extend the bounds of
observation – and the epistemic privilege accorded to it – by establishing some type of relevant
similarity to the benchmark of human perception. These accounts fail because they are unable to establish
an epistemically motivated account of what relevant similarity consists of. The last best chance for any
benchmark approach, and, indeed, for anthropocentric empiricism, is to supplement a benchmark strategy
with a grounding strategy. Toward this end, I examine the Grounded Benchmark Criterion which defines
relevant similarity to human perception in terms of the reliability-making features of human perception.
This account, too, must fail due to our inability to specify these features given the current state of
understanding of the human visual system. However, this failure reveals that it is reliability alone
that is epistemically relevant, not any other sort of similarity to human perception. Current accounts
of reliability suffer from a number of difficulties, so I develop a novel account of reliability that
is based on the concept of granularity. My account of reliability in terms of a granularity match both
provides the means to refine the weaker version of empiricism and allows us to establish when and why
imaging technologies are reliable. Finally, I use this account of granularity in examining the importance
of the fact that the output of imaging technologies usually is images.
Alan Love (2005)
Philosophy of Biology, Philosophy of Science, Biology
Currently at the University of Minnesota (assistant professor)
Dissertation:
Explaining Evolutionary Innovation and Novelty: A Historical and Philosophical
Study of Biological Concepts
Explaining evolutionary novelties (such as feathers or neural crest cells)
is a central item on the research agenda of evolutionary developmental
biology (Evo-devo). Proponents of Evo-devo have claimed that the origin
of innovation and novelty constitutes a distinct research problem, ignored
by evolutionary theory during the latter half of the 20th century, and
that Evo-devo as a synthesis of biological disciplines is in a unique
position to address this problem. In order to answer historical and philosophical
questions attending these claims, two philosophical tools were developed.
The first, conceptual clusters, captures the joint deployment of concepts
in the offering of scientific explanations and allows for a novel definition
of conceptual change. The second, problem agendas, captures the multifaceted
nature of explanatory domains in biological science and their diachronic
stability. The value of problem agendas as an analytical unit is illustrated
through the examples of avian feather and flight origination. Historical
research shows that explanations of innovation and novelty were not ignored.
They were situated in disciplines such as comparative embryology, morphology,
and paleontology (exemplified in the research of N.J. Berrill, D.D. Davis,
and W.K. Gregory), which were overlooked because of a historiography emphasizing
the relations between genetics and experimental embryology. This identified
the origin of Evo-devo tools (developmental genetics) but missed the source
of its problem agenda. The structure of developmental genetic explanations
of innovations and novelties is compared and contrasted with those of
other disciplinary approaches, past and present. Applying the tool of
conceptual clusters to these explanations reveals a unique form of conceptual
change over the past five decades: a change in the causal and evidential
concepts appealed to in explanations. Specification of the criteria of
explanatory adequacy for the problem agenda of innovation and novelty
indicates that Evo-devo qua disciplinary synthesis requires more attention
to the construction of integrated explanations from its constituent disciplines
besides developmental genetics. A model for explanations integrating multiple
disciplinary contributions is provided. The phylogenetic approach to philosophy
of science utilized in this study is relevant to philosophical studies
of other sciences and meets numerous criteria of adequacy for analyses
of conceptual change.
Andrea Scarantino (2005)
Currently at Georgia State University
Dissertation:
Explicating Emotions
In the course of their long intellectual history, emotions have been identified
with items as diverse as perceptions of bodily changes (feeling tradition),
judgments (cognitivist tradition), behavioral predispositions (behaviorist
tradition), biologically based solutions to fundamental life tasks (evolutionary
tradition), and culturally specific social artifacts (social constructionist
tradition). The first objective of my work is to put some order in the
mare magnum of theories of emotions. I taxonomize them into families and
explore the historical origin and current credentials of the arguments
and intuitions supporting them. I then evaluate the methodology of past
and present emotion theory, defending a bleak conclusion: a great many
emotion theorists ask "What is an emotion?" without a clear
understanding of what counts as getting the answer right. I argue that
there are two ways of getting the answer right. One is to capture the
conditions of application of the folk term "emotion" in ordinary
language (Folk Emotion Project), and the other is to formulate a fruitful
explication of it (Explicating Emotion Project). Once we get clear on
the desiderata of these two projects, we realize that several long-running
debates in emotion theory are motivated by methodological confusions.
The constructive part of my work is devoted to formulating a new explication
of emotion suitable for the theoretical purposes of scientific psychology.
At the heart of the Urgency Management System (UMS) theory of emotions
I propose is the idea that an "umotion" is a special type of
superordinate system which instantiates and manages an urgent action tendency
by coordinating the operation of a cluster of cognitive, perceptual and
motoric subsystems. Crucially, such a superordinate system has a proper
function by virtue of which it acquires a special kind of intentionality
I call pragmatic. I argue that "umotion" is sufficiently similar
in use to "emotion" to count as explicating it, it has precise
rules of application, and it accommodates a number of central and widely
shared intuitions about the emotions. My hope is that future emotion research
will demonstrate the heuristic fruitfulness of the "umotion"
concept for the sciences of mind.
Armond Duwell (2004)
Philosophy of Physics, Information Theory
Currently at the University of Montana, Missoula (assistant professor)
Dissertation:
Foundations of Quantum Information Theory and Quantum Computation Theory.
Physicists and philosophers have expressed great hope that quantum information
theory will revolutionize our understanding of quantum theory. The first
part of my dissertation is devoted to clarifying and criticizing various
notions of quantum information, particularly those attributable to Jozsa
and also Deutsch and Hayden. My work suggests that no new concept of information
is needed and that Shannon information theory works perfectly well for
quantum mechanical systems.
The second part of my dissertation is devoted to explaining why quantum
computers are faster than conventional computers for some computational
tasks. The current best explanation of quantum computational speedup is
that quantum computers can compute many values of a function in a single
computational step, whereas conventional computers cannot. Further, it
has been suggested that the Many Worlds Interpretation of quantum theory
is the only interpretation that can underwrite such a claim. In my dissertation
I clarify the explanandum and articulate possible explananda for the explanatory
task at hand. I offer an explanation and I argue that no appeal needs
to be made to any particular interpretation of quantum theory to explain
quantum computational speedup.
Uljana Feest (2003)
Cognitive and Behavioral Sciences
Currently research scholar at the Max Planck Institute for the History
of Science
Dissertation:
Operationism, Experimentation, and Concept Formation
I provide a historical and philosophical analysis of the doctrine
of operationism, which emerged in American psychology in the 1930s. While
operationism is frequently characterized as a semantic thesis (which demands
that concepts be defined by means of measurement operations), I argue
that it is better understood as a methodological strategy that guides experimental investigation. I present three historical case studies
of the work of early proponents of operationism and show that all of them
were impressed by behaviorist critiques of traditional mentalism and introspectivism,
while still wanting to investigate some of the phenomena of traditional
psychology (consciousness, purpose, motivation). I show that when these
psychologists used “operational definitions”, they posited the existence
of particular psychological phenomena and treated certain experimental
data – by stipulation – as indicative of those phenomena. However, they
viewed these stipulative empirical definitions as neither a priori true,
nor as unrevisable. While such stipulative definitions have the function
of getting empirical research about a phenomenon “off the ground”, they
clearly don't provide sufficient evidence for the existence of the phenomenon.
In the philosophical part of my dissertation, I raise the epistemological
question of what it would take to provide such evidence, relating this
question to recent debates in the philosophy of experimentation. I argue
that evidence for the existence of a given phenomenon is produced as part
of testing descriptive hypotheses about the phenomenon. Given how many
background assumptions have to be made in order to test a hypothesis about
a phenomenon, I raise the question of whether claims about the existence
of psychological phenomena are underdetermined by data. I argue that they
are not. Lastly, I present an analysis of the scientific notion of an
experimental artifact, and introduce the notion of an “artifactual belief”,
i.e. an experimentally well confirmed belief that later turns out to be
false, when one or more of the background assumptions (relative to which
the belief was confirmed) turn out to be false.
Gualtiero Piccinini (2003)
Philosophy of Mind
Currently at the University of Missouri–St. Louis
Blog: Brains: On Mind and Related Matter
Dissertation:
Computations and Computers in the Sciences of Mind and Brain
Computationalism says that brains are computing mechanisms, that
is, mechanisms that perform computations. At present, there is no consensus
on how to formulate computationalism precisely or adjudicate the dispute
between computationalism and its foes, or between different versions of
computationalism. An important reason for the current impasse is the lack
of a satisfactory philosophical account of computing mechanisms. The main
goal of this dissertation is to offer such an account. I also believe
that the history of computationalism sheds light on the current debate.
By tracing different versions of computationalism to their common historical
origin, we can see how the current divisions originated and understand
their motivation. Reconstructing debates over computationalism in the
context of their own intellectual history can contribute to philosophical
progress on the relation between brains and computing mechanisms and help
determine how brains and computing mechanisms are alike, and how they
differ. Accordingly, my dissertation is divided into a historical part,
which traces the early history of computationalism up to 1946, and a philosophical
part, which offers an account of computing mechanisms.
The two main ideas developed in this dissertation are that (1) computational
states are to be individuated by functional properties rather than semantic
properties, and (2) the relevant functional properties are specified by
an appropriate functional analysis. The resulting account of computing
mechanisms, which I call the functional account of computing mechanisms,
can be used to individuate computing mechanisms and the functions they
compute. I use the functional account of computing mechanisms to taxonomize
computing mechanisms based on their different computing power, and I use
this taxonomy of computing mechanisms to taxonomize different versions
of computationalism based on the functional properties that they ascribe
to brains. By doing so, I begin to tease out empirically testable statements
about the functional organization of the brain that different versions
of computationalism are committed to. I submit that when computationalism
is reformulated in the more explicit and precise way I propose, the disputes
about computationalism can be adjudicated on the grounds of empirical
evidence from neuroscience.
Wendy Parker (2003)
Modeling and Simulation, Science and Public Policy, Environmental Philosophy
Currently at Ohio University (assistant professor)
Dissertation:
Computer Modeling in Climate Science: Experiment, Explanation, Pluralism
Computer simulation modeling is an important part of contemporary scientific practice but has not yet received much attention from philosophers. The present project helps to fill this lacuna in the philosophical literature by addressing three questions that arise in the context of computer simulation of Earth's climate. (1) Computer simulation experimentation commonly is viewed as a suspect methodology, in contrast to the trusted mainstay of material experimentation. Are the results of computer simulation experiments somehow deeply problematic in ways that the results of material experiments are not? I argue against categorical skepticism toward the results of computer simulation experiments by revealing important parallels in the epistemologies of material and computer simulation experimentation. (2) It has often been remarked that simple computer simulation models - but not complex ones - contribute substantially to our understanding of the atmosphere and climate system. Is this view of the relative contribution of simple and complex models tenable? I show that both simple and complex climate models can promote scientific understanding and argue that the apparent contribution of simple models depends upon whether a causal or deductive account of scientific understanding is adopted. (3) When two incompatible scientific theories are under consideration, they typically are viewed as competitors, and we seek evidence that refutes at least one of the theories. In the study of climate change, however, logically incompatible computer simulation models are accepted as complementary resources for investigating future climate. How can we make sense of this use of incompatible models? I show that a collection of incompatible climate models persists in part because of difficulties faced in evaluating and comparing climate models. I then discuss the rationale for using these incompatible models together and argue that this climate model pluralism has both competitive and integrative components.
Chris Smeenk (2002)
Philosophy of Physics; Early Modern Philosophy
Currently at UCLA
Dissertation:
Approaching the Absolute Zero of Time: Theory Development in Early Universe Cosmology
This dissertation gives an original account of the historical
development of modern cosmology along with a philosophical assessment
of related methodological and foundational issues. After briefly
reviewing the groundbreaking work by Einstein and others, I turn to
the development of early universe cosmology following the discovery of
the microwave background radiation in 1965. This discovery encouraged
consolidation and refinement of the big bang model, but cosmologists
also noted that cosmological models could accommodate observations only
at the cost of several "unnatural" assumptions regarding the initial
state. I describe various attempts to eliminate initial conditions in
the late 60s and early 70s, leading up to the idea that came to
dominate the field: inflationary cosmology. I discuss the pre-history
of inflationary cosmology and the early development of the idea,
including the account of structure formation and the introduction of
the "inflaton" field. The second part of my thesis focuses on
methodological issues in cosmology, opening with a discussion of three
principles and their role in cosmology: the cosmological principle,
indifference principle, and anthropic principle. I assess appeals to
explanatory adequacy as grounds for theory choice in cosmology, and
close with a discussion of confirmation theory and the issue of
novelty in relation to cosmological theories.
Daniel Steel (2002)
Causality and Confirmation; the Biological and Social Sciences
Currently at Michigan State
My experience as a graduate student in the Pitt HPS department was overwhelmingly positive (though not without a few bumps along the road). Most of all, I enjoyed being in a context in which one is encouraged to study philosophy of science in a manner that integrates the topic with interests outside of its traditional domain—all the while being guided by some of the leading lights in the field. In my case, this generally meant thinking about the venerable old philosophical issues of causality and evidence in anthropology and biology. My dissertation, Mechanisms and Interfering Factors: Dealing with Heterogeneity in the Biological and Social Sciences, directed by Sandra Mitchell, touches upon these themes. While at Pitt, I have had articles accepted for publication at peer reviewed journals on the topics of Bayesian confirmation, unification and causal theories of explanation (in the context of anthropology), indigenous warfare in Amazonia, and the principle of the common cause. Currently, I am working on preparing segments of my dissertation to be submitted for publication and upon extending some of the ideas contained therein, particularly to the issue of reduction. On graduation I took up a tenure track position at Michigan State University.
Dissertation:
Mechanisms and Interfering Factors: Dealing with Heterogeneity in the
Biological and Social Sciences
The biological and social sciences both deal with populations
that are heterogeneous with regard to important causes of interest, in
the sense that the same cause often exerts very different effects upon
distinct members of the population. For instance, welfare-to-work programs
are likely to have different effects on the economic prospects of trainees
depending on such variables as education, prior work experience, and so
forth. Moreover, it is rarely the case in biology or social science that
all such complicating variables are known and can be measured. In such
circumstances, generalizations about the effect of a factor in a given
population average over these differences, and hence take on a probabilistic
character. Consequently, a causal generalization that holds with respect
to a heterogeneous population as a whole may not hold for a given sub-population,
a fact which raises a variety of difficulties for explanation and prediction.
The overarching theme of the dissertation is that knowing how a cause
produces its effect is the key to knowing when a particular causal relationship
holds and when it does not. More specifically, the proposal is the following.
Suppose that X is the cause of Y in the population P. Then there is a
mechanism, or mechanisms, present among at least some of the members of
P through which X influences Y. So if we know the mechanism and the kinds
of things that can interfere with it, then we are in a much better position
to say when the causal generalization will hold and when it will not.
This intuitive idea has been endorsed by several philosophers; however,
what has been lacking is a systematic exploration of the proposal and
its consequences. That is what I aim to provide. The approach to the heterogeneity
problem is developed in the context of an example drawn from biomedical
science, namely, research into the causal mechanism by which HIV attacks
the human immune system. Moreover, I argue that my approach to the problem
of heterogeneity sheds new light on some familiar philosophical issues
that are relevant to the biological and social sciences, namely, ceteris
paribus laws and methodological holism versus methodological individualism.
Chris Martin (2001)
Philosophy of Physics; Gauge Theories
Currently at Indiana University, Bloomington
I came to Pitt HPS in 1994 with an undergraduate degree in physics and a minor in philosophy of science from UC Irvine. While at Pitt, I pursued interests in both the history and the philosophy of physics, as well as in traditional philosophy (I received an M.A. in Philosophy in 2000). I also collaborated on a multi-volume work on the history of gravitation theory, a project centered at the Max Planck Institute for the History of Science in Berlin, Germany. My dissertation (John Earman and John Norton, advisors) centered on the role of gauge symmetry principles in modern physics. In 2002, I became Assistant Professor of History and Philosophy of Science at Indiana University. Since graduating, I have published articles taken from material in my dissertation, and I continue to work on issues surrounding the role of gauge symmetry in fundamental physics.
Dissertation: Gauging Gauge: Remarks on the Conceptual Foundations of Gauge Symmetry
Of all the concepts of modern physics, few are surrounded by the sort of powerful, sometimes mysterious, and often awe-inspiring rhetoric that surrounds the concept of local gauge symmetry. The common understanding today is that all fundamental interactions in nature are described by so-called gauge theories. These theories, far from being just any sort of physical theory, are taken to result from the strict dictates of principles of local gauge symmetry: gauge symmetry principles. The success – experimental, theoretical and otherwise – of theories based on local symmetry principles has given rise to the received view of local symmetry principles as deeply fundamental, as literally “dictating” or “necessitating” the very shape of fundamental physics. The current work seeks to make some headway towards elucidating this view by considering the general issue of the physical content of local symmetry principles in their historical and theoretical contexts.
There are two parts to the dissertation: a historical part and a more “philosophical” part. In the first, historical part, I provide a brief genealogy of gauge theories, looking at some of the seminal works in the birth and development of gauge theories. My chief claim here is about what one does not find. Despite the modern rhetoric, the history of gauge field theories does not evidence loaded arguments from (a priori) local symmetry principles, or even the need to ascribe any deep physical significance to these principles. Rather, the history shows that the ascendancy of gauge field theories rests quite squarely on the heuristic value of local gauge symmetry principles.
In the philosophical component of the dissertation I turn to an analysis of the gauge argument, the canonical means of cashing out the physical content of gauge symmetry principles. I warn against a (common) literal reading of the argument. As I discuss, the argument must be afforded a fairly heuristic (even if historically based) reading. Claims to the effect that the argument reflects the “logic of nature” must, for many reasons that I discuss, be taken with a grain of salt.
Finally, I highlight how the “received view” of gauge symmetry – which takes it that gauge symmetry transformations are merely non-physical, formal changes of description – gives rise to a tension between the “profundity of gauge symmetry” and “the redundancy of gauge symmetry”. I consider various ways one might address this tension. I conclude that one is hard pressed to do any better than a “minimalist view” which takes it that the physical import of gauge symmetry lies in its historically based heuristic utility. While there are less minimalist views of the physical content to be ascribed to gauge symmetry principles, it is clear that neither the history nor the physics obliges us to make such ascriptions.
|
|
Andrew Backe (2000)
Philosophy of Mind; American Pragmatism
|
|
Dissertation: The divided psychology of John Dewey
This dissertation examines the extent to which John Dewey's psychology was a form of behaviorism, and, in doing so, considers how metaphysical commitments influenced psychological theories at the turn of the century. In his 1916 Essays in Experimental Logic, Dewey described his psychology as a science not of states of consciousness, but of behavior. Specifically, Dewey argued that conscious states can be assimilated to modes of behavior that help the individual adapt to a situation of conflict. Hence, the role of psychology, Dewey argued, is to provide a natural history of the conditions under which a particular behavioral mode emerges. Based on an analysis of a number of Dewey's major works written during the period of 1884 to 1916, I claim that there is an underlying metaphysical intuition in Dewey's views that prevents a behavioristic interpretation of his psychology. This intuition, I argue, stems from Dewey's absolute idealist philosophy of the mid-1880s. The intuition raises the concern that, if psychologists permit a transition from one psychological state to another to be described in terms of a causal succession of discrete events, then there is no way that the transition can be held together in a relational complex. As applied to psychology by Dewey, the intuition rejected treating any psychological phenomenon as constituted of separate existences, regardless of whether the phenomenon is defined in terms of conscious or behavioral events. Instead, the intuition presupposed that psychological events are unified in a special kind of relation in which events merge and are, in a mystical sense, identical. I maintain that Dewey's intuition regarding psychological causation served as the basis for his concept of coordination, which Dewey set out in his criticism of the reflex arc concept in the context of the Baldwin-Titchener reaction-time controversy. According to my account, Dewey's coordination concept was at odds with the behaviorists' unit of analysis, which explicitly divided any psychological phenomenon into separate existences of stimulus and response. I consider the broader implications of Dewey's metaphysical intuition through a discussion of different types of causal explanation that emerged in psychology in the early twentieth century.
|
|
Benoit Desjardins (1999)
Causality; Statistical algorithms
Currently Director of the Cardiovascular MR/CT Research Laboratory, University of Michigan
|
|
Dissertation:
On the theoretical limits to reliable causal inference
One of the most central problems in scientific research is the search for explanations of some aspect of nature for which empirical data are available. One seeks to identify the causal processes explaining the data, in the form of a model of the aspect of nature under study. Although traditional statistical approaches are excellent for finding statistical dependencies in a body of empirical data, they prove inadequate at finding the causal structure in the data. New graphical algorithmic approaches have been proposed to automatically discover the causal structure in the data. Based on strong connections between graph-theoretic properties and statistical aspects of causal influences, fundamental assumptions about the data can be used to infer a graphical structure, which is used to construct models describing the exact causal relations in the data. If the data contain correlated errors, latent variables must be introduced to explain the causal structure in the data. There is usually a large set of equivalent causal models with latent variables, representing competing alternatives, which entail similar statistical dependency relations. The central problem in this dissertation is the study of the theoretical limits to reliable causal inference. Given a body of statistical distribution information on a finite set of variables, we seek to characterize the set of all causal models satisfying this distribution. Current approaches only characterize the set of models which satisfy limited properties of this distribution, notably its relations of probabilistic conditional independence. Such models are semi-Markov equivalent. Some of these models might, however, not satisfy other properties of the distribution, which cannot be expressed as simple conditional independence relations on marginal distributions. We seek to go beyond semi-Markov equivalence. To do so, we first formally characterize the variation in graphical structure within a semi-Markov equivalence class of models. We then determine possible consequences of this variation as either experimentally testable features of models, or as testable features of marginal distributions.
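As an illustration of the style of inference at issue, the following is a minimal sketch, in Python, of the skeleton-recovery step of a PC-style constraint-based search: an edge between two variables is deleted whenever the variables test as independent conditional on some subset of the remaining variables. It is a generic reconstruction under simplifying assumptions (linear-Gaussian data, no latent variables, a Fisher-z partial-correlation test), not the algorithms studied in the dissertation.

# Minimal illustrative sketch of a PC-style skeleton search.
# Assumes linear-Gaussian data and no latent variables; not the dissertation's own algorithm.
import itertools
import math
import numpy as np

def independent(data, i, j, cond, alpha=0.05):
    # Fisher-z test of the partial correlation between columns i and j given cond.
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)
    r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = max(min(r, 0.999999), -0.999999)
    n = data.shape[0]
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - len(cond) - 3)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p > alpha

def skeleton(data, alpha=0.05):
    # Start from a complete undirected graph; delete every edge whose endpoints
    # are conditionally independent given some subset of the other variables.
    d = data.shape[1]
    edges = {(i, j) for i in range(d) for j in range(i + 1, d)}
    for i, j in sorted(edges):
        rest = [k for k in range(d) if k not in (i, j)]
        for size in range(len(rest) + 1):
            if any(independent(data, i, j, c, alpha)
                   for c in itertools.combinations(rest, size)):
                edges.discard((i, j))
                break
    return edges

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + rng.normal(size=5000)   # x influences y
z = y + rng.normal(size=5000)   # y influences z, so x and z are independent given y
print(skeleton(np.column_stack([x, y, z])))   # expected skeleton: {(0, 1), (1, 2)}

On this synthetic three-variable chain the search would, with high probability, retain only the two true edges; it is precisely when correlated errors or latent confounders are added that the semi-Markov equivalence issues discussed above arise.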
|
|
Elizabeth Paris (1999)
History of Particle Physics
|
|
Dissertation: Ringing in the new physics: The politics and technology of electron colliders in the United States, 1956-1972
The “November Revolution” of 1974 and the experiments that
followed consolidated the place of the Standard Model in modern particle
physics. Much of the evidence on which these conclusions depended was
generated by a new type of tool: colliding beam storage rings, which had
been considered physically unfeasible twenty years earlier. In 1956 a
young experimentalist named Gerry O'Neill dedicated himself to demonstrating
that such an apparatus could do useful physics. The storage ring movement
encountered numerous obstacles before generating one of the standard machines
for high energy research. In fact, it wasn't until 1970 that the U.S.
finally broke ground on its first electron-positron collider. Drawing
extensively on archival sources and supplementing them with the personal
accounts of many of the individuals who took part, Ringing in
the New Physics examines this instance of post-World War II techno-science
and the new social, political and scientific tensions that characterize
it. The motivations are twofold: first, that the chronicle of storage
rings may take its place beside mathematical group theory, computer simulations,
magnetic spark chambers, and the like as an important contributor to a
view of matter and energy which has been the dominant model for the last
twenty-five years; and second, that the account provides a case study for
the integration of the personal, professional, institutional, and material
worlds when examining an episode in the history or sociology of twentieth
century science. The story behind the technological development of storage
rings holds fascinating insights into the relationship between theory
and experiment, collaboration and competition in the physics community,
the way scientists obtain funding and their responsibilities to it, and
the very nature of what constitutes successful science in the post-World
War II era.
|
|
Tom Seppalainen (1999)
Visual Perception and Cognition; Metaphysics
Currently at Portland State University
|
|
Dissertation:
The problematic nature of experiments in color science
The so-called opponent process theory of color vision has played a prominent
role in recent philosophical debates on color. Several philosophers have
argued that this theory can be used to reduce color experiences to properties
of neural cells. I will refute this argument by displaying some of the
problematic features of the experimental inference present in color science.
Along the way I will explicate some of the methodological strategies employed
by vision scientists to accomplish integration across the mind-body boundary.
At worst, the integration follows the looks-like methodology where effects
resemble their causes. The modern textbook model for human color vision
consists of three hypothetical color channels, red-green, blue-yellow,
and white-black. These are assumed to be directly responsible for their
respective color sensations. The hue channels are opponent in that light
stimulation can cause only one of the respective hue sensations. The channels
are also seen as consisting of opponent neural cells. The cells and the
channels are claimed to have similar response properties. In my work,
I reconstruct some of the critical experiments underwriting the textbook
model. The centerpiece is an analysis of Hurvich and Jameson's color cancellation
experiment. I demonstrate that the experiment cannot rule out the contradictory
alternative hypothesis for opponent channels without making question-begging
assumptions. In order to accomplish this, I clarify the theorizing of
Hurvich and Jameson's predecessor, Ewald Hering, as well as the classic
trichromatic theory. I demonstrate that currently no converging evidence
from neurophysiology exists for the opponent process theory. I show that
the results from De Valois' studies of single cells are theory-laden.
The classification into cell types assumes the textbook model. Since the
textbook model is an artifact of experimental pseudo-convergence, claims
for both a reductive and a causal explanation of color experiences are
premature.
|
|
Jonathan Bain (1998)
Philosophy of Spacetime, Scientific Realism, Philosophy of Quantum Field Theory.
Currently at Brooklyn Polytechnic Institute
|
|
Dissertation:
Representations of spacetime: Formalism and ontological commitment
This dissertation consists of two parts. The first is on the
relation between formalism and ontological commitment in the context of
theories of spacetime, and the second is on scientific realism. The first
part begins with a look at how the substantivalist/relationist debate
over the ontological status of spacetime has been influenced by a particular
mathematical formalism, that of tensor analysis on differential manifolds
(TADM). This formalism has motivated the substantivalist position known
as manifold substantivalism. Chapter 1 focuses on the hole argument which
maintains that manifold substantivalism is incompatible with determinism.
I claim that the realist motivations underlying manifold substantivalism
can be upheld, and the hole argument avoided, by adopting structural realism
with respect to spacetime. In this context, this is the claim that it
is the structure that spacetime points enter into that warrants belief
and not the points themselves. In Chapter 2, an elimination principle
is defined by means of which a distinction can be made between surplus
structure and essential structure, relative to two distinct mathematical
formulations of a theory and some prior ontological commitments.
This principle is then used to demonstrate that manifold points may be
considered surplus structure in the formulation of field theories. This
suggests that, if we are disposed to read field theories literally, then,
at most, it should be the essential structure common to all alternative
formulations of such theories that should be taken literally. I also investigate
how the adoption of alternative formalisms informs other issues in the
philosophy of spacetime. Chapter 3 offers a realist position which takes
a semantic moral from the preceding investigation and an epistemic moral
from work done on reliability. The semantic moral advises us to read only
the essential structure of our theories literally. The epistemic moral
shows us that such structure is robust under theory change, given an adequate
reliabilist notion of epistemic warrant. I call the realist position that
subscribes to these morals structural realism and attempt to demonstrate
that it is immune to the semantic and epistemic versions of the underdetermination
argument posed by the anti-realist.
|
|
Carl Craver (1998)
Philosophy of Neuroscience; Metaphysics
Currently at Washington University, St. Louis
|
|
Dissertation:
Neural mechanisms: On the structure, function, and development of theories in neurobiology
Reference to mechanisms is virtually ubiquitous in science and its philosophy. Yet the concept of a mechanism remains largely unanalyzed; so too its possible applications in thinking about scientific explanation, experimental practice, and theory structure. This dissertation investigates these issues in the context of contemporary neurobiology. The theories of neurobiology are hierarchically organized descriptions of mechanisms that explain functions. Mechanisms are the coordinated activities of entities by virtue of which that function is performed. Since the activities composing mechanisms are often susceptible to mechanical redescription themselves, theories in neurobiology have a characteristic hierarchical structure. The activities of entities at one level are the sub-activities of those at a higher level. This hierarchy reveals a fundamental symmetry of functional and mechanical descriptions. Functions are privileged activities of entities; they are privileged because they constitute a stage in some higher-level (+1) mechanism. The privileged activities of entities, in turn, are explained by detailing the stages of activity in the lower-level (-1) mechanism. Functional and mechanical descriptions are different tools for situating activities, properties, and entities into a hierarchy of activities. They are not competing kinds of description. Experimental techniques for testing such descriptions reflect this symmetry. Philosophical discussions of inter-level explanatory relationships have traditionally been framed by reference to inter-theoretic reduction models. The representational strictures of first-order predicate calculus and the epistemological strictures of logical empiricism combine in this reduction model to focus attention upon issues of identity and derivability; these are entirely peripheral to the explanatory aims of mechanical (-1) explanation. Mechanical explanation is causal. Derivational models of explanation do not adequately reflect the importance of activities in rendering phenomena intelligible. Activities are kinds of change. 'Bonding,' 'diffusing,' 'transcribing,' 'opening,' and 'attracting' all describe different kinds of transformation. Salmon's modified process theory (1998) is helpful in understanding the role of entities and properties in causal interactions, but it ultimately makes no room for kinds of change in the explanatory cupboard. We make change intelligible by identifying and characterizing its different kinds and relating these to activities that are taken to be fundamental for a science at a time.
|
|
Heather Douglas (1998)
Philosophy of Science, Environmental Philosophy, Science and Public Policy
Currently at University of Tennessee
|
|
Dissertation:
The use of science in policy-making: A study of values in dioxin science
The risk regulation process has been traditionally conceived
as having two components: a consultation of the experts concerning the
magnitude of risk (risk assessment) and a negotiated decision on whether
and how to reduce that risk (risk management). The first component is
generally thought to be free of the contentious value judgments that often
characterize the second component. In examining the recent controversy
over dioxin regulation, I argue that the first component is not value-free.
I review three areas of science important to dioxin regulation: epidemiological
studies, laboratory animal studies, and biochemical studies. I show how
problems of interpretation arise for each area of science that prevent
a clear-cut answer to the question: what dose of dioxins is safe for humans?
Because of significant uncertainties in how to interpret these studies,
there is significant risk that one will err in the interpretation. In
order to judge what risk of error to accept, one needs to consider and
weigh the consequences of one's judgments, whether epistemic or non-epistemic.
Weighing non-epistemic consequences requires the use of non-epistemic
values. Thus, non-epistemic values, or the kind that are important in
risk management, have an important and legitimate role to play in the
judgments required to perform and interpret the dioxin studies. The risk
assessment component of the risk regulation process (or any similar consultation
of the scientific experts) cannot be claimed to be value-free and the
process must be altered to accommodate a value-laden science.
|
|
Mark Holowchak (1998)
Ancient Philosophy
|
|
Dissertation:
The problem of differentiation and the science of dreams in Graeco-Roman antiquity
Dreams played a vital role in Graeco-Roman antiquity at all levels of society. Interpreters of prophetic dreams thrived at marketplaces and at religious festivals. Physicians used dreams to facilitate diagnosis. Philosophers talked of dreams revealing one's moral character and emotional dispositions. Many who studied dreams developed rich and elaborate accounts of the various sorts of dreams and their formation. All of this bespeaks a science of dreams in antiquity. Did these ancients, by a thorough examination of the content of dreams and their attendant circumstances, develop criteria for distinguishing the kinds or functions of dreams and, if so, were these criteria empirically reliable? I attempt to answer these questions chiefly through an evaluation of ancient Graeco-Roman 'oneirology' (the science of dreams) in the works of eight different Graeco-Roman oneirologists, especially philosophers and natural scientists, from Homer to Synesius. First, I argue that Homer's famous reference to two gates of dreams led subsequent thinkers to believe in prophetic and nonprophetic dreams. Additionally, the two gates engendered a practical approach to dreams that had a lasting impact on Graeco-Roman antiquity, especially through interpreters of prophetic dreams. Yet, as interpreters of dreams prospered, critics challenged the validity of their art. Ultimately, I argue that the interpreters' responses to their critics were unavailing. Moreover, the emergence of the belief in an agentive soul around the fifth century B.C. paved the way for psychophysiological accounts of dreams. Philosophers and physicians thereafter began to explore nonprophetic meanings of dreams--like moral, psychological, or somatic meanings. Some philosophers rejected the notion of prophecy through dreams altogether, while many essayed to ground prophetic dreams by giving them psychophysiological explanations like other dreams. In general, those oneirologists who tried to give all dreams a psychophysiological explanation bypassed the problem of differentiating dreams by positing, strictly speaking, only one kind of dream--though committing themselves to a plurality of functions for them. In summary, I argue that ancient Graeco-Roman oneirology--as a thorough admixture of the practical, Homeric approach to dreams and the psychogenetic approach--was an inseparable blend of literary fancy and respectable science.
|
|
David Sandborg (1998)
Philosophy of Mathematics, Explanation
|
|
Dissertation:
Explanation in mathematical practice
Philosophers have paid little attention to mathematical explanations
(Mark Steiner and Philip Kitcher are notable exceptions). I present a
variety of examples of mathematical explanation and examine two cases
in detail. I argue that mathematical explanations have important implications
for the philosophy of mathematics and of science. The first case study
compares many proofs of Pick's theorem, a simple geometrical result. Though
a simple proof suffices to establish the result, some of the proofs explain
the result better than others. The second case study comes from George
Polya's Mathematics and Plausible Reasoning. He gives a proof that, while
entirely satisfactory in establishing its conclusion, is insufficiently
explanatory. To provide a better explanation, he supplements the proof
with additional exposition. These case studies illustrate at least two
distinct explanatory virtues, and suggest there may be more. First, an
explanatory improvement occurs when a sense of 'arbitrariness' is reduced
in the proofs. Proofs more explanatory in this way place greater restrictions
on the steps that can be used to reach the conclusion. Second, explanatoriness
is judged by directness of representation. More explanatory proofs allow
one to ascribe geometric meaning to the terms of Pick's formula as they
arise. I trace the lack of attention to mathematical explanations to an
implicit assumption, justificationism, that only justificational aspects
of mathematical reasoning are epistemically important. I propose an anti-justificationist
epistemic position, the epistemic virtues view, which holds that justificational
virtues, while important, are not the only ones of philosophical interest
in mathematics. Indeed, explanatory benefits are rarely justificational.
I show how the epistemic virtues view and the recognition of mathematical
explanation can shed new light on philosophical debates. Mathematical
explanations have consequences for philosophy of science as well. I show
that mathematical explanations provide serious challenges to any theory,
such as Bas van Fraassen's, that considers explanations to be fundamentally
answers to why-questions. I urge a closer interaction between philosophy
of mathematics and philosophy of science; both will be needed for a fuller
understanding of mathematical explanation.
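For reference, Pick's theorem, whose proofs the first case study compares, states that a simple polygon whose vertices all lie on integer lattice points has area
\[ A = I + \frac{B}{2} - 1, \]
where I is the number of lattice points strictly inside the polygon and B the number of lattice points on its boundary. A unit lattice square, for instance, has I = 0 and B = 4, giving A = 0 + 2 - 1 = 1; the explanatory question raised above is why the formula holds, not merely whether it does.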
|
|
Marta Spranzi-Zuber (1998)
Ancient and Early Modern Philosophy
|
|
Dissertation:
The tradition of Aristotle's 'Topics' and Galileo's 'Dialogue Concerning the Two Chief World Systems': Dialectic, dialogue, and the demonstration of the Earth's motion
In this work I show that Galileo Galilei provided a 'dialectical demonstration' of the Earth's motion in the Dialogue concerning the two chief world systems, in the sense outlined in Aristotle's Topics. In order to understand what this demonstration consists of, I reconstructed the tradition of dialectic from Aristotle to the Renaissance, analyzing its developments with Cicero, Boethius, and the Middle Ages up to the sixteenth century. As far as Renaissance developments are concerned, I singled out three domains where the tradition of Aristotle's Topics was particularly important: 'pure' Aristotelianism, the creation of a new dialectic modelled on rhetoric, and finally the theories of the dialogue form. In each case I focused on a particular work which is not only interesting in its own right, but also represents well one of these developments: Agostino Nifo's commentary on Aristotle's Topics, Rudolph Agricola's De inventione dialectica, and Carlo Sigonio's De dialogo liber, respectively. As far as Galileo is concerned, I focused on the first Day of the Dialogue, where Galileo proves that the Earth is a planet, as an example of dialectical strategy embodied in a literary dialogue. Galileo's dialectical demonstration of the Earth's motion can be identified neither with rhetorical persuasion nor with scientific (empirical) demonstration. Rather, it is a strategy of inquiry and proof which is crucially dependent on an exchange between two disputants through a question and answer format. A dialectical demonstration does not create consensus on a given thesis, nor does it demonstrate it conclusively, but yields corroborated and justified knowledge, albeit provisional and contextual, namely open to revision, and dependent upon the reasoned assent of a qualified opponent.
|
|
Andrea Woody (1998)
Philosophy of Science, History of Science, and Feminist Perspectives within Philosophy.
Currently at University of Washington, Seattle
|
|
Dissertation:
Early twentieth century theories of chemical bonding: Explanation, representation,
and theory development
This dissertation examines how we may meaningfully attribute
explanatoriness to theoretical structures and in turn, how such attributions
can, and should, influence theory assessment generally. In this context,
I argue against 'inference to the best explanation' accounts of explanatory
power as well as the deflationary 'answers to why questions' proposal
of van Fraassen. Though my analysis emphasizes the role of unification in
explanation, I demonstrate ways in which Kitcher's particular account
is insufficient. The suggested alternative takes explanatory power to
be a measure of theory intelligibility; thus, its value resides in making
theories easy to probe, communicate, and ultimately modify. An underlying
goal of the discussion is to demonstrate, even for a small set of examples,
that not all components of rational assessment distill down, in one way
or another, to evaluations of a theory's empirical adequacy. Instead,
the merits of explanatory structures are argued to be forward-looking,
meaning that they hold the potential to contribute significantly to theory
development either by providing directives for theoretical modification,
perhaps indirectly by guiding empirical investigation, or by facilitating
various means of inferential error control. The dissertation's central
case study concerns the development of twentieth century quantum mechanical
theories of the chemical bond, provocative territory because of the diversity
of models and representations developed for incorporating a computationally
challenging, and potentially intractable, fundamental theory into pre-existing
chemical theory and practice. Explicit mathematical techniques as well
as various graphical, schematic, and diagrammatic models are examined
in some detail. Ultimately these theoretical structures serve as the landscape
for exploring, in a preliminary fashion, the influence of representational
format on inferential capacities generally. Although the connection between
representation and explanation is seldom emphasized, this dissertation
offers evidence of the high cost of such neglect.
|
|
Rachel Ankeny (1997)
Bioethics, History of Contemporary Life Sciences
Currently at University of Adelaide
|
|
Dissertation:
The conqueror worm: An historical and philosophical examination of the
use of the nematode Caenorhabditis elegans as a model organism
This study focuses on the concept of a ‘model organism’ in
the biomedical sciences through an historical and philosophical exploration
of research with the nematode Caenorhabditis elegans. I examine
the conceptualization of a model organism in the case of the choice and
early use of C. elegans in the 1960s, showing that a rich context
existed within which the organism was selected as the focus for a fledgling
research program in molecular biology. I argue that the choice of C.
elegans was obvious rather than highly inventive within this context,
and that the success of the ‘worm project’ depends not only
on organismal choice but on the conceptual and institutional frameworks
within which the project was pursued.
I also provide a selective review of the C. elegans group research
in the late 1960s through the early 1980s as support for several theses.
Although development and behavior were the general areas of interest for
the research project, the original goals and proposed methodology were
extremely vague. As the project evolved, which investigations proved to
be tractable using the worm depended not only on which methodologies were
fruitful but also on the interests and skills of early workers. I also
argue that much of the power of C. elegans as a model organism
can be traced historically to the investment of resources in establishing
a complete description of the organism which was relatively unprecedented,
and which methodologically represents a return to a more naturalistic
biological tradition.
In light of the historical study, I provide a philosophical analysis
of various components that have contributed to the conceptualization of
a model organism in the case of C. elegans. I synthesize several
components of traditional views in the philosophy of science on modeling
and expand the concept of a descriptive model, which thus allows C.
elegans to be viewed as a prototype of the metazoa, through exploration
of the three kinds of modeling that occur with C. elegans: modeling
of structures, of processes, and of information. I argue that C. elegans
as a model organism has been not only heuristically valuable, but also
essential to this research project. I conclude by suggesting that more
investigation of descriptive models such as those generated in the worm
project must be done to capture important aspects of the biomedical sciences
that may otherwise be neglected if explanatory models are the sole focus
in the philosophy of science.
|
|
Jonathan Simon (1997)
History of Chemistry
Currently at University of Strasbourg, France
|
|
Dissertation:
The alchemy of identity: Pharmacy and the chemical revolution, 1777-1809
This dissertation reassesses the chemical revolution that occurred
in eighteenth-century France from the pharmacists' perspective. I use
French pharmacy to place the event in historical context, understanding
this revolution as constituted by more than simply a change in theory.
The consolidation of a new scientific community of chemists, professing
an importantly changed science of chemistry, is elucidated by examining
the changing relationship between the communities of pharmacists and chemists
across the eighteenth century. This entails an understanding of the chemical
revolution that takes into account social and institutional transformations
as well as theoretical change, and hence incorporates the reforms brought
about during and after the French Revolution. First, I examine the social
rise of philosophical chemistry as a scientific pursuit increasingly independent
of its practical applications, including pharmacy, and then relate this
to the theoretical change brought about by Lavoisier and his oxygenic
system of chemistry. Then, I consider the institutional reforms that placed
Lavoisier's chemistry in French higher education. During the seventeenth
century, chemistry was intimately entwined with pharmacy, and chemical
manipulations were primarily intended to enhance the medicinal properties
of a substance. An independent philosophical chemistry gained ground during
the eighteenth century, and this development culminated in the work of
Lavoisier who cast pharmacy out of his chemistry altogether. Fourcroy,
one of Lavoisier's disciples, brought the new chemistry to the pharmacists
in both his textbooks and his legislation. Under Napoleon, Fourcroy instituted
a new system of education for pharmacists that placed a premium on formal
scientific education. Fourcroy's successors, Vauquelin and Bouillon-Lagrange,
taught the new chemistry to the elite pharmacists in the School of Pharmacy
in Paris. These pharmacists also developed new analytical techniques that
combined the aims of the new chemistry with traditional pharmaceutical
extractive practices. Thus emerged the scientific pharmacist (for example, Pelletier
and Caventou), who, although a respected member of the community
of pharmacists, helped to define the new chemistry precisely by not being
a true chemist.
|
|
Aristidis Arageorgis (1996)
Philosophy of Quantum Field Theory
Currently at Athens College, Greece
|
|
Dissertation:
Fields, Particles, and Curvature: Foundation and Philosophical Aspects of Quantum Field Theory in Curved Spacetime
The physical, mathematical, and philosophical foundations of the quantum theory of free Bose fields in fixed general relativistic spacetimes are examined. It is argued that the theory is logically and mathematically consistent, whereas semiclassical prescriptions for incorporating the back-reaction of the quantum field on the geometry lead to inconsistencies. Still, the relations and heuristic value of the semiclassical approach to canonical and covariant schemes of quantum gravity-plus-matter are assessed. Both conventional and rigorous formulations of the theory and of its principal predictions, cosmological particle creation and horizon radiation, are expounded and compared. Special attention is devoted to spacetime properties needed for the existence or uniqueness of the relevant theoretical elements (algebra of observables, Hilbert space representation(s), renormalization of the stress tensor). The emergence of unitarily inequivalent representations in a single dynamical context is used as motivation for the introduction of the abstract C*-algebraic axiomatic formalism. The operationalist and conventionalist claims of the original abstract algebraic program are criticized in favor of its tempered outgrowth, local quantum physics. The interpretation of the theory as a wave mechanics of classical field configurations, deriving from the Schrödinger representations of the abstract algebra, is discussed and is found superior, at least on the level of analogy, to particle or harmonic oscillator interpretations. Further, it is argued that the various detector results and the Fulling nonuniqueness problem do not undermine the particle concept in the ways commonly claimed. In particular, arguments are offered against the attribution of particle status to the Rindler quanta, against the physical realizability of the Rindler vacuum, and against the more general notion of observer-dependence as to the definition of 'particle' or 'vacuum'. However, the question of the ontological status of particles is raised in terms of the consistency of quantum field theory with non-reductive realism about particles, the latter being conceived as entities exhibiting attributes of discreteness and localizability. Two arguments against non-reductive realism about particles, one from axiomatic algebraic local quantum theory in Minkowski spacetime and one from quantum field theory in curved spacetime, are developed.
|
|
Keith Parsons (1996)
Paleontology, Realism-Constructivism
Currently at University of Houston, Clear Lake
|
|
Dissertation:
Wrongheaded science? Rationality, constructivism, and dinosaurs
Constructivism is the claim that the 'facts' of science are
'constructs' created by scientific communities in accordance with the
linguistic and social practices of that community. In other words, constructivists
argue that scientific truth is nothing more than what scientific communities
agree upon. Further, they hold that such agreement is reached through
a process of negotiation in which 'nonscientific' factors, e.g. appeals
to vested social interests, intimidation, etc., play a more important
role than traditionally 'rational' or 'scientific' considerations. This
dissertation examines and evaluates the arguments of three major constructivists:
Bruno Latour, Steve Woolgar, and Harry Collins. The first three chapters
are extended case studies of episodes in the history of dinosaur paleontology.
The first episodes examined are two controversies that arose over the
early reconstructions of sauropods. The more important dispute involved
the decision by the Carnegie Museum of Natural History in Pittsburgh,
Pennsylvania to mount a head on their Apatosaurus specimen which, after
forty-five years, it came to regard as the wrong head. The second case
study involves the controversy over Robert Bakker's dinosaur endothermy
hypothesis. Finally, I examine David Raup's role in the debate over the
Cretaceous/Tertiary extinctions. In particular, I evaluate certain Kuhnian
themes about theory choice by examining Raup's 'conversion' to a new hypothesis.
In the last three chapters I critically examine constructivist claims
in the light of the case studies. The thesis of Latour and Woolgar's Laboratory
Life is clarified; I argue that each author has a somewhat different interpretation
of that thesis. Both interpretations are criticized. The constructivist
arguments of Harry Collins' Changing Order are also examined and rejected.
I conclude that a constructivist view of science is not preferable to
a more traditionally rationalist account. A concluding meditation reflects
on the role of the history of science in motivating constructivist positions.
|
|
Ofer Gal (1996)
Early Modern History and Philosophy of Science
Currently at the University of Sydney, Australia
|
|
Dissertation:
Producing knowledge: Robert Hooke
This work is an argument for the notion of knowledge production. It is
an attempt at an epistemological and historiographic position which treats
all facets and modes of knowledge as products of human practices, a position
developed and demonstrated through a reconstruction of two defining episodes
in the scientific career of Robert Hooke (1635-1703): the composition
of his Programme for explaining planetary orbits as inertial motion bent
by centripetal force, and his development of the spring law in relation
to his invention of the spring watch. The revival of interest in the history
of experimental and technological knowledge has accorded Hooke much more
attention than before. However, dependent on the conception of knowledge
as a representation of reality, this scholarship is bound to the categories
of influence and competition, and concentrates mainly on Hooke's numerous
passionate exchanges with Isaac Newton and Christiaan Huygens. I favourably
explore the neo-pragmatist criticism of representation epistemology in
the writing of Richard Rorty and Ian Hacking. This criticism exposes the
conventional portrayal of Hooke as 'a mechanic of genius, rather than
a scientist' (Hall) as a reification of the social hierarchy between Hooke's
Royal Society employers and his artisan-experimenter employees. However,
Rorty and Hacking's efforts to do away with the image of the human knower
as an enclosed realm of 'ideas' have not been completed. Undertaking this
unfinished philosophical task, my main strategy is to erase the false
gap between knowledge which is clearly produced--practical, technological
and experimental, 'know how', and knowledge which we still think of as
representation--theoretical 'knowing that'. I present Hooke, Newton and
Huygens as craftsmen, who, employing various resources, labor to manufacture
material and theoretical artifacts. Eschewing the category of independent
facts awaiting discovery, I attempt to compare practices and techniques
rather than to adjudicate priority claims, replacing ideas which 'develop',
'inspire', and 'influence', with tools and skills which are borrowed,
appropriated and modified for new uses. This approach enables tracing
Hooke's creation of his Programme from his microscopy, and reconstructing
his use of springs to structure a theory of matter. With his unique combination
of technical and speculative talents Hooke comes to personify the relations
between the theoretical-linguistic and the experimental-technological
in their full complexity.
|
|
David Rudge (1996)
The Role of History and Philosophy of Science for the Teaching and Learning of Science
Currently at Western Michigan University
|
|
Dissertation:
A philosophical analysis of the role of selection experiments in evolutionary
biology
My dissertation philosophically analyzes experiments in evolutionary
biology, an area of science where experimental approaches have tended
to supplement, rather than supersede, more traditional approaches, such
as field observations. I conduct the analysis on the basis of three case
studies of famous episodes in the history of selection experiments: H.
B. D. Kettlewell's investigations of industrial melanism in the Peppered
Moth, Biston betularia; two of Th. Dobzhansky's studies of adaptive radiation
in the fruit fly, Drosophila pseudoobscura; and M. Wade's studies of group
selection in the flour beetle, Tribolium castaneum. The case studies analyze
the arguments and evidence these investigators used to identify the respective
roles of experiments and other forms of inquiry in their investigations.
I discuss three philosophical issues. First, the analysis considers whether
these selection experiments fit models of experimentation developed in
the context of micro- and high-energy physics by Allan Franklin (1986,
1990) and Peter Galison (1987). My analysis documents that the methods
used in the case studies can be accommodated on both Franklin's and Galison's
views. I conclude the case studies do not support claims regarding the
relative autonomy of biology. Second, the analysis documents a number
of important roles for life history data acquired by strictly observational
means in the process of experimentation, from identification of research
problems and development of experimental designs to interpretation of
results. Divorced from this context experiments in biology make no sense.
Thus, in principle, experimental approaches cannot replace more traditional
methods. Third, the analysis examines a superficial tension between the
use of experiments, which I characterize by the presence of artificial
intervention, and the stated goal of most investigations in evolutionary
biology, that of understanding how systems behave in the absence of intervention.
Experiments involve trade-offs between the control one has over the circumstances
of the study and how informative the study is with regard to questions
of interest to biologists regarding specific, actual systems in nature.
Experimental simulations of natural phenomena in other historical sciences
(e.g. meteorology) involve similar trade-offs, but there are reasons for
believing this tension is more prominent in biology.
|
|
Michel Janssen (1995)
Philosophy of Physics, History of Relativity Theory
Currently at the University of Minnesota
|
|
Dissertation:
A comparison between Lorentz’s ether theory and special relativity
in the light of the experiments of Trouton and Noble
In Part One of this dissertation, I analyze various accounts
of two etherdrift experiments, the Trouton-Noble experiment and an earlier
experiment by Trouton. Both aimed at detecting etherdrift with the help
of a condenser in a torsion balance. I argue that the difficulties ether-theorists
Lorentz and Larmor had in accounting for the negative results of these
experiments stem from the fact that they did not (properly) take into
account that, if we charge a moving condenser, we not only change its
energy, but also its momentum and its mass. I establish two additional
results. (1) The Trouton experiment can be seen as a physical realization
of a thought experiment used by Einstein to argue for the inertia of energy.
(2) Closely following Rohrlich, I develop an alternative to Laue’s
canonical relativistic account of the Trouton-Noble experiment to show
that the turning couple Trouton and Noble were looking for is a purely
kinematical effect in special relativity. I call this effect the Laue
effect.
In Part Two, I use these results to illustrate some general claims about
the post-1905 version of Lorentz’s ether theory. I use (1) to illustrate
that Lorentz needs to assume more than the contraction of rods and the
retardation of clocks to make his ether theory empirically equivalent
to special relativity. I use (2) to illustrate that what makes the addition
of such assumptions unsatisfactory is not that it would make the theory
ad hoc, in the sense that it would compromise its testability, but that
it makes Lorentz invariance a symmetry of the dynamics in a classical
Newtonian space-time, whereas, in fact, it is a symmetry of the relativistic
Minkowski space-time. To provide the necessary context for my claims,
I give a detailed account of the conceptual development of Lorentz’s
theory from 1895 to 1916. In particular, I analyze the relation between
the so-called theorem of corresponding states and what I call the generalized
contraction hypothesis. I show that the various versions of Lorentz’s
theory have been widely misunderstood in the literature.
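For reference, the standard special-relativistic formulas behind the 'contraction of rods' and 'retardation of clocks' mentioned above involve the Lorentz factor
\[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \]
so that a rod of rest length L_0 has length L = L_0/\gamma in a frame in which it moves with speed v, and a clock with proper period \Delta t_0 runs with period \Delta t = \gamma\,\Delta t_0 in that frame. What Lorentz must assume beyond these two effects to secure empirical equivalence with special relativity is the subject of Part Two.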
|
|
Madeline Muntersbjorn (1996)
History and Philosophy of Mathematics, Calculus in the Seventeenth Century
Currently at University of Toledo
|
|
Dissertation: Algebraic Reasoning and Representation in Seventeenth-Century Mathematics: Fermat and the Treatise on Quadrature c. 1657
Contemporary philosophers of mathematics commonly assume that mathematical reasoning is representation neutral, or that changes from one notational system to another do not reflect corresponding changes in mathematical reasoning. Historians of mathematics commonly hypothesize that the incorporation of algebraic representations into geometrical pursuits contributed to the problem-solving generality of seventeenth-century mathematical techniques and to the invention of the infinitesimal calculus. In order to critically evaluate the relative merits of these positions, the dissertation analyzes representational techniques employed by Pierre de Fermat (1601-1665) in the development of seventeenth-century quadrature methods. The detailed case study of Fermat's Treatise on Quadrature c. 1657 illustrates the manner in which his representational strategy contributes to the generality of his quadrature methods. The dissertation concludes that, although seventeenth-century mathematicians' use of algebraic representations cannot simpliciter explain the generality of mathematical techniques developed during that time, Fermat's use of a variety of representational means--figures, discursive text, equations, and so on--can explain the generality of his methods. Thus, the dissertation lays the foundation for a larger argument against the common philosophical assumption of representation neutrality and for the thesis that developing a good representational strategy is a philosophically significant feature of mathematical reasoning.
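For illustration, a standard modern reconstruction of the kind of quadrature Fermat's strategy delivers for the 'higher parabolas' y = x^n (n a positive integer); the notation here is modern, not Fermat's own. Partition [0, a] at the geometrically decreasing points a, ar, ar^2, \ldots with 0 < r < 1 and sum the circumscribed rectangles:
\[ \sum_{k=0}^{\infty} (a r^{k})^{n} \cdot a r^{k}(1-r) \;=\; a^{n+1}\,\frac{1-r}{1-r^{\,n+1}} \;\longrightarrow\; \frac{a^{n+1}}{n+1} \quad \text{as } r \to 1. \]
The same algebraic manipulation works for every exponent at once, which is one concrete sense in which a representational strategy can underwrite the generality of a method.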
|
|
|