Abstracts
Abrantes, Paulo
University of Brasília, Brazil
Culture and transitions in individuality
Using Godfrey-Smith's categorization, a "major" transition in individuality is fully accomplished when a new "paradigmatic" Darwinian population emerges. In cases where one population is nested inside another, a process of "de-darwinization" of the lower-level population is characterized by a suppression of variation as well as by changes in other parameters. Accounts such as Richerson & Boyd's highlight the evolution of a new modality of inheritance and presuppose that selection at the level of human cultural groups was a key factor in bringing about large-scale cooperation. Can this process be described as a transition in individuality?
Arsenijević, Miloš
University of Belgrade, Serbia
Gunky Continua and Continuous and Discontinuous Changes
As the author has shown elsewhere, the Aristotelian stretch-and-region-based and the Cantorian point-based theories of the continuum are only trivially different in a generalized sense. So no question concerning gunky continua can be raised in relation to space and time, but only in relation to the physical world, which contains discontinuous changes. However, contrary to Hawthorne's and Arntzenius' No-Zero-Assumption, points, lines, surfaces, and instants really exist as supervening limits of the physical continua. There can be discontinuous changes, but no continuity of them. In particular, a modal analysis of instantaneous velocities will explain a kind of reality for them not envisaged by Arntzenius.
Barrotta, Pierluigi
University of Pisa, Italy
Facts, values and risk assessment
In this essay, I shall discuss an issue that remains open within the philosophy of risk, though it was raised years ago. In the past, it was believed that risk could be assessed in an objective manner, and therefore that experts had only the task of disseminating the knowledge they had achieved. For cultural theory, on the contrary, risk is a "social construct", since facts are value-laden. In both cases, it is assumed that if research is not value-free or neutral, then it cannot be objective. I shall argue that there is no necessary connection between objectivity and neutrality. Values, both moral and epistemic, influence the formation of "conceptual frameworks", which define the relevance of phenomena, since they are necessary to connect empirical evidence with the hypotheses under test. Conflicting conceptual frameworks do not build different realities. Rather, they focus on different aspects of the very same reality, thus allowing scientific and technological progress. I will also introduce a case study of great historical importance: the Vajont dam disaster, which occurred in northern Italy in the 1960s.
Bradie, Michael
Bowling Green State University, USA
The Moral Life of Animals
Discussions of the moral status of animals typically address the key questions from an anthropocentric point of view. An alternative approach adopts a non-anthropocentric perspective. In this paper, I explore the theoretical and experimental results which make this approach plausible and address two key questions. [1] To what extent is it proper to speak of the moral behavior of non-human animals? [2] To the extent that it is proper, what are the implications for our understanding of the nature and function of human morality and of our treatment of non-human animals?
Camardi, Giovanni
University of Catania, Italy
Wesley Salmon on Information and Causation
In "Scientific Explanation and the Causal Structure of the World" (1984), Wesley Salmon uses information theory in order to handle the concept of causation. Indeed, if causation consists in transmitting a mark or a signal, information theory is called in. Salmon treats information on the basis of his Statistical-Relevance Model. Such a tactic is grounded in the equivalence between a stochastic process and a discrete source of information that Claude Shannon put forward in 1948. As a conclusion, I will compare Salmon's conception with the manipulative conception of causation and its computational treatment.
Çevík, Ahmet Dinçer
Muğla University, Turkey
G. F. B. Riemann's Notion of "Manifold" and Its Position in Constructing a New Space and Geometry
This paper aims to achieve two things. One is to give an account of how Riemann explains his notion of manifold. Riemann was influenced by Herbart's philosophy on the one hand and Gauss's mathematics on the other. For this reason, Erhard Scholz (1982) claims that the notion of manifold is a quasi-philosophical concept. It is therefore important to explicate the role philosophy plays in Riemann's notion of manifold, as this will show how and to what extent philosophy can be relevant to mathematics, and perhaps to other fields of study as well. I will then discuss possible consequences of this notion for geometry (especially issues regarding the evolution of ideas of space) from an epistemological point of view.
Dawid, Richard
University of Vienna, Austria
Evidence for the Higgs Particle and the Look Elsewhere Effect
At the time of writing, the collected and analysed data from the LHC at CERN technically constitute 'evidence' for, but not an 'observation' of, the Higgs particle. Among particle physicists, the findings have triggered an interesting debate on the most adequate characterisation of the status of the available data, revolving around the role of the so-called 'look elsewhere effect'. The talk analyses philosophical aspects of this debate and uses it as a basis for discussing the relation between theoretical and empirical arguments in high energy physics.
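As background to the debate mentioned here, the look-elsewhere effect is usually handled by a trials-factor correction relating local and global significance; a minimal sketch, where N stands for the number of effectively independent search regions (an illustrative symbol, not a value from the LHC analyses):

\[
p_{\mathrm{global}} \;=\; 1 - \left(1 - p_{\mathrm{local}}\right)^{N} \;\approx\; N\, p_{\mathrm{local}} \qquad (p_{\mathrm{local}} \ll 1),
\]

so an excess that is significant at one mass value is downgraded once one accounts for all the places a comparable excess could have appeared.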
Demir, Hilmi
Bilkent University, Turkey
Popper's Relative Probability Calculus and Zero Probability Events
In traditional Kolmogorovian probability theory, unconditional probabilities are taken as primitive, and conditional probabilities are defined in terms of them as the ratio of the joint probability of the two events in question to the probability of the condition. Popper (1959) constructed a theory of probability in which the traditional direction of the analysis of probability is reversed: conditional probabilities are taken as primitive and unconditional probabilities are defined in terms of conditional probabilities. He called his axiomatic system the calculus of relative probability. Popper's main motivation for constructing his theory was the obvious failure of the Kolmogorovian definition of conditional probability. Because the definition is a ratio, conditional probabilities are left undefined whenever the probability of the condition is 0. Zero probability events come in two different varieties: impossible events and genuine possibilities. Though it is controversial, one may legitimately claim that the former variety is not important; it is just a technical glitch that it comes out as undefined in the definition of conditional probability. The importance of the latter variety, however, is generally accepted. It suffices to state Borel's classic example: what is the probability that a randomly chosen point from the Earth's surface lies in the western hemisphere (W), given that it lies on the equator (E)? The answer must be 1/2, but according to the Kolmogorovian definition it is undefined, because the probability of E is 0.
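In symbols, the definition and the difficulty run as follows (a standard rendering, added for reference):

\[
P(A \mid B) \;=\; \frac{P(A \cap B)}{P(B)}, \qquad \text{undefined whenever } P(B) = 0 .
\]

In Borel's example the equator E has surface measure zero, so \(P(W \mid E) = P(W \cap E)/P(E)\) comes out undefined, even though symmetry dictates the answer 1/2.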
According to Popper, leaving conditional probabilities undefined whenever the probability of the condition is 0 is a serious failure. Among many others, he gave the following example to show the seriousness of the failure. If we take two universal theories, s and t, say, such that s is deducible from t, we would like to assert that the probability of s given t is 1. But if the probability of t is 0, then we are prevented from doing so in the traditional probability theory. The probability of t, being a universal theory, is 0 because, for Popper, universal laws have zero probability. Popper claimed that his theory of probability does not have this problem: conditional probabilities are defined even when the probability of the condition is 0. Popper's probability theory has seen renewed interest in recent years. Hajek (2003) and Hajek and Fitelson (ms.) present in detail the zero denominator problem that afflicts the traditional Kolmogorovian theory of probability and suggest a revolutionary change. They claim that Kolmogorovian theory should be replaced by Popper's theory because the latter does not run into the zero denominator problem. This renewed interest calls for a close scrutiny of Popper's theory of probability.
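For orientation, one common way of presenting Popper's reversal (a sketch, not Popper's exact axiom list): the binary function \(p(a,b)\) is primitive and defined for every pair of arguments, absolute probability is recovered as

\[
p(a) \;:=\; p(a, \top),
\]

and the multiplication law takes the form \(p(a \wedge b, c) = p(a, b \wedge c)\,p(b, c)\), so that \(p(s,t) = 1\) can be asserted even when \(p(t) = 0\).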
In this paper, I show that, contrary to what Popper claimed, his theory of probability does not completely avoid the zero denominator problem. It runs into quite a similar problem. In Popper's axiomatic system, conditional probabilities are accepted as defined by fiat even when the probability of the condition in question is 0. Such conditional probabilities are defined, but it turns out to be impossible to assign a specific value to them: assigning any value between 0 and 1 to a particular such conditional probability is consistent with Popper's axioms. In addition, Popper's axiomatic system runs into a further problem. As Popper clearly stated, in a plausible theory of probability the probability of an event A given that same event must be 1, regardless of the value of the probability of A. Popper claimed to have proven this result from his axioms, but it turns out that assigning 0 instead of 1 to the probability of A given A is also consistent with his axioms. In short, I claim that while trying to avoid the zero denominator problem, Popper's theory of probability runs into similar problems. Thus, replacing the traditional Kolmogorovian probability theory with Popper's relative probability calculus, where the direction of the traditional analysis of probability is reversed, does not solve the problems that stem from conditioning on zero probability events.
Elgin, Mehmet
Muğla University, Turkey
What is Science? Popper and Evolutionary Theory
Karl R. Popper notoriously claimed that "I have come to the conclusion that Darwinism is not a testable scientific theory, but a metaphysical research programme - a possible framework for testable scientific theories" (Unended Quest, p. 195). The reason it is not a testable scientific theory is this: "Thus it could be described as 'almost tautological'; or it could be described as applied logic ..." (Unended Quest, p. 195). Later he changed his mind. Was the change due to the pressure he felt from the scientific and philosophical communities? Was it due to the fact that he came to his senses? Or was it because he was invited to deliver a lecture on Darwin and so wanted to be nice to the Darwin family? The answer to these questions is not known, so I leave them aside. His official account of the change is as follows: "I have changed my mind about the testability and logical status of the theory of natural selection ..." The reason he gives for this change is: "The theory of natural selection may be so formulated that it is far from tautological. In this case it is not only testable, but it turns out to be not strictly universally true" (Popper 1978, "Natural Selection and the Emergence of Mind", Dialectica, pp. 345-346).
Popper's argument in the first case is that since the main principle of evolutionary theory is a tautological proposition, it is not falsifiable and therefore not testable. When he changed his mind about the tautological status of natural selection, he held that it is a falsifiable statement; strictly speaking, natural selection is a claim that has been falsified. To require that the main principles of science (its laws) be testable in the sense of being falsifiable entails that such principles must be empirical propositions. Thus, for falsifiability to be accepted as a criterion of testability, we must assume that laws of nature are empirical. Therefore, I argue that in both cases the assumption that guided Popper's decision is the same: laws of nature must be empirical.
Not only Popper's claims about the status of natural selection and evolutionary theory but also his views on scientific methodology have received extensive attention from philosophers and scientists. My goal here is not to add a new dimension to these discussions. Rather, I would like to point to a general problem in Popper's approach to such issues in science. I argue that when Popper claimed that evolutionary theory is a metaphysical research program, he was relying on an a priori philosophical principle about scientific methodology. When he changed his mind, he was reformulating a scientific principle so that it would conform to the canons of his views of scientific methodology. In both cases, Popper was judging the status of empirical science on the basis of a priori philosophical intuitions concerning scientific methodology that did not take scientific practice seriously. I find this approach quite problematic, and I would like to explicate a better strategy for handling questions such as the scientific status of evolutionary theory or the status of natural selection.
Popper provides a very challenging argument for the claim that laws of nature must be empirical in his discussion of conventionalism in The Logic of Scientific Discovery. This is the most powerful argument I can find in the literature for the claim that laws must be empirical. The argument is that tautological claims are redundant in any proof but empirical claims are not. I argue that even some versions of the axiomatic view of theories can tolerate some of the constitutive principles being a priori while still making important contributions to the empirical content of the theoretical system. Most importantly, I argue that Popper's argument assumes the axiomatic view of theories. However, the axiomatic view of theories is not the only game in town. There are alternative approaches, such as the semantic view of theories or approaches that take models in scientific practice seriously, to which the issue of redundancy does not apply. I then argue that judging the status of established scientific practice on the basis of controversial a priori philosophical intuitions about scientific methodology is not a very good strategy. The danger for evolutionary theory may have passed a long time ago, but the same danger still threatens newly developing fields of science. The better strategy is to take scientific practice as given and build our theories of science on that basis.
Ereshefsky, Marc
University of Calgary, Canada
Scientific Kinds
Richard Boyd's Homeostatic Property Cluster (HPC) Theory of kinds is the preferred view of scientific kinds among philosophers of science. However, HPC Theory places overly strict requirements on what counts as a kind. Those requirements are too strict because some significant kinds identified by science fail to be kinds according to HPC Theory. This presentation begins by illustrating the limits of HPC Theory. It then offers an alternative account of scientific kinds that better captures the classificatory practices of science.
Galavotti, Maria Carla
University of Bologna, Italy
For a bottom-up approach to the philosophy of science
The paper heralds a bottom-up approach to the philosophy of science, which is probabilistic in character and has a strong pluralistic and pragmatist flavor. Far from confining philosophy of science to the context of justification, such an approach takes the context of discovery as an essential component of the study of scientific knowledge, and regards important notions such as explanation, causation, and prediction as context-dependent. Therefore, the bottom-up approach to the philosophy of science assigns a crucial role to the notion of context.
Gurova, Lilia
New Bulgarian University, Bulgaria
Principles vs. mechanisms across the sciences: Maybe the differences are not as big as they are claimed to be
A kind of consensus has been formed recently that mechanistic explanations best characterize the explanatory practice in life sciences, including cognitive science (see e.g. Bechtel 2011). The proponents of this view insist that unlike physics, life sciences can hardly benefit from any general principles. In my talk I'll provide evidence to the contrary: some genuine discoveries in cognitive science such as the basic level effects do not seem to allow mechanistic explanations. The proposed explanations (Rosch 1978; Atran & Medin 2008) refer to principles and the considerations for these principles strikingly resemble an episode from the history of physics: the early history of the principle of least action.
Healey, Richard
University of Arizona, USA
Quantum theory without beables: a desert pragmatist view
J.S. Bell took the familiar language of everyday affairs, including laboratory procedures, to assign objective properties (beables) to objects because they are there. Bell proved that predictions of a locally causal theory conflict with experiment. Must any acceptable account of these experimental results involve non-local causation? No: quantum theory's lack of beables helps explain them locally, in harmony with relativity. Quantum states and probabilities don't describe "what is there": acceptance of quantum theory even undermines Bell's claim that "The beables must include the settings of switches and knobs on experimental equipment, the currents in coils, and the readings of instruments".
Hofer-Szabó, Gábor
King Sigismund College, Hungary
Bell inequality and common causal explanation in algebraic quantum field theory
In the talk it will be argued that the violation of the Bell inequality in algebraic quantum field theory does not exclude a common causal explanation of a set of quantum correlations if we abandon commutativity between the common cause and the correlating events. Moreover, it will turn out that the common cause is local, i.e. localizable in the common past of the correlating events. It will be argued furthermore that giving up commutativity helps to maintain the validity of Reichenbach's Common Cause Principle in algebraic quantum field theory.
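As background, the classical Reichenbachian conditions on a common cause C of a correlation between events A and B, which the talk generalizes by dropping commutativity, are the following (a standard formulation, added for reference):

\[
\begin{aligned}
P(A \cap B \mid C) &= P(A \mid C)\,P(B \mid C),\\
P(A \cap B \mid C^{\perp}) &= P(A \mid C^{\perp})\,P(B \mid C^{\perp}),\\
P(A \mid C) &> P(A \mid C^{\perp}), \qquad P(B \mid C) > P(B \mid C^{\perp}).
\end{aligned}
\]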
Irzik, Gürol
Sabanci University, Turkey
Kuhn and Logical Empiricism: Gaps, Absences and Tactics of SSR
Even though The Structure of Scientific Revolutions was given the lion's share of the credit for the demise of logical empiricism, Kuhn's critique of logical empiricism in that book is surprisingly limited in scope and suffers from a number of absences that weaken his case against it. To compensate for them, Kuhn employs some tactics that have so far escaped attention. I discuss these absences and tactics. I conclude by making some suggestions as to how we should frame a historical understanding of logical empiricism in the USA in the 1950s, against which Kuhn was reacting as he understood it.
Keeley, Brian L.
Pitzer College, USA
What kinds of kind are the senses?
Common sense speaks of five human senses, a claim challenged by science. One conceptual difficulty in thinking about the number and division of the senses is that it's not clear whether the different senses constitute natural kinds and, if not, what kind of kind they are. Should we favor antirealism w/r/t the senses, akin to the arguments of some concerning the nature of species or race? I argue that this first problem is compounded by another: that we ought to be pluralists w/r/t the senses. What is meant by the term "sense" varies from context to context, even between scientific contexts.
Kochiras, Hylarie
New Europe College Inst. for Advanced Study, Romania
Newton on Matter and Space
This paper aims to explicate the concepts of matter and space that Newton develops in De gravitatione. As I interpret Newton's account of created substances, bodies are constructed from qualities alone, as configured by God. Although regions of space and then "determined quantities of extension" appear to replace the Aristotelian substrate by functioning as property-bearers, they actually serve only as logical subjects. An implication of the interpretation I develop is that only space is intrinsically extended; material bodies are spatially extended only in a derivative sense, via the presence of their constitutive qualities or powers in space.
Kuhlmann, Meinard
University of Bremen, Germany
Does Quantum Field Theory Support Ontic Structural Realism?
Ontic structural realism (OSR) is a hotly debated position these days. Most of the discussion centers around the alleged loss of individuality for "particles" in quantum mechanical many-particle systems, around quantum entanglement, or around the hole argument in general relativity theory. It is often claimed, or more or less tacitly assumed, that the case for OSR is even stronger regarding QFT, with its spectacular successes based on very abstract symmetry considerations. However, since QFT is far more intricate than QM, the connection between QFT and OSR is mostly left unspecified. I survey, explicate and evaluate potential arguments in favour of OSR that are based on QFT. I will show that hardly any of these arguments are convincing in the end.
Leuridan, Bert
Ghent University, Belgium
Mechanisms in Analytic Sociology
The concept of 'social mechanism' plays an important role in the Analytic Sociology movement. It is currently not clear, however, what social mechanisms are or how they are investigated. I will argue that a three-tiered clarification is needed. First, one should decide which types of mechanisms (hierarchical or not, etc.) are eligible for social science. Second, one should decide which types of component parts and operations may be suitable (only individuals, or also meso-level groups, etc.?). Finally, in empirical studies, one should decide which specific parts and operations are under study. Philosophers can help at stages one and two.
Machamer, Peter
University of Pittsburgh, USA
Mechanisms and Mechanical Explanations
O bliss, bliss and heaven! Oh it was gorgeousness and gorgeosity made flesh. The trombones crunched redgold under my bed, and behind my gulliver the trumpets three-wise, silver-flamed and there by the door the timps rolling through my guts and out again, crunched like candy thunder. It was like a bird of rarest spun heaven metal or like silvery wine flowing in a space ship, gravity all nonsense now. As I slooshied, I knew such lovely pictures!
Manders, Kenneth
University of Pittsburgh, USA
Expressive Means in Mathematical Understanding
Restricted or otherwise special expressive means, and transitions between them, often contribute to understanding through mathematics. We use some examples to explain how mathematics especially shapes contents to its needs.
Nesher, Dan
University of Haifa, Israel
Gödel on truth and proof: Epistemological Proof of Gödel's Conception of the Realistic Nature of Mathematical Theories and that Their Incompleteness Cannot Be Proved Formally
In this article, I attempt a pragmaticist epistemological proof of Gödel's conception of the realistic nature of mathematics, but only insofar as mathematical theories represent empirical facts of their external reality. Gödel generated a realistic revolution in the foundations of mathematics by attempting to prove formally the distinction between complete formal systems and incomplete mathematical theories. Employing pragmaticist epistemology, I will show that formal systems are only radical abstractions of human cognitive operations and cannot explain how we represent external reality. Therefore, if Gödel's incompleteness of mathematical theories holds, then we cannot prove the truth of the basic mathematical facts by any formal proofs. Hence, Gödel's formal proof of the incompleteness of mathematics cannot hold, since his unprovable theorem cannot be formally proved true. However, Gödel separates the truth of mathematical facts from mathematical proof by assuming that ideal mathematical facts are eternally true; thus the unprovable theorem can also be true upon these ideal true facts. Pragmatistically, realistic theories represent external reality not by formal logic but by the epistemic logic of the complete proof of our perceptual propositions and realistic theories. Accordingly, it can be explained how all our knowledge starts from our perceptual confrontation with reality, without assuming any a priori or "given" knowledge. Hence, mathematics is also an empirical science, but its represented reality is neither that of ideal objects nor that of physical objects. Instead, we perceptually quasi-prove true the basic mathematical facts, which are our operations of counting and measuring physical objects.
Nola, Robert
University of Auckland, New Zealand
Do Naturalistic Explanations of Religious Beliefs Debunk Religion?
The theory of evolution has led to a number of theories about the causes of religious belief. One is that religious belief arises as an evolutionary by-product of the operation of a "hyperactive agency detection device". Another begins with the idea that evolution has selected for belief in a "moralising, mind-policing God". Such arguments suggest that the mechanism that gives rise to religious belief is not reliably truth-tracking, so there are no good epistemic grounds for accepting the contents of such acts of believing. There is also a conflict between "folk" beliefs about the causes of such believings, which allegedly turn on the activity of some divinity, and scientific explanations, which do not. Here inference to the best explanation enables one to draw the conclusion that scientific causal hypotheses (from evolution) of such believings offer the best explanation. Given these considerations it is possible to launch a debunking argument.
Norton, John D.
University of Pittsburgh, USA
Einstein as the Greatest of the Nineteenth Century Physicists
Modern day writers often endow Einstein with a 21st century prescience about physical theory that, it just so happens, is only now vindicated by the latest results of the same writers' research. There is a second side to Einstein. His outlook and methods were clearly rooted in 19th century physics.
Nounou, Antigone
Institute for Neohellenic Research, Greece
Varieties of Properties
The contemporary debate around scientific structuralism has revealed the need to reassess the standing and role of both structure and objects in the metaphysics of physics. Although ontic structural realism recommends that metaphysics be purged of objects, its proponents have failed to specify what it means for properties to be relational and structural, and, consequently, to show how the elementary objects postulated by our best theories can be re-conceptualized in structural terms. In order to untangle various types of relational properties I draw from modern physics, and propose conditions that should be fulfilled for properties to be characterized as structural.
Perovic, Slobodan
University of Belgrade, Serbia
Why It Matters Whether and How Experimentation Is Driven by the Theory it Tests
I challenge a widespread view (recently Woodward, Schindler) that even though experiments can be motivated by the theory they test, this sort of theory-drivenness is epistemically unproblematic and irrelevant to the decisions that lead to establishing (or failing to establish) phenomena discerned from background noise and to determining whether these phenomena confirm the tested theory. In fact, the theory-drivenness of the experimental apparatus and operational procedures is a key factor in data analysis. I show how physicists treat preliminary phenomena as conditional on built-in assumptions and thus typically cross-check them against alternative operational procedures for data acquisition, monitoring, and analysis that test these same assumptions, while building in the assumptions tested by the initial programs. I conclude that whether or not the theory-drivenness of experiments is epistemically problematic depends on the context of a particular experiment and its aims.
Portides, Demetris
University of Cyprus, Cyprus
A unified conception of idealization in scientific modeling
In this paper I focus on the character of idealization, particularly regarding its use in scientific models. More specifically, I try to analyze the ways idealization enters into scientific modeling from the perspective of the reasoning process involved. I argue that the core feature of the reasoning process behind scientific modeling is the systematic omission or simplification of information present in the target system, which leads to a reduction of the information content of models. Relying on this analysis, I argue that three general ways in which idealization is performed can be distinguished: isolation, stabilization, and decomposition. These three kinds of idealization are explained, and an attempt is made to demonstrate their usefulness in making sense of a variety of characteristics exhibited by scientific models.
Pringe, Hernán
CONICET/UBA, Argentina - Universidad Diego Portales, Chile
The Coordination of Concepts and Spatio-Temporal Objects in Cassirer's Philosophy
In this talk I analyze Cassirer's account of the coordination of concepts and spatio-temporal objects. We shall see that, in contradistinction to Kantian schematism, Cassirer maintains that this coordination is not achieved by means of a third element (the schema), which albeit intellectual is nevertheless also sensible. Rather, in Cassirer's view, the coordination will take place through a specification of the concepts that should be sought "within the domain of concepts itself."
Rédei, Miklós
London School of Economics & Political Science, England
Defusing Bertrand's paradox
It is argued that in the case of an infinite number of elementary random events the classical interpretation of probability based on the Principle of Indifference should be formulated in terms of probability measure spaces in which the probability is given by the Haar measure. Labeling Irrelevance is the claim that the probability of events understood according to the classical interpretation does not depend on how the random events are named, and Labeling Irrelevance is also formulated in terms of the Haar measure. Bertrand's paradox is then interpreted as the provable mathematical fact that Labeling Irrelevance is violated in the category of Haar measure spaces. Thus Bertrand's paradox only refutes Labeling Irrelevance and does not undermine the classical interpretation of probability and the Principle of Indifference. Yet it will be argued that the classical interpretation of probability and the Principle of Indifference are not maintainable, for deeper and simpler reasons related to how one should view probability theory as a mathematical model of non-mathematical phenomena.
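To see the violation of Labeling Irrelevance concretely, here is a minimal Monte Carlo sketch (in Python, added for illustration) of the classic chord version of Bertrand's paradox: three equally natural ways of naming a random chord of the unit circle yield three different probabilities that the chord is longer than the side of the inscribed equilateral triangle.

import math
import random

def chord_by_endpoints():
    # Label a chord by two independent uniform endpoints on the circle.
    # It is longer than sqrt(3) iff the angular separation exceeds 2*pi/3.
    a = random.uniform(0.0, 2 * math.pi)
    b = random.uniform(0.0, 2 * math.pi)
    d = abs(a - b)
    return min(d, 2 * math.pi - d) > 2 * math.pi / 3   # -> probability 1/3

def chord_by_radial_midpoint():
    # Label a chord by the distance of its midpoint along a fixed radius.
    # It is longer than sqrt(3) iff that distance is below 1/2.
    return random.uniform(0.0, 1.0) < 0.5              # -> probability 1/2

def chord_by_area_midpoint():
    # Label a chord by a midpoint drawn uniformly from the disc.
    # It is longer than sqrt(3) iff the midpoint lies within radius 1/2.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            return x * x + y * y < 0.25                # -> probability 1/4

N = 100_000
for name, sample in [("endpoint labeling", chord_by_endpoints),
                     ("radial labeling", chord_by_radial_midpoint),
                     ("area labeling", chord_by_area_midpoint)]:
    freq = sum(sample() for _ in range(N)) / N
    print(f"{name}: {freq:.3f}")

The three frequencies settle near 1/3, 1/2 and 1/4: the same family of "random chords", named three different ways, receives three different probabilities.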
Reyhaní, Nebil & Ateş, Mustafa Efe
Muğla University, Turkey
The philosophical status of thought experiments in the context of externalism-internalism debate
The problem of the philosophical status of thought experiments is relevant to the debate between externalism and internalism in at least two ways. First, the externalist account in the theory of knowledge has depended, from the very beginning, to a great extent on various thought experiments. Putnam's so-called "twin earth" and "brain in a vat" thought experiments were groundbreaking for the rise of the externalist account. Thus, the plausibility of externalist arguments may depend on the reliability of thought experiments in general. We will focus on the McKinsey paradox from this point of view and try to answer the question whether we are dealing with a real paradox or whether this case merely indicates that we have exceeded the explanatory power of thought experiments, which may in fact be very limited. The second point is this: the externalist account seems to necessitate a specific attitude toward the problem of thought experiments. Since thoughts are not merely internal representations but have certain roots 'outside', operations on thoughts must in part be operations on external things. On this view thought experiments would gain a philosophical status close to that of real experiments. It would be, besides, very tempting to say that every thought experiment is in fact a special kind of simulation. On the other hand, the fundamental claim of externalism could be understood as a restriction on all simulations in general. That is, from an externalist point of view thought experiments must be considered more reliable than simulations.
Rodriguez, Victor
National University of Cordoba, Argentina
The Quantum Hall Effect in Context
The different versions of the Hall effect are presented. The general context of this nineteenth-century physical phenomenon is used as a background scenario. The individual contributions of von Klitzing, Störmer, Laughlin and Wilczek are taken as representative of the new ways of thinking about quantum effects associated with this old topic of classical physics. In the last part, some consequences of the theoretical and experimental developments of this effect are analyzed. It is argued that this line of work contributes interesting philosophical perspectives on some concepts, such as charge.
Schindler, Samuel
Aarhus University, Denmark
From novelty to coherence
Intuitively, scientific theories that make successful predictions about novel phenomena should receive more credit than theories that only explain or "accommodate" already known facts. In this talk, I will explore the rationale for this intuition. In particular, I will argue that the most elaborate account of why successful predictions should be methodologically more valuable than accommodations fails on several counts. I shall then argue that the most commonly cited justification for why successfully predicted novel phenomena should be more valuable than accommodations in fact points to a property of the explanations of our theories: their coherence. Lastly, I shall gesture towards possible implications of the proposed methodological changes for contemporary epistemological debates about science.
Schrader, David
American Philosophical Association, USA
Living Together in an Ecological Community
Developing a framework for understanding our life together in an ecological community is a challenge. It requires a recognition that an ecological community must be something more modest than a full-blown moral community, a polity less robust than the ethical republic. The chief requirements of such a polity are a genuine interest in the role of those parts of the community that lack robust means of communication with us, and a willingness to acknowledge the superior knowledge of the scientists who claim some sort of expertise concerning the workings of those parts.
Sencan, Sinan
Muğla University, Turkey
Life Sciences and Philosophical Accounts of Theories
If there was ever any agreement at all on the question of what laws are, we can say that it was on the claim that, whatever else must be true about laws, they are strictly universal and empirical. Given these conditions, some philosophers of science have argued that distinctively biological generalizations cannot satisfy this requirement (e.g. Smart 1963; Beatty 1995). Supporters of this idea maintain that biological generalizations violate the strict universality criterion because organisms are both historically contingent and too complex. On the other hand, some philosophers think that scientific laws need not be empirical; in fact, they assert that the life sciences have some a priori generalizations which can be considered laws (Sober 1997, Elgin 2003). Others claim that since there are non-strictly universal biological generalizations that can fulfill the functions attributed to laws, the universality requirement should be reconsidered (Mitchell 1997, Woodward 2000). Yet others insist that complexity and contingency are no reason to think that biology cannot have strict laws (Sober 1997; Elgin 2006), or that the life sciences have hidden generalizations which will replace strict laws in the future (Press 2009). However different all these suggestions may look, they all seem to share two common assumptions: (1) scientific laws are indispensable for scientific activity, and (2) physics-oriented scientific methodology and terminology are proper for understanding the phenomena of the life sciences. While (1) is closely related to the debate between the received view of theories and the semantic view of theories, (2) is in accord with the received view of theories. I want to pursue the following two questions in connection with those assumptions: Can the semantic view offer a more fruitful conception of biology than the received view in relation to the discussion of scientific laws? Is a physics-oriented conception of science proper for understanding the phenomena of the life sciences? Neither of these questions is original. Some philosophers of science claim that the semantic view of theories is more suitable for evolutionary theory (e.g. Beatty 1980, Thompson 1989), and the second question is directly connected with various current issues in the philosophy of biology, such as autonomy (e.g. Mayr 1988), reductionism (e.g. Nagel 1961, Sarkar 1998) and explanation (Schaffner 1969, Rosenberg 2001). I argue that the semantic view of theories is better suited to understanding the life sciences than the received view. Secondly, I argue that the issue of explanation in the life sciences takes a different form depending on whether one works with the received view of theories or with the semantic view. Then, since both questions are strictly connected with the concept of a scientific model, I focus on biological models and their place in biology. Finally, in light of the structure outlined above, I hope to conclude that evaluating biological generalizations without the aforementioned assumptions may provide a more fruitful understanding of biological phenomena and of objects like organisms and populations.
Siitonen, Arto
University of Helsinki, Finland
Reichenbach: from Kantianism to Empiricism
Reichenbach in his early years characterized himself as "a Kantian philosopher". In his dissertation (1916) he even added probability to Kant's twelve categories. Later, his standpoint was probabilistic empiricism. In his "Turkish book" (1938) he defended epistemic realism. In the USA, he published works on symbolic logic, quantum mechanics, scientific philosophy, counterfactuals, and the direction of time. Although he had abandoned Kantianism, he shared with Kant irony, and a variety of problems, e.g. knowledge claims, the concept of possible experience, the tenability of moral directives, and the difference between volition and cognition. The following questions arise: what is Reichenbach's legacy, and what will it become? How may his ideas best be pursued?
Stöckler, Manfred
University of Bremen, Germany
What are Levels of Nature?
Levels of nature are a traditional topic in the philosophy of science. In 1958 P. Oppenheim and H. Putnam gave an account of levels based on the concept of microreduction. However, their selection of levels does not represent the practice of science adequately. 'Higher level' often just refers to composed things in opposition to their parts. I differentiate between this unproblematic use of 'level' and the case in which the higher level involves new types of entities and properties. Not every accumulation of things creates a new level. I scrutinize the hypothesis that we get a new level if we employ new strategies for explaining typical phenomena in wholes. This approach would give us plausible criteria for discriminating levels and sublevels, and it would help in determining the ontological status of level-specific entities and properties.
Stoyanov, Drozdstoj
Medical University of Plovdiv, Bulgaria
The Pitt model of trans-disciplinary validity: challenges and prospects
Evidence acquired inside the mono-disciplinary matrices of neurobiology, clinical psychology and psychopathology is deeply insufficient in terms of validity, reliability and specificity, and cannot reveal the explanatory mechanisms underlying mental disorders. Moreover, no effective trans-disciplinary connections have been developed between them. From an epistemological perspective, current diagnostic tools are different but overlapping instruments exploring the same phenomenology. In line with a more scientific re-definition of mental disorders, we defend the view that this process should take place in intensive (not extensive) dialogue with neuroscience. As Kato has suggested, only neurobiological studies using modern technology could form the basis for a new classification. This is to say that the categorical approach to diagnosis should be abandoned in favor of broader diagnostic constructs (such as dimensional and prototype units) which are endorsed or "flanked" by data from neuroscience. Those broader units should be the subject of a comprehensive evaluation of the personal narrative in context.
Szabó, László E.
Eötvös University, Hungary
Is the special principle of relativity a falsifiable statement?
I believe there is a wide consensus that it is, or at least is intended to be, a falsifiable statement. If so, then it must have an unambiguous meaning by which we can, at least in principle, verify empirically whether it is satisfied or not. First, I will try to give a precise formulation of the principle, which we can read off from standard textbook applications. Then I will show a few non-trivial examples in which the statement has no unambiguous meaning; consequently, it is not verifiable whether it is true or not, in spite of the fact that all the relevant physical equations are Lorentz covariant.
Tekin, Serife
Boğaziçi University, Turkey
Theorizing Looping Effects: Lessons from Cognitive Sciences
Ian Hacking's "looping effects" tracks the interactive causal process in which the subjects of human sciences change in response to being classified, altering, in turn, the initial classifications. Early Hacking advances that mental disorders cannot be natural kinds because they are subject to looping effects, fueling significant discourse in philosophical psychopathology. Appealing to research in cognitive sciences, I address the ambiguity in Hacking's discussion of looping effects in the context of psychopathology. I argue that the ambiguity about the necessary and sufficient conditions for looping effects obscures how subjects' self-concepts and behaviour change upon receiving a psychiatric diagnosis. The source of this ambiguity is Hacking's failure to engage with the complexities of mental disorders and the self that is subject to psychopathology in theorizing looping effects.
Turner, Derek
Connecticut College, USA
The Relaxed Forces Strategy for Testing Natural State Theories: The Case of the ZFEL
This paper examines one initially promising strategy, which I call the "relaxed forces strategy," for using empirical evidence to discriminate between rival natural state theories. In their recent book, Biology's First Law, McShea and Brandon use this strategy to make an empirical argument for their zero-force evolutionary law (or ZFEL). But because it is not possible to determine what counts as a relaxed forces condition without appealing to the natural state theory one is trying to test, the relaxed forces strategy has a circularity problem. As a result, the empirical case for the ZFEL is weaker than McShea and Brandon suggest.
Votsis, Ioannis
University of Düsseldorf, Germany
Why Care about the Scientific Realism Debate?
In this talk, I try to provide motivation for why one ought to take the scientific realism debate seriously, paying particular attention to two groups: philosophers of science and scientists. Among other things, it is argued that various debates in the philosophy of science as well as in science turn out to involve, sometimes even inadvertently, substantial epistemic or metaphysical claims of the kind being debated in the scientific realism debate.
Waters, C. Kenneth
University of Minnesota, USA
Metaphysical Implications of Conceptual Practice in Genetics
What is the general nature of the natural world? Many philosophers assume that the world consists of different kinds of fundamental entities, properties, relations, processes, or structures. When science succeeds, on this view, it must be cutting nature close to its joints, revealing the fundamentals. In biology, philosophers often focus on questions about fundamental kinds of entities such as individuals, on fundamental properties such as fitness, and on fundamental processes such as group selection. Research is often framed by questions such as 'what is a biological individual?', 'what is fitness?', 'what is group selection?' In this talk, I will argue that trying to answer such questions by philosophically scrutinizing conceptual practice in the most advanced biological sciences undermines the very metaphysics upon which these questions are based.
Woleński, Jan
Jagiellonian University, Poland
Does Physics Rest on Philosophical Assumptions or Lead to Philosophical Conclusions?
It is frequently maintained that physics rests on philosophical assumptions or entails philosophical conclusions, for example, by saying that indeterminism follows from quantum mechanics. Consider the following reasoning. Quantum theory entails that the location and momentum of a particle cannot be simultaneously measured (the Heisenberg principle). Thus, since predictions of its future behavior are limited, determinism is false. However, this argument is dubious, because the conclusion of an inference cannot contain terms which do not occur in its premises, and this is just what happens in the reasoning in question. The term in question must already be defined, for example in the way proposed by Heisenberg himself. His definition, like any other, belongs to philosophy, not to physics. This shows that physics leads to philosophical conclusions via special interpretations. The same concerns the "philosophical" premises of science.
Wolters, Gereon
University of Konstanz, Germany
Ambivalence and Conflict: Catholic Church and Evolution
Somewhat traumatized by the Galileo Affair, the Church until recently kept a low profile in its (unavoidable) conflicts with science, evolutionary theory included. The talk presents a categorization of possible relationships between science and religion by distinguishing between "Galilean conflicts", which are about mutually exclusive statements about matters of fact, and "Freudian conflicts", in which an empirical science tries to explain away religion as a phenomenon in its own right. In the light of this distinction I deal with the reactions of the Church since 1859, particularly with the ambiguous position of the present Pope.
Zilhão, António
Lisbon University, Portugal
Moore's Problem
Moore's 'paradox' arises from the fact that consistent propositions of the form of (1) and (2):
(1) It is raining but I believe it is not raining
(2) It is raining but I don't believe it is raining
strike us as being contradictory. Shoemaker explained this oddity by producing a proof that belief in such sentences is either inconsistent or self-refuting. For Sorensen, many propositional attitudes have scopes smaller than the class of consistent propositions; inaccessible consistent propositions are 'blindspots', and Moore-propositions are the blindspots of belief. On either account, Moore-propositions are unbelievable. I'll argue that some Moore-propositions are actually believable.
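In a standard doxastic-logic rendering (added for reference; the distribution and introspection assumptions are the usual ones and are not stated in the abstract), the two sentences become

\[
(1)\;\; p \wedge B\neg p, \qquad (2)\;\; p \wedge \neg Bp ,
\]

and believing (2), for instance, gives \(B(p \wedge \neg Bp)\), hence \(Bp\) and \(B\neg Bp\) by distribution over conjunction, while positive introspection turns \(Bp\) into \(BBp\): the agent ends up believing both \(Bp\) and \(\neg Bp\), so the belief, though not the proposition, is defective, along the lines of Shoemaker's proof.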
Program Committee: Mehmet Elgin (co-chair), Peter Machamer (co-chair), Ali Osman Gündoğan, John D. Norton