Epistemology of Modeling & Simulation: Building Research Bridges Between the Philosophical and Modeling Communities

Abstracts

Papers

Petter Almklov, NTNU Social Research Ltd.
Thomas Østerlie, Norwegian University of Science and Technology
Reservoir models as representations and tools
 
Drawing on a pragmatic epistemological perspective, we discuss how the reservoir model is used to represent an oil reservoir in petroleum reservoir management.

We first discuss the types of information available about the reservoir, and how combinations and extrapolations of these are used to construct integrated conceptions of it. There are three main data types. Seismics: a low-resolution “ultrasound image” of the rock structures. Well logs: detailed, short-range observations along the paths of existing wells. Production data: observations of flow and pressures in and out of existing production and injection wells.

While a great amount of such data is available in reservoir management, it is still incomplete, in the sense that there are vast spaces below seismic resolution and between well observations for which there are no data. As a consequence, reservoir models carry great uncertainty. Due to the copious amounts of data, however, numerical models are necessary to manage the reservoir. The main reservoir simulation model contains the legacy of simplifications, compromises and extrapolations necessary to obtain integration for its specific purposes. This is made particularly evident when it is employed beyond its normal applications, as in recent strategic developments. The multitude of possible combinations in the data holds great potential for knowledge of the reservoir, and representations are, as such, tools for understanding it. This is currently often bypassed by a simplified notion that the reservoir model is the best representation of the reservoir.

Brett Bevers, University of California, Irvine
Everett's Quantum Theory of Measurement: Fundamental Theory as Phenomenological Model

Hugh Everett III presented his model of quantum measurement, often called the many-worlds interpretation, as a solution to an important problem in theoretical physics. He proposed that a quantum measurement be modeled as a quantum interaction that entangles microscopic and macroscopic systems—including experimenters themselves. While Everett’s model is fairly straightforward in application, it encounters interpretive difficulties that limit its appeal within the physics community. His proposal met with resistance from the start, and we know that serious questions regarding interpretation were posed. Everett’s response to this criticism was to stress the empirical adequacy of his model of measurement and its potential applications, while eschewing any other question of interpretation. In some places, he argued broadly against realism in science and proclaimed that “any physical theory is essentially only a model for the world of experience.” We show that Everett held that his theory of measurement need only provide a method for (1) setting up a model of a measurement process, and (2) associating features of this model with the standard predictions—without creating an artificial limit to the applicability of quantum theory. Our understanding of the way that Everett approached the measurement problem explains why he was silent on various details, without having to conclude that he was oblivious to central issues.

Alisa Bokulich, Boston University
Some Lessons from Geomorphology

Prediction and explanation have long been recognized as twin goals of science, and yet a full understanding of the relations—and tensions—between these two goals remains elusive. Here I examine a field known as geomorphology, which is concerned with understanding how landforms change over time. The complexity of geomorphic systems makes the use of idealized models essential, and these models typically attempt to synthesize processes occurring on multiple time and length scales. There is a growing recognition in geomorphology that the sort of models that turn out to be the best for generating predictions (detailed, bottom-up, physically-based “simulation” models) are not the same kinds of models that are best for generating explanations (highly idealized, cellular “reduced complexity models”). I examine three cases of explanatory models in geomorphology—a model-explanation of river braiding, a model-explanation of a characteristic coastline evolution, and a model-explanation of the formation of rip currents along planar beaches—and show how they fit my general philosophical account of model-explanations. Because these explanatory models were not designed to provide quantitatively accurate predictions, there arises the question of how such models should be tested and validated. I will examine how geomorphologists are using robustness analyses to test these models and justify them as being genuinely explanatory.

David M. Frank, University of Texas, Austin
Modeling Chagas Disease Risk in Texas: Idealization and Multiple Models for Use

This paper uses the example of modeling Chagas disease risk in Texas to explore some interconnections between the philosophical issues of scientific idealization and the use of science in society. The modeling consisted of constructing species distribution models for the vector Triatoma species, computing an incidence-based relative risk map based on known occurrences of the Chagas-causing parasite Trypanosoma cruzi, and combining these and other risk metrics. In the resulting paper we argued that the risk of Chagas in Texas, particularly south Texas, is significant enough that Chagas should be declared reportable. This paper offers some preliminary philosophical reflections on this modeling process. The example of modeling Chagas risk illustrates the role of "multiple-models idealization" in the epidemiology of vector-borne diseases. The case study also shows that the epistemic permissibility of idealizations in multiple models depends upon the use-context. In this case, one important use of these models was to support the normative claim that Chagas should be declared reportable in Texas. I argue that this modest goal permitted significant idealizations and modeling omissions, where these would be unacceptable in other use-contexts.

Eric Hochstein, University of Waterloo
Minds, Models, and Mechanisms
In this paper, I provide a new characterization of the role that folk psychology, or intentional language, plays in science. Specifically, I propose that folk psychology plays the role of a phenomenological model in our theories of mind. This new interpretation of folk psychology as a type of phenomenological model provides insights into its scientific benefits. Phenomenological models have an essential role in the sciences of the mind distinct from, but necessarily complementary to, the mechanistic models we use to understand underlying neurological and physiological mechanisms responsible for behaviour. In this regard, intentional descriptions work in conjunction with mechanistic descriptions, each with a different scientific burden to bear in our study of the mind.

Julie Jebeile, Institute for History and Philosophy of Science and Technology
From Models to Simulations: How is it Possible to Overcome the Loss of Understanding?

Unlike traditional models, simulation models hardly provide a genuine understanding of the phenomena under study. On the one hand, complicated numerical methods for solving equations are embedded in the simulation models. On the other hand, simplifications and idealizations are often reduced to a minimum. In such cases, the relations between the inputs and outputs of simulations are not easily graspable by a single human mind, and we arrive at what Johannes Lenhard (2006) calls the “complexity barrier” of simulation models.
Once this complexity barrier is reached, how is it possible to overcome the induced loss of understanding? A recent response claims that such a possibility can be found in developing meta-models like those designed by economists. However, there is usually no such possibility available when we want to investigate complex systems, e.g. turbulent flows, spin glasses, population genetic systems, or stock markets. So what remains when all that we have for describing target phenomena are simulation models? Can we ever gain understanding from simulation models?
In order to answer this question, I propose to examine the actual scientific practices in laboratories before and after the simulation process. These practices offer insight into the kind of understanding the users of simulations are looking for. From their examination, I outline a pragmatic account of understanding according to which the user can understand the simulated phenomena by carrying out an investigation of the visual representations that simulations generally provide.

Koray Karaca, University of Wuppertal
Understanding Data-Acquisition Through Process Modeling: The Case of the ATLAS Experiment

Diagram models have been in use for at least three decades in a wide range of applications in systems and software engineering. However, they have not yet received the attention of philosophers of science. In this work, I shall examine diagram models and their use in the ATLAS experiment, one of the high-energy particle physics experiments currently underway at the Large Hadron Collider at CERN. I shall discuss three types of diagram models, namely data-flow, sequence, and communication diagrams. I shall point out that while data-flow diagrams involve graphical modeling of the processes that capture, manipulate, store, and distribute data between a system and its environment, sequence and communication diagrams graphically represent the ways in which the objects in a system interact or communicate with each other to achieve a common task. I shall examine the use of these diagram models in the data-acquisition system of the ATLAS experiment and conclude that they are used to organize the various complex experimental procedures leading to the acquisition of experimental data. I shall thus argue that diagram models serve as “data-acquisition models” in the ATLAS experiment. Moreover, I shall point out that diagram models encompass certain sets of standards that enable the exchange of ideas among different sub-groups of the ATLAS experiment, such as the detector, data-selection and data-analysis sub-groups. In this sense, diagram models enhance communication within the ATLAS collaboration, thereby saving time and resources.

Ashley Graham Kennedy, University of Virginia
Idealization and Inference: How False Models Explain

In this paper I argue for a non-representationalist view of model explanation via an examination of two models from contemporary astrophysics. I use these examples to show that scientific models require idealizations in order to explain. The view that I present stands apart from other accounts of model explanation, because it does not depend upon any one view of scientific representation, either strong or deflationary. Instead, I argue that the explanatory power of models is not dependent upon the degree of representational accuracy or adequacy that the model has, but rather upon its false, non-representational, components. It is the manipulation of these components against a background of realistic data that generates the contrastive cases necessary for model explanation.

Tarja Knuuttila, University of Helsinki
Andrea Loettgers, California Institute of Technology
Synthetic Modeling and the Functional Meaning of Noise

In synthetic biology the use of engineering metaphors to describe biological organisms and their behavior has become common practice. The concept of noise provides a compelling example of such transfer. But the use of this notion might also seem confusing: while in engineering noise is a destructive force perturbing artificial systems, in synthetic biology it has acquired a functional meaning. It has been found that noise is an important factor in driving biological processes in individual cells or organisms. What is the epistemic rationale for using the notion of noise in both of these opposite meanings? One philosophical answer to this question is provided by the idea of negative analogy. According to it, negative features that come from an analogical comparison between two fields can be used as inference opportunities prompting theoretical development (e.g. Hesse 1966, Morgan 1997, Bailer-Jones 2009). But this is only part of the story. We will argue that the notion of noise in synthetic biology actually subsumes more heterogeneous interdisciplinary relations and influences, which are drawn together by the combinatorial modeling practice characteristic of the field. In synthetic biology it is customary for the same scientists to carry out and combine experiments on model organisms, mathematical modeling, and synthetic modeling. In our account of the emergence of the functional meaning of noise in synthetic biology we pay particular attention to the various ways in which analogical reasoning was intertwined with the use of general computational templates (cf. Humphreys 2002, 2004).

Ronald Laymon, University of Texas, Austin
The Resurrection of Ancient Genes: Heuristic Methods of Exploiting the Data Produced by Computer Generated Phylogenetic Reconstructions to Determine the Mechanistic Basis for the Evolution of Extant Phenotypes 

There are no genetic fossils. That is a serious impediment for evolutionary biologists seeking to uncover the mechanistic basis for the emergence of complex systems. Current methods of computer-generated phylogenetic reconstruction may, however, be used to resurrect genetic fossils in some cases. But because of epistasis, the phylogeny and synthesized ancestral genes are not sufficient to reveal the mechanistic basis for the evolutionary process. Experimental examination of all mutational combinations and trajectories from phylogenetic node to node is not a practical possibility. This presentation focuses on how a selection heuristic and subsequent experimental examination were used to uncover the mechanistic basis for the evolution of certain nuclear receptors. It will also be shown that the mechanistic basis is not modular in a sense that some philosophers have claimed is necessary for an acceptable mechanistic explanation. Epistasis and modularity are thus inherently in conflict.

Chiara Lisciandra, Tilburg Center for Logic and Philosophy of Science
Ryan Muldoon, University of Western Ontario
Stephan Hartmann, Tilburg Center for Logic and Philosophy of Science
Modeling and Simulation in Political Philosophy: On the Emergence of Norms

The emergence of norms in a society has recently attracted the attention of philosophers of social science, economists and political philosophers, both because we can learn much philosophically from the study of models and simulations of norm emergence, and because of the methodological questions it raises. To explore these issues, our study considers the emergence of the simplest kind of norm: descriptive norms. These are rules of behavior which often spread in a society if the members of the society exhibit a certain level of conformism. An individual feels the urge to follow a descriptive norm if an increasing share of the relevant part of the population already follows it. To model the process, we develop a Bayesian decision-making framework within an agent-based simulation. This allows us to model the individual decision problem (i.e. the calculation of a posterior probability) based on three quantities: a propensity towards the action in question (= the prior), other people’s behavior (= the evidence), and the individuals’ degree of conformity (which is related to the likelihood ratio). Drawing upon our previous results on a non-Bayesian probabilistic process (Muldoon et al. 2010), we point out the comparative advantages of the Bayesian framework for modeling social behavior and we explore the relations between automatic norm compliance and rational processes of belief updating.
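A minimal Python sketch of the updating step described above may help fix ideas. The abstract specifies only the three ingredients (a prior propensity, observed behavior as evidence, and a conformity-related likelihood ratio), not their functional form, so the odds-ratio parameterization and all numbers below are illustrative assumptions rather than the authors' model.

# Illustrative sketch only: posterior probability of adopting a descriptive
# norm, combining a prior propensity, observed neighbours, and a
# conformity-related likelihood ratio. The specific likelihood form is an
# assumption; the abstract does not state it.

def posterior_follow(prior, n_following, n_observed, conformity):
    """Posterior probability that an agent adopts the norm.

    prior        -- prior propensity toward the action (0 < prior < 1)
    n_following  -- observed neighbours already following the norm
    n_observed   -- total neighbours observed
    conformity   -- assumed likelihood ratio per conforming neighbour
                    (> 1 means observed followers push toward adoption)
    """
    prior_odds = prior / (1.0 - prior)
    # Each conforming neighbour multiplies the odds by `conformity`,
    # each non-conforming neighbour divides them by it (illustrative choice).
    likelihood_ratio = conformity ** (n_following - (n_observed - n_following))
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Example: a mildly inclined agent (prior 0.4) who sees 7 of 10 neighbours
# already following the norm, with a conformity factor of 1.3.
print(round(posterior_follow(0.4, 7, 10, 1.3), 3))

Embedding such an update in an agent-based loop, in which every agent observes its neighbours and re-decides each round, then yields the kind of population-level dynamics the abstract describes.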

Elisabeth A. Lloyd, Indiana University
The Role of ‘Simple’ Empiricism in the Debate about Satellite Data and Climate Models

Recent writings on approaches to theories, models, data, and empiricism have been struggling to articulate more specifically how the theory-ladenness and model-dependency of data manifest themselves in the sciences, e.g. in the work of Ronald Giere, Bas van Fraassen, and Isabelle Peschard. In this paper, I draw out how disparate foundational approaches to data and models manifested themselves in a two-decade-long debate about the existence of and evidence for global warming in the tropical troposphere. It is a case in which the climate models appeared to some data handlers to be falsified by the satellite and weather balloon data, the latter of which they took as straightforwardly or transparently representing the state of the climate. Many modelers and some other data handlers, however, said they would not accept the data as falsifying the models, and would not, in fact, accept the datasets as representative of the climate state itself. In the end and in short, the models were right and the data were wrong, and therein lies an interesting story about how to think about data and models.

Stefan Mendritzki, Eindhoven University of Technology
Phenomenal Adequacy and Mechanistic Explanation: Clarifying the Concept of Validation of Agent-Based Models

The use of agent-based models (ABMs) is often partially justified by claims that the resulting models are more realistic than other types of models. If this were the case, it should be reflected in discussions of validation, the process of comparing models to their associated target systems. However, the best developed idea of validation of ABMs, output validation, fails to address these intuitions. The alternative of structural validation could in principle address them but suffers from conceptual ambiguities. The concept of mechanism validation is developed and claimed to be a superior alternative to structural validation. The key advance of mechanism validation over structural validation is that it makes possible claims about levels of mechanism plausibility. This explicit view of the relationship between mechanisms and ABMs reinforces Epstein’s ‘generativist’ claim that ABMs are unique in being by definition composed of mechanisms. It also agrees that generation is necessary for developing possible explanations. Where it goes beyond the generativist approach is in characterizing the space between possible and actual explanations. An important methodological result is that ABMs focussing on mechanism validity play a different role in the research process compared to ABMs focussing on output validity. This has an influence on such issues as the specificity of target systems, framing as model selection vs. model construction, and the potential strength of validity tests.

Paolo Palmieri, University of Pittsburgh
An integrated theory of human hearing

This project aims at developing an integrated theory of human hearing. The novelty of the project consists in combining the history and philosophy of sound perception, philosophical reflection on epistemological issues raised by sound perception, psychoacoustic experimentation, and computer modeling of human hearing.

Roger Stanev, University of British Columbia
Modeling and Evaluating Statistical Monitoring Decisions of Clinical Trials

In my research, I propose a decision theoretic framework—a second order decision framework together with simulations of it—that provides means for modeling and evaluating statistical monitoring decisions. Incidentally, the framework is not arbitrary. It is sensitive to and based on what data monitoring committees (DMCs) often take to be relevant factors, including their own statistical procedure and plan for how to conduct trials (Stanev 2011 is an example of such work). My talk, however, will focus on a specific problem regarding the paired tasks of modeling and evaluation: what does it take for an interim statistical decision of an RCT to be considered a good decision? This question is important not only to philosophers and modelers but also to anyone who wants to evaluate interim monitoring decisions of RCTs. While statistical approaches tend to focus on the epistemic aspects of statistical monitoring rules (cf. Proschan, Lan, and Wittes 2006), often overlooking ethical considerations, ethical approaches to RCTs tend to fall short of providing the necessary means for evaluating monitoring rules and early stopping decisions (cf. Freedman 1987, 1996) by neglecting the epistemic dimension.
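For readers unfamiliar with how monitoring rules are typically assessed, the following Python sketch shows the generic simulation side of such an evaluation: estimating the operating characteristics (early-stopping rate and expected sample size) of a candidate interim boundary under the null hypothesis. The look schedule, the flat z-boundary, and all numbers are illustrative assumptions; this is not Stanev's second-order framework, only the kind of first-order simulation such a framework would sit on top of.

import random, statistics

# Illustrative sketch only: simulate a two-arm trial under the null
# (no treatment effect) and estimate how often a candidate interim
# monitoring rule stops the trial early, and at what average sample size.
# The boundary value and look schedule are arbitrary assumptions.

def run_trial(looks=(50, 100, 150), boundary=2.5, effect=0.0):
    """Return (stopped_early, n_per_arm_used) for one simulated trial."""
    control, treatment = [], []
    for n in looks:
        while len(control) < n:
            control.append(random.gauss(0.0, 1.0))
            treatment.append(random.gauss(effect, 1.0))
        diff = statistics.mean(treatment) - statistics.mean(control)
        se = (statistics.pvariance(treatment) / n
              + statistics.pvariance(control) / n) ** 0.5
        if abs(diff / se) > boundary:          # boundary crossed at this look
            return True, n
    return False, looks[-1]

random.seed(1)
results = [run_trial() for _ in range(2000)]
early_rate = sum(stopped for stopped, _ in results) / len(results)
mean_n = statistics.mean(n for _, n in results)
print(f"early-stop rate under the null: {early_rate:.3f}, mean n per arm: {mean_n:.1f}")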

Eran Tal, University of Toronto
The Epistemology of Calibration: Modeling and Simulation in Contemporary Physical Measurement

The aim of calibration is to establish a relation between possible readings of a measuring instrument and corresponding measurement outcomes. According to the naive operationalist view, calibration is carried out by extrapolating data obtained by reference to standards. This view fails to make sense of contemporary standardization practices in two respects. First, the operationalist account does not explain how standards themselves are evaluated for accuracy. Second, the operationalist view fails to account for the centrality of theoretical models to the calibration process. As this paper shows, metrologists (i.e. experts in reliable measurement) do not simply choose standards arbitrarily. Rather, realizations of basic units such as the second, meter, and kilogram are based on complex theoretical and statistical models.

Lisa Warenski, Union College
Financial Modeling in the Banking Industry: Some Reflections from the Trenches
This paper examines the epistemology of risk assessment in the context of financial modeling for the purposes of making a loan underwriting decision. An actual request for financing from a company in the paper and pulp industry is considered in some detail. The paper and pulp industry was chosen because (1) it is subject to some specific risks that have been identified and studied by bankers, investors, and managers of paper and pulp companies and (2) certain features of the industry enable analysts to quantify the impact of specific risk events of a given dimension on a company’s future financial performance. While companies in other industries may be subject to similar risk factors, the impact of risk events may be more difficult to gauge in those industries. The ability of financial analysts to model the impact of a risk event, and hence quantify a credit risk, increases the predictive accuracy of the model. I argue that bankers and regulators should recognize the uncertainty associated with unquantifiable credit risk in financial models and view it as a credit risk factor in and of itself. Evaluating the relative degree to which credit risk is quantifiable in financial models is a potentially significant yet largely unrecognized tool for credit analysis. I consider some specific applications of this assessment tool for managing risk within the banking industry.

Richard Kent Zimmerman, University of Pittsburgh
Landmines for MIDAS: A Critique of the Philosophical Origins of
Health Promotion Theory and of the Legacy of the Vienna Circle

According to one of the main texts in health promotion, the dominant paradigm in the field is logical positivism, which was developed by the Vienna Circle. The influence on American academics has been large, as members of the Circle emigrated and influenced health education and promotion. Indeed, a number of premises underlying discussions at some MIDAS meetings can be traced to the influence of the Vienna Circle. Central features of logical positivism include the unity of science, opposition to metaphysics, utilitarianism, and reductionism, which are critiqued here. The current efforts of MIDAS to investigate a thicker view of reality that includes law, sociology, and psychology, in addition to infectious disease dynamics, are to be applauded. However, for MIDAS to make a greater impact as it moves further into health promotion, the foibles of logical positivism must be avoided while virtues such as humility, justice, and openness to peer review remain well regarded.

Posters

Gary An, MD, Department of Surgery, University of Chicago
Insights into Core Epistemological Issues in Biomedical Research Through the Use of Computational Modeling and Simulation: Addressing the Fallacy of Ontological Truth and Learning to Deal with Incompleteness

The bio-complexity of systems diseases, such as cancer and sepsis, has led to a realization that traditional reductionist methods are insufficient for their effective characterization, raising issues concerning the epistemological basis of the reductionist scientific method. While, intellectually, researchers understand the limits of empiricism, from a functional standpoint the limitations of Logical Positivism have not permeated the working biomedical research community. Mechanism-based, dynamic simulations serve three important roles in facing the epistemological limits associated with biomedical research: 1) the explicit nature of simulation rules provides an enlightening counterpoint to assumptions inherent in biological experimental models, 2) simulation makes explicit the limited extent of biomedical knowledge and forces recognition of the gaps in that knowledge, and 3) simulations bridge discovered limitations in mappings between biological experimental models and their clinical referents. I believe that the most important role of mathematical and computational modeling/simulation in the biomedical arena is as rhetorical objects for epistemological discourse. They can aid in defining the experimental frame for evaluating a particular problem, and in comparing competing hypotheses more explicitly. A simulation-based scientific discourse can avoid the futile infinite regress of trying to find the “next” mechanism/mediator by providing a means of dealing with the inherent incompleteness of biological knowledge through integrating and instantiating what is already known.

Nina Atanasova, University of Cincinnati 
Explanations, Models, and Simulations in Neuroscience

Mechanism is among the leading philosophies of neuroscience nowadays. Its proponents generally assume that a major aim of contemporary neuroscience is discovering mechanisms underlying neural phenomena and that this is a process of explaining. Thus the main question that concerns them is: “What constitutes a good mechanistic explanation?” (e.g. Craver 2007, Bechtel 2008).
I argue that:
(1) The normative project of specifying the requirements for good explanations in neuroscience, as presented by Craver (2007), is flawed at least because it doesn’t fit the actual practice of neuroscience due to the requirement for completeness;
(2) Bechtel admits that our knowledge of underlying mechanisms is partial due to epistemological restrictions but fails to demonstrate how this partial knowledge plays a satisfactory explanatory role;
(3) The seeming impasse of the situation, as it appears from the Craver-Bechtel dilemma (either a methodologically strong but unrealistic or an epistemologically realistic but methodologically weak view of mechanistic explanations), could be overcome if one abandons the idea that the study of mechanisms in neuroscience is directed exclusively or even mainly towards producing (accurate) explanations and acknowledges that simulations of mechanisms generate valuable cognitive products of a significantly different kind from traditional theoretical explanations;
(4) This result, in turn, suggests the need for a novel approach to the epistemic evaluation of models and simulations in neuroscience. I use as an example the research on depression and anxiety that employs simulations with animal models.

Thomas Breuer, Research Center PPE
Collateralised Lending and the Leverage Cycle: In Search of Experimental Evidence

Geanakoplos [The leverage cycle. Discussion Paper 1715, Cowles Foundation, 2009] develops a general equilibrium theory of the recent financial crises based on competitive collateralized lending. The theory is important because it identifies a potentially structural crisis mechanism at a relatively fundamental level. In his theory the leverage cycle (and the financial crises going with it) is an endogenous feature of competitive markets for collateralized lending: it is an equilibrium phenomenon. In line with other general equilibrium models, Geanakoplos makes non-trivial assumptions: agents are assumed to be perfectly rational and have perfect information, and the decision situation is highly simplified. The model delivers properties of an equilibrium state, but it remains open to what degree the equilibrium state is representative of the system behaviour.

I make a proposal for (1) an experimental market and (2) an agent-based simulation of this market. To this end the initial model used by Geanakoplos, which uses a continuum of agents, is reformulated as a finite type model which can be implemented, in principle, in a laboratory experiment and an agent-based simulation. The simulation allows for a thorough analysis of convergence or non-convergence to equilibrium for a large number of agents, but has to make assumptions about the behaviour and preferences of agents. The experimental market is populated by a handful of real agents with their bounded rationality. Contrasting the experimental market with Geanakoplos’ equilibrium model addresses the question of whether imperfect rationality and incomplete information matter. The agent-based simulation should indicate to what extent the equilibrium state is representative of the system behaviour.
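As a rough illustration of what a finite-type, agent-based reformulation with a convergence check might look like in code, here is a minimal Python sketch. It is not Geanakoplos's collateral model (there is no leverage or borrowing here); it only shows the generic structure of a handful of agent types, an iterated price adjustment, and an explicit convergence test.

# Generic sketch, not Geanakoplos's model: a few agent types with
# heterogeneous valuations of one asset, a fixed supply, and a simple
# price-adjustment loop that either converges to a market-clearing price
# or reports non-convergence.

def demand(price, valuations, units_per_type):
    """Each type buys its units if its valuation exceeds the price."""
    return sum(u for v, u in zip(valuations, units_per_type) if v > price)

def find_clearing_price(valuations, units_per_type, supply,
                        price=1.0, step=0.01, max_iter=10_000):
    for _ in range(max_iter):
        excess = demand(price, valuations, units_per_type) - supply
        if excess == 0:
            return price, True            # converged to a clearing price
        price += step if excess > 0 else -step
    return price, False                    # no convergence within max_iter

# Five agent types (optimists value the asset more than pessimists),
# 10 units of demand per type, 20 units of total supply.
valuations = [0.2, 0.4, 0.6, 0.8, 1.0]
print(find_clearing_price(valuations, [10] * 5, supply=20))

An agent-based version of the leverage cycle would replace the demand function with agents' collateral-constrained portfolio choices, but the convergence question posed above can be examined in the same way.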

Roger S. Day, University of Pittsburgh
Challenges of Realism and Validation in Simulation with a Big Intervention Space

Simulation has a rich history in support of education and training. Early efforts aimed at instilling concrete, measurable skills, presenting learners with a scenario and a short list of discrete choices, with feedback classifying choices into “right” and “wrong”. Then one moves on to the next problem. In reality, the realm of choices is rarely convertible to a short list, the “correct” answer is not always known, and one does not “move on” from a challenge after one choice, but engages a sequence of choices interwoven with responses of the simulated system, be it a factory, patient, ecosystem, economy, or population.

Our experience is with cancer education for medical students and oncology fellows, focused on tumor heterogeneity and cell population dynamics. With the explosion of molecular biology data (and perhaps knowledge), these questions haunt us: How will we train new cancer scientists to handle the vast volume of information, develop their knowledge, use their creative powers effectively, and find the courage to emerge from the cover of overspecialization in tackling the cancer problem? One approach is to create a learning environment based on simulation of cancer biology with educational applications.

We developed the Oncology Thinking Cap (OncoTCap) to allow a broad intervention space, freeing learners from the preprogramming of educators and encouraging exploration. This poses tough challenges: authoring biomedical simulations efficiently; face validity and deep validity; finding the right degree of realism; and managing randomness. Our experience with OncoTCap illustrates these issues.

Ahmet Erdemir, Cleveland Clinic 
Knowledge Discovery Through Computational Biomechanics: When We Cannot (or Do Not Want to) Measure

In our exploration of the biomechanical system, we are commonly challenged by our inability to obtain direct measurements i) to explain the function of the healthy or diseased state of the human body and ii) to establish a knowledge base for clinical decision making. As a result, computational modeling, through descriptive analysis and predictive simulations, has been supporting the research of biomechanical function and the prescription of interventions. The biomechanics literature provides many supportive arguments for simulation-based knowledge discovery: estimation of muscle forces to understand neuromuscular control and related movement disorders; characterization of subject-specific tissue properties that may provide insight into the progression of pathology; multiscale investigations to establish the causal mechanical and biological relationships between body/organ function and tissue/cell/molecular response; identification of damage mechanisms and risk factors; and strategies for management of pathological conditions through virtual prototyping. Nonetheless, the value of modeling and simulation for knowledge discovery is dictated by the balance between engineering constraints, clinical requirements, and philosophical, sociological and political boundaries. For example, the desire for accurate representation and solution of the physiological system competes with clinical urgency, which dictates accelerated model development using limited data. Effective mining of data to openly deliver information through scientific and public dissemination and to aid regulatory agencies and public policy making can help simulation-based scientific discovery and its delivery for improved health care.

Joseph Andrew Giampapa, Software Engineering Institute, Carnegie Mellon University
A Framework for Modeling and Simulating Human Cyber-Physical Systems

A cyber-physical system (CPS) is an integration of computation and physical processes. A human cyber-physical system (HCPS) is the integration of physical robots and software agents with human activities – at the levels of individuals, organizations, and combinations thereof – operating in a context that is both human and cybernetic. When reasoning about the interactions of individuals in an HCPS, their features of being are less important than the structural features that determine how they can act and interact with other HCPS individuals. Through this shift of emphasis, HCPS individuals – be they robots, autonomous software agents, human individuals, human organizations, or teams composed of any combination of these – can be generically referred to as agents. The structural features that become important are: the inter-agent communication network models, the models of authority relations and normative behavior, the generalized distributed coordination protocol models, and coordination refinement models derived from application- and domain-specific considerations.

Each model is typically studied in isolation from the other models, with limiting assumptions about their interactions with each other. For example, coordination protocols often assume complete communications connectivity and point-to-point communications between any two agents. When real HCPSs are assembled and tested, however, each agent participates in multiple models, and it cannot be assumed that any two adjacent agents are participating in the same models. This poster introduces a framework by which the assembly of human cyber-physical systems can be studied, and the parameter space for configuring their interactions can be explored and understood.
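As a purely illustrative sketch (my own construction, not the framework introduced in the poster), one way to make this shift of emphasis concrete is to carry the structural features named above as fields on a single generic agent record, so that robots, software agents, people, organizations, and teams differ only in those fields and need not share the same models:

from dataclasses import dataclass, field

# Illustrative data structure only: a generic "agent" record carrying the
# structural features named in the abstract (communication links, authority
# relations, coordination protocols), regardless of what kind of individual
# the agent is.

@dataclass
class Agent:
    name: str
    kind: str                  # "robot", "software", "human", "organization", "team"
    communicates_with: set = field(default_factory=set)  # inter-agent communication network
    reports_to: set = field(default_factory=set)         # authority / normative relations
    protocols: set = field(default_factory=set)          # coordination protocols joined

def shared_protocols(a: Agent, b: Agent):
    """Adjacent agents cannot be assumed to participate in the same models."""
    return a.protocols & b.protocols

rover = Agent("rover-1", "robot", {"ops-team"}, {"ops-team"}, {"task-auction"})
ops = Agent("ops-team", "team", {"rover-1"}, set(), {"task-auction", "shift-handoff"})
print(shared_protocols(rover, ops))   # only the protocols both actually join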

Vanathi Gopalakrishnan, PhD, University of Pittsburgh, Department of Biomedical Informatics
Model Validity and Verification in the Context of Learning Classification Rules from Biomedical Data

Predictive modeling of biomedical data arising from clinical studies for early detection, monitoring and prognosis of diseases is a crucial step in biomarker discovery. Since the data are typically measurements subject to error, and the sample size of any study is very small compared to the number of variables measured, the validity and verification of models arising from such datasets significantly impacts the discovery of reliable discriminatory markers for a disease. Our group has been studying these questions for over a decade in close collaboration with domain experts. We have developed novel enhancements to the learning of classification rule models from biomedical datasets that provide fundamental frameworks to better answer these questions. We have also shown that classification rule learning produces modular, interpretable rule models that can be used successfully for biomarker discovery and verification.
One novel enhancement is an algorithm called Bayesian Rule Learning (BRL). The BRL method uses a Bayesian score to quantify the uncertainty about the validity of a rule model. This method seems promising, as it produces accurate, parsimonious models that could provide bench scientists with biomarker panels for disease prediction that contain fewer markers for further verification. Another novel enhancement is a framework called Transfer Rule Learning (TRL). The TRL framework represents an important development for the analysis of clinical data, as it supports incremental building, verification and refinement of rule models. I will provide insights into these methods and their application to proteomic biomarker discovery for Amyotrophic Lateral Sclerosis and lung cancer.
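The abstract does not spell out the Bayesian score that BRL uses, so the Python fragment below is only a generic illustration of the underlying idea of scoring a classification rule with a Bayesian score (here a Beta-Binomial marginal likelihood); it should not be read as the actual BRL scoring function.

from math import lgamma

# Generic illustration, not the actual BRL score: a Beta-Binomial marginal
# likelihood that scores a classification rule by how well its predictions
# hold up over the cases it covers.

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayesian_rule_score(n_correct, n_covered, prior_a=1.0, prior_b=1.0):
    """Log marginal likelihood of the rule's hits and misses under a Beta prior."""
    return (log_beta(prior_a + n_correct, prior_b + n_covered - n_correct)
            - log_beta(prior_a, prior_b))

# Two rules covering the same 12 cases, one getting 11 right and one getting 8:
# the score favours the more accurate rule.
print(bayesian_rule_score(11, 12), bayesian_rule_score(8, 12))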

Ethan I. Huang & Chia-Ling Kuo, University of Pittsburgh and National Institutes of Health
Fourier patterns represent tone-burst and click response latencies in auditory nerve fibers

Despite the structural differences of the middle and inner ears, the latency pattern in auditory nerve fibers in response to an identical sound has been found to be very similar across numerous species. Studies have shown this similarity in species with remarkably distinct cochleae or even without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be explained by the concept of cochlear traveling waves that is generally accepted as the main cause of the neural latency pattern.
We hypothesize that the hearing organ synchronizes auditory neural activity with a mathematical pattern (the Fourier pattern) created by marking the next high peak for each sinusoid component of the stimulus, to maximize accuracy and efficiency. A combination of experimental, correlational, and meta-analytic approaches was used to test the hypothesis. We manipulated phase encoding (with vs. without the Fourier pattern) and stimulus to test their effects on the predicted latency pattern. We compared animal studies using the same stimulus to determine the degree of relationship between the predicted and neural latency patterns.
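The abstract does not describe the construction procedure in detail, so the Python sketch below is only one possible reading of "marking the next high peak for each sinusoid component of the stimulus": take the stimulus spectrum and, for each non-negligible component, compute the time of its first peak after onset. The FFT decomposition, the cosine-phase convention, and the amplitude threshold are all assumptions made for illustration.

import numpy as np

# Illustrative reading of the "Fourier pattern": decompose the stimulus into
# sinusoidal components and mark, for each sufficiently strong component,
# the time of its next peak after stimulus onset.

def fourier_pattern_markings(stimulus, fs, rel_threshold=0.1):
    """Return (frequency, time of first peak after onset) pairs."""
    spectrum = np.fft.rfft(stimulus)
    freqs = np.fft.rfftfreq(len(stimulus), d=1.0 / fs)
    amps = np.abs(spectrum)
    markings = []
    for f, a, c in zip(freqs, amps, spectrum):
        if f == 0 or a < rel_threshold * amps.max():
            continue                              # skip DC and negligible components
        phase = np.angle(c)                       # component ~ cos(2*pi*f*t + phase)
        t_peak = ((-phase) % (2 * np.pi)) / (2 * np.pi * f)
        markings.append((f, t_peak))
    return markings

# Example: a two-tone burst sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.02, 1 / fs)
stim = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
for f, tp in fourier_pattern_markings(stim, fs):
    print(f"{f:7.1f} Hz  first peak at {tp * 1000:.2f} ms")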
Our results showed that each marking accounts for a large percentage of the corresponding peak latency in the peristimulus-time histogram. For each of the stimuli we considered, the latency predicted by the Fourier pattern was highly correlated with the latency counterpart in the auditory nerve fiber of the representative species.
This phase-encoding mechanism in Fourier analysis is suggested to be the common mechanism that makes the across-species similarity possible.

Delia Shelton, Ali Ghazinejad, Hamid R. Ekbia, Indiana University
Umweltian Empirical Studies and Developmental Situated-Embodied Agents: Computational Models of Group Behavior

The unique sensory modalities of organisms give rise to different perceptions of the environment. This premise is the basis of von Uexküll’s claim that non-human animals are best understood by studying them in light of their self-worlds, not ours. The failure to employ von Uexküll’s notion of umwelts has resulted in the use of poor experimental procedures to assess the behavioral and physiological development of infant rodents. Further, these experimental manipulations have led to the misclassification of infant rodents as animals that are unable to thermoregulate. The reason for this supposition lies in the implementation of experimental conditions that neglect infant rodents’ developmental niche, which is highly suited to their altricial state. The major points of contention lie in two faulty assumptions: 1) there is a central measure of body temperature, and 2) the physiological measures of infants are comparable to those of adults. The computer simulation proposed here attempts to model the contact behavior of infant rodents as a function of thermoregulation in a three-dimensional environment. The model takes a multi-agent modeling approach to capture the emergent properties of pups interacting with other pups and the environment in regulating their body temperature. The model can serve as a medium to make predictions on how infant mice will behave under different environmental conditions.
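A toy Python sketch of the kind of multi-agent model the poster describes (pups whose heat loss depends on contact and who seek contact when cold) is given below. It only illustrates the modeling approach; the update rules and constants are my assumptions, not the model proposed in the poster.

import random, math

# Toy illustration only: pups as agents in a 3-D nest, losing heat in
# proportion to exposed surface (fewer contact neighbours means more
# exposure), exchanging heat on contact, and moving toward the nearest
# pup while cold and under-huddled.

class Pup:
    def __init__(self, pos):
        self.pos = pos           # [x, y, z] position in the nest
        self.temp = 37.0         # body temperature, deg C

def dist(a, b):
    return math.dist(a.pos, b.pos)

def step(pups, ambient=20.0, contact_radius=1.0, dt=0.1):
    for p in pups:
        nbrs = [q for q in pups if q is not p and dist(p, q) < contact_radius]
        exposure = max(0.2, 1.0 - 0.3 * len(nbrs))     # huddling reduces exposure
        p.temp += dt * (0.5                                    # metabolic heat
                        - 0.5 * exposure * (p.temp - ambient)  # loss to the air
                        + 0.1 * sum(q.temp - p.temp for q in nbrs))  # contact exchange
        if p.temp < 36.0 and len(nbrs) < 2:
            nearest = min((q for q in pups if q is not p), key=lambda q: dist(p, q))
            p.pos = [x + 0.2 * (nx - x) for x, nx in zip(p.pos, nearest.pos)]

random.seed(0)
pups = [Pup([random.uniform(0, 5) for _ in range(3)]) for _ in range(8)]
for _ in range(300):
    step(pups)
print([round(p.temp, 1) for p in pups])  # warmer pups are those that ended up with more contact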

Dennis W. Tafoya, CompSite, Inc.
Simulations and the Application of Complexity Theory to Study Human Social Systems: An Examination of Epistemological Issues Associated with the Use of Simulations in a Study of the Emergence and Self-Organization of Labor Unions as Complex Adaptive Systems

Complexity theory is a tool used to examine and explain the ways self-directed interactions emerge and can organize to produce tangible, meaningful phenomena. The theory is used widely in the physical and natural sciences, but its capacity to describe self-directed phenomena, as well as its robust potential to contribute to efforts to quantify, even predict, behavior, has led to growing interest in applying complexity theory to the social sciences -- a challenging but not unreasonable goal.

Moving from applications once reserved for the physical and natural sciences to the human social sciences is not always straightforward. This paper introduces new models for examining behavior in organizations and uses simulations to explore one particularly distinctive phenomenon, the joining process associated with labor unions. Together, these materials highlight special issues, from the ethical to the pragmatic to the legal, that may arise in the study of organizations under special conditions.

From an epistemological perspective, the models and simulations illustrate benefits and potential risks regarding the use of models and simulations in applied research. From a theoretical perspective, the material illustrates the nature of complexity theory in social settings and adds to our understanding of how the theory may contribute to the study of human social systems. Examining a common experience such as the joining process associated with unions provides insights into a facet of collective behavior among special interest groups or, on a larger scale, among social movements generally. In the end, the material provides a level of timeliness and realism most readers will find familiar and useful.

Sebastian Zacharias and Lenel Moritz, Max Planck Institute for the History of Science and University of Mannheim, School of Law and Economics
Models as Representatives, Simulations as Tentative Transfer. A Case Study of the Successes and Failures of the Solow-Swan Growth Model
Simulations are experiments run on models instead of real systems. Thus, in order to understand simulations, we need to analyze how models differ from real systems and how this difference affects simulations. As a case in point, we study the Solow-Swan model of economic growth and its successes and failures in simulation.

We suggest a novel analytic framework which distinguishes five elements of models: (1) the Situation Type in which the model applies, (2) the Object Class to which the model applies, (3) the Input, i.e. the model’s explaining phenomena, (4) the Output, i.e. the model’s phenomena to be explained, and (5) a Mechanism which transforms Input into Output. We understand models as representatives/analogue systems in the sense that the model’s Mechanism does not need to reflect reality. Instead, a model successfully models a real system iff both assign the same Output to the same Input. Thus, the Solow-Swan growth model provides a very simplified, if not distorted, account of actual economic processes, yet it is a successful model.
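To make the Input/Mechanism/Output reading concrete, here is a minimal Python sketch of the standard textbook Solow-Swan mechanism, in which capital per effective worker accumulates out of savings net of dilution and depreciation. The parameter values are illustrative and are not taken from the poster.

# Minimal sketch of the Solow-Swan mechanism in the framework's terms:
# Inputs (savings rate s, population growth n, technology growth g,
# depreciation delta, capital share alpha) are transformed into an Output
# (the path of capital per effective worker, and hence income).

def solow_path(s=0.25, n=0.01, g=0.02, delta=0.05, alpha=0.33,
               k0=1.0, periods=200):
    """Iterate k_{t+1} = k_t + s*k_t**alpha - (n + g + delta)*k_t."""
    k = k0
    path = [k]
    for _ in range(periods):
        k = k + s * k**alpha - (n + g + delta) * k
        path.append(k)
    return path

path = solow_path()
k_star = (0.25 / (0.01 + 0.02 + 0.05)) ** (1 / (1 - 0.33))  # analytic steady state
print(round(path[-1], 3), round(k_star, 3))   # the simulated path approaches k*

In the framework's terms, a Tentative Transfer would feed the same Mechanism the Inputs of a new Object or Situation (say, another economy's savings and population-growth rates) and treat the resulting Output as a hypothesis that still requires validation against the real system.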

Models have epistemological value only if they transcend the experimentally established regularities, i.e. the data on which they were built. We argue that simulations usually consist in extensions of established Input-Output relations to new Objects or new Situations, or both. We will demonstrate how the Solow-Swan model is employed in this manner.

We call such extensions ’Tentative Transfers’ for they are attempts to transfer well-established regularities to new phenomena. We argue that Tentative Transfers are a powerful heuristic, providing guidelines for systematic inquiry. Successful simulation on the model, however, does not guarantee successful application. We stress that Tentative Transfer always requires validation on the real system; otherwise it is but a theoretically informed guess.

 