Representation, Idealization, & Explanation in Science
Talks at this conference
Friday April 5th
· 9-9:15am Opening remarks
· 9:15-10:30am Andrew Wayne “Model-based explanation in context”
· 10:30-11:45am Jill North “Metaphysics and Equivalence”
· 11:45am-1pm lunch
· 1pm-2:15pm Paul Teller “Pan-Perspectival Realism”
· 2:15-3:30pm Catherine Elgin “Models as Felicitous Falsehoods”
· 3:30-3:45pm coffee (at conference venue)
· 3:45-5pm John Norton “Approximation and Idealization: Why the Difference Matters”
· 5-6:15pm Otavio Bueno “Representing and Seeing (!) the Unobservable”
Saturday April 6th
· 9-10:15am Tarja Knuuttila & Natalia Carrillo-Escalera “Mechanisms and Epistemic Artefacts: Modeling The Nervous Impulse”
· 10:15-11:30am Marc Lange “What could mathematics be for it to function in distinctively mathematical scientific explanations?”
· 11:30-12:30pm lunch
· 12:30-1:45pm Michael Strevens “Idealization, Prediction, Difference-Making”
· 1:45-3pm Melissa Jacquart “Idealization and Representation in Astrophysical Computer Simulations”
· 3-3:15pm coffee & snacks (at conference venue)
· 3:15-4:30pm Chris Pincock “A defense of veritism about explanation”
· 4:30-5:45pm Elay Shech “Do Idealizations Afford Understanding? The Case of the AB Effect”
· 5:45-5:55pm Closing remarks
Title: Model-based explanation in context
Abstract: This talk focuses on the connection between idealized local models and the larger scientific fields in which they are embedded. I contend that, in physics at least, local models are explanatory only when appropriately related to a global theory, and I develop a necessary condition for this relation. I use it to offer a solution to the vexed problem of explanatory asymmetry for non-causal accounts of explanation. Finally, I look at the case of recently detected gravitational waves. What is interesting here is that there are two different kinds of models for deriving gravitational wave predictions in general relativity. I show how the condition can help distinguish explanatory from non-explanatory idealized models in this case.
Title: Approximation and Idealization: Why the Difference Matters
Abstract: It is proposed that we use the term “approximation” for inexact description of a target system and “idealization” for another system whose properties also provide an inexact description of the target system. Since systems generated by a limiting process can often have quite unexpected—even inconsistent—properties, familiar limit processes used in statistical physics can fail to provide idealizations but merely provide approximations.
Title: What could mathematics be for it to function in distinctively mathematical scientific explanations?
Abstract: Recently, several philosophers have suggested that some scientific explanations work not by virtue of describing aspects of the world's causal history, but rather by citing mathematical facts. This paper investigates what mathematical facts could be for them to figure in such "distinctively mathematical" scientific explanations.
Title: Metaphysics and Equivalence
Abstract: There has recently been increased attention among philosophers of physics to the notion of theoretical equivalence: of when two theories say the same things in different ways. I argue that there has been too much focus on the formal aspects of scientific theories, at the expense of what I call their “metaphysical” aspects: what theories say about what there is in the world, what it is like, and how and why things behave in certain ways. Another way of putting the point, linking it up to a theme of this conference: philosophers have unduly ignored the explanatory apparatus of a theory, which is essential to a theory and to questions of its equivalence with other theories. Although my primary conclusion should be somewhat uncontroversial, it will lead to more controversial conclusions about the non-equivalence of theories ordinarily considered equivalent.
Title: Idealization and Representation in Astrophysical Computer Simulations
Abstract: One of the most fundamental scientific questions is “What is the universe made of?” To date, we only know the answer for about 4% of the universe, but astrophysicists tell us that perhaps 24% of the remaining universe content is dark matter. However, we do not know, in any fundamental sense, the nature of dark matter, nor do we know where all the dark matter resides. In this talk, I detail collaborative work between astrophysicists and philosophers attempting to search for some of the “missing” dark matter. Our research group’s hypothesis is that some of it resides in dark galaxies—pure dark matter halos that either never possessed or have lost their baryonic matter. We argue that we can locate dark galaxies by their signatures: the gravitational effects that they have on luminous galaxies through collision.
In this talk, I focus specifically on the role of idealization and representation in our computer simulations. By building computer simulations of dark galaxy/luminous galaxy interactions, we attempt to determine collisional morphology and kinematics. This case provides the opportunity to study the precise ways that idealization and representational trade-offs enter into the construction of simulations, and how they may determine values for simulation parameters. I argue that the use of simulation code that is flexible enough to de-idealize representations plays a specific role in our research group’s results. This is particularly salient when the simulations aim to connect a vast array of independent astronomical observations/phenomena (each of which are currently best explained by the presence of dark matter) to cosmologists’ more global arguments for dark matter.
Title: A defense of veritism about explanation
Abstract: Veritism about explanation is the view that truth is a necessary condition on genuine explanation. After clarifying what veritism does and does not involve, I respond to several recent objections to veritism that purport to draw on scientific practice. Five sorts of cases are often emphasized: (i) explanations that use entities like models that are not truth apt, (ii) past scientific theories that explain, despite their falsity, (iii) explanatory fictions or fables, (iv) abstract models that explain despite the omission of relevant details, and (v) idealized models that explain despite the distortion of relevant factors. I show how a defender of veritism has the resources to address these cases through a principled interpretation of these scientific practices.
Title: Pan-Perspectival Realism
Abstract: I reformulate conventional scientific realism as “referential realism”, the view that, when things go well, our theoretical terms have non-empty extensions. Referential realism fails, not because our well-formed theoretical terms have null extensions, but because the world is too complicated for our theoretical terms to get attached to any extension-determining characteristic. Instead, a term such as ‘atom’ functions in an open-ended fund of idealized models, simplified ways of thinking about the world. One might agree that, e.g., there are no atoms, but hold that for each atom model there are real things that play the “atom role” in that model, a view I call role-player realism; this effort also fails. Another worry: with no reference for the sub-visible, are we forced into instrumentalism? No, because the same kinds of problems that undid conventional realism for the sub-visible apply, in the same way, to the visible. But models, representing the world in simplified form, still support a kind of realism. Though not exactly right, with respect to reference as well as properties and laws, when they work well they still count as knowledge of how things are, if you like, how things really are.
Title: Idealization, Prediction, Difference-Making
Abstract: Every model leaves out or distorts some factors that are causally connected to its target phenomena—the phenomena that it seeks to predict or explain. If we want to make predictions, and we want to base decisions on those predictions, what is it safe to omit or to simplify, and what ought a causal model to capture fully and correctly? A schematic answer: the factors that matter are those that make a difference to the target phenomena. There are several ways to understand the notion of difference-making. Which are the most useful to the forecaster, to the decision-maker? This paper advances a view.
Title: Do Idealizations Afford Understanding? The Case of the Aharonov-Bohm Effect
Abstract: I argue that idealizations afford (objectual) understanding in the context of the (magnetic) Aharonov-Bohm effect. Two senses of understanding are discussed: understanding-why some phenomenon occurs and understanding-with, which has to do with understanding a scientific theory. I also outline an account of understanding-with.
Catherine Z. Elgin
Title: Models as Felicitous Falsehoods
Abstract: I will argue that models enable us to understand reality in ways that we would be unable to do if we restricted ourselves to the unvarnished truth. The point is not just that the features that a model skirts can permissibly be neglected. They ought to be neglected. Too much information occludes patterns that figure in an understanding of the phenomena. The regularities a model reveals are real and informative. But many of them show up only under idealizing assumptions.
Tarja Knuuttila & Natalia Carrillo-Escalera
Title: Mechanisms and Epistemic Artefacts: Modeling The Nervous Impulse
Abstract: We discuss the artefactual account of model-based representation (Knuuttila 2011, 2018) through a case study on modeling the nerve impulse. While the various representational approaches to modeling have targeted, more or less abstractly, the relationship of representation between a model and its (worldly) target system, the artefactual approach focuses on the scientific question the scientists set out to answer and the specific representational tools employed in this task. That such representational tools vitally shape the target system is illustrated by the various ways in which the nerve impulse has been modeled. We examine two such modeling processes: the classic Hodgkin-Huxley (HH) model, which captures the dynamics of the action potential through an equation corresponding to an electric circuit, and the Heimburg-Jackson model, which considers the nerve signal as a phase transition. The mechanistic discussion of modeling has generated various interpretations of the HH model, regarding the model alternatively as a mere mechanism sketch (e.g. Craver 2006) or a successful abstraction (Levy 2013). The realist commitments underlying both of these otherwise conflicting interpretations of the HH model have been questioned by Colombo et al. (2015), who argue for adopting an antirealist perspective on mechanisms. In contrast, we argue for getting more serious about how scientists use representational tools to construct models and configure target systems for distinct epistemic aims. Thus, in taking a more practice-oriented approach to modeling than the available representational accounts (in either their structuralist or pragmatic guises), the artefactual account also helps to overcome the tendency to conflate the realist/antirealist debate with the discussion of the epistemic functioning of models.
Title: Representing and Seeing (!) the Unobservable
Abstract: My goal is to develop an empiricist account of visual evidence and the role it plays in scientific representation, focusing, in particular, on imaging in microscopy. As will become clear, in order to do that the concepts of observation and visual evidence will need to be reexamined. I provide an account of these concepts that offers a principled way of specifying the scope of the observable that allows us to make sense of salient features of scientific practice within an empiricist stance, without the pitfalls faced by the more restrictive conception of the observable and of microscopy advanced by constructive empiricism (van Fraassen).