Workshop Series Description
Representations play a central role in scientists’ understanding of the world. From mathematical models to diagrams, different representations in highly varied contexts yield diverse insights across the physical, biological, and social sciences. Although how a phenomenon is represented has far-reaching ramifications for how it is understood, the literatures on scientific understanding and scientific representation are largely independent of each other. The time is ripe to foster greater synergy between these two areas in the philosophy of science, as they face complementary problems—and hold the promise of complementary solutions.
Consider, for instance, idealizations such as frictionless planes, infinite populations, ideal gases, and rational actors. Idealizations misrepresent their target systems, yet frequently provide deeper understanding than more accurate representations. However, to develop this idea, more detailed accounts of representation and of understanding must engage each other. Otherwise, it remains mysterious how a misrepresentation can provide genuine understanding.
Another underdeveloped area of potential crosstalk concerns the cognitive capacities that characterize understanding, such as grasping, intuiting, and reasoning. These abilities seem to require some kind of mental representation. Yet the relationship between mental representation, scientific representation, and understanding has hardly been explored. Moreover, “practice-oriented” philosophers of science may seek to reverse the order of analysis, i.e., to account for representation in terms of scientists’ cognitive activities.
A third area in which more integrated discussions of representation and understanding might yield philosophical insight concerns data and phenomena. Phenomena admit of different representations, and some of these representations yield better understanding because they are more amenable to embedding the phenomenon in a broader theoretical framework. Yet scientists’ flexibility in how they represent phenomena is constrained by the available data, detectors, and measurement practices.
Moreover, in the biological and social sciences, representations of phenomena may have implications for our moral understanding of ourselves, of others, and of our relationship to non-human organisms and the broader environment. In this way, understanding serves as a bridge between the representation literature and the literature on science and values.
These are not the only questions that call for deeper connections to be drawn between scientific representation and scientific understanding. At the most general level, how do representations provide understanding? Is the relation between representation and understanding always mediated by explanation, or can understanding be obtained directly from a representation? Why do some representations provide better understanding than others?
The aim of this workshop series is to explore these and many further related questions through focused meetings that will take place once a year, each time at a different location. The inaugural workshop will be by invitation only: it will gather leading experts on many of these questions, and its purpose will be to establish a steering/scientific committee. Every subsequent workshop in the series will have keynote speakers, and participants will be chosen through a CFP and blind review of extended abstracts. There will be no parallel sessions. The topic of each meeting will always be tightly focused, yet related to the main theme of the series.
The founders of the workshop series are Daniel Kostić and Kareem Khalifa. If you need any additional information, please contact either of them at: [email protected] or [email protected].
PROGRAM
Day 1: 5 February 2019
Venue: Centre Broca Nouvelle-Aquitaine, ground floor, 146 rue Léo Saignat – CS 61292, 33076 Bordeaux (Tramway Line A, stop at Saint-Augustin).

9:45 Welcome address
10:00 Tarja Knuuttila, “Model-Based Theoretical Strategy and the Artefactual Account”
10:45 Kareem Khalifa, “Understanding, Representational Success, and Epistemic Value”
11:30 Break
11:45 Daniel Kostić, “Non-causal understanding via spatially embedded networks in the brain”
12:30 Mauricio Suárez, “Scientific Understanding as Minimal Representation”
13:15 Lunch (catering on site)
14:30 Mark Risjord, “Modeling Practice: Representation and Scientific Reasoning”
15:15 Cedric Brun, “Pragmatic constraints on transferring models in interdisciplinary science: the case of neuroeconomics”
16:00 Break
16:15–17:00 Mazviita Chirimuuta, “Ideal Patterns and Non-Factive Understanding”
17:00 Drinks at a bar
19:30 Official dinner
Day 2: 6 February 2019
Venue: Maison de la recherche UBM, Salle des Thèses, Domaine universitaire, Esplanade des Antilles, F-33607 Pessac (Tramway Line B, stop at Montaigne-Montesquieu).

10:00 Catherine Elgin, “Models in Understanding”
10:45 Juha Saatsi, “Explanatory Power: Modal vs. Pragmatic Dimensions”
11:30 Break
11:45 Insa Lawler, “Understanding Based on Distorted Models and Toy Models—Towards a Unified Account”
12:30 Roman Frigg, “Unlocking Limits”
13:15 Lunch (catering on site)
14:30 Anouk Barberousse, “Simple Models in Climate Science”
15:15 Henk de Regt, “Scientific Understanding and Epistemic Values”
16:00 Break
16:15 Panel round-table discussion / steering committee assembly
19:30 Unofficial dinner (own arrangements)

Day 3: Sightseeing and wine tasting.

ABSTRACTS
Tarja Knuuttila, “Model-Based Theoretical Strategy and the Artefactual Account”
Weisberg (2007, 2013) and Godfrey-Smith (2006) have argued that model-based theorizing makes use of a particular epistemic strategy: indirect representation. With indirect representation, Weisberg and Godfrey-Smith refer to how modellers study real-world phenomena through the detour of examining hypothetical entities, models. Theorists practising abstract direct representation, in turn, strive to represent the data or real-world phenomena directly. The distinction between these two kinds of representational strategies aims to capture the way in which modellers typically first construct and study models, and only at a later stage consider some real-world target systems – if at all. There are at least three different ways to philosophically characterise the status and role of models in accordance with the thesis of indirect representation. One way is to approach models as interpreted abstract structures (Weisberg 2013), and another is to conceive of them as fictions (e.g. Godfrey-Smith 2006; Frigg 2010; Frigg and Nguyen 2016). A third alternative, the one I will specifically argue for, is provided by the artefactual account (Knuuttila 2011, 2018). In contrast to envisaging models as interpreted abstract structures or fictions, the artefactual account focuses on the erotetic function of modelling and the various external representational tools used in model construction that enable, but also delimit, scientific reasoning. Models as epistemic artefacts are designed in view of some pending scientific questions, allowing for further questioning and repurposing.
Kareem Khalifa, “Representational Success, Understanding, and the Aims of Science”
Let “veritism” denote the idea that true belief is the only fundamental epistemic aim of science. A prominent objection to this view holds that it fails to account for falsehoods that advance our understanding. In this paper, I offer a successor to veritism, inquisitive truth monism, that not only rebuts this objection, but also provides a general account of representational success. Inquisitive truth monism’s core idea is that a representation is successful just in case it provides (via surrogative inference) true answers to relevant questions. As I show, various facets of understanding—responsiveness to evidence, systematicity, inferential power, and so on—can be seen as a means to answering relevant questions, and thus are of instrumental epistemic value. However, unlike veritism, inquisitive truth monism tolerates false answers to irrelevant questions, where relevance is largely (though not entirely) determined by one’s personal interests, social roles, and background assumptions. In this way, it incorporates many of the pragmatic considerations that veritism’s critics have proposed as important features of scientific inquiry that go beyond true belief, while maintaining veritism’s core insight that truth is a privileged epistemic good.
Daniel Kostić, “Non-causal understanding via spatially embedded networks in the brain”
In the literature on scientific explanation, interest in the facticity of understanding is ever growing. There are two general camps in this debate: factivists and non-factivists. Factivists argue that idealizations can provide understanding only if they are partially (Strevens 2007) or approximately true (Khalifa 2017). Non-factivists, on the other hand, claim that idealizations can provide understanding independently of explanation, in virtue of being effective or by exemplifying the features of interest (Elgin 2007, 2018; de Regt 2009).
In this talk, I argue that the spatial embedding of networks in neuroscience provides explanations that are non-causal and idealized, which prima facie seems to support the idea of understanding without explanation. However, I show that such networks provide understanding that is both explanatory and factive.
This point is most evident in cases where structure determines function. The term “structure” is used in many different ways, most of which sidestep what would be considered the causal organisation of the brain. For example, Bassett and Stiso (2018) represent brains as spatially embedded networks, and argue that the wiring rules that differ between healthy brains and neurodevelopmental disorders such as schizophrenia are driven by wiring cost, which is itself determined by spatially localized modules and a hierarchically nested topology. Topology in this sense refers to abstract mathematical properties of the network. But how can this abstract mathematical structure constrain the functional wiring drivers in health and disease, if it sidesteps the causal organisation of the brain?
Bassett and Stiso’s explanation of how topological structure affects and determines cognitive function describes counterfactual dependencies between wiring minimization, spatially localized modules, and physical Rentian scaling. These counterfactual dependencies don’t capture the core causal factors; thus the spatially embedded networks are idealizations that provide a non-causal, quasi-Woodwardian explanation, which in effect means that the understanding they afford is both explanatory and factive.
Mauricio Suárez, “Scientific Understanding as Minimal Representation”
I defend the view that understanding a phenomenon often amounts to merely providing some representation for it. I illustrate by means of a few examples from scientific modeling, and then go on to argue for the following four theses:
- The view is plausible for specifically scientific kinds of understanding, but it readily generalises to all kinds of understanding.
- The appropriate representations are merely required to be effective for minimally informative inference.
- On such a minimal construal, the view denies strong versions of the explanation/understanding distinction.
- The view is correspondingly neutral on the veridicality thesis for both understanding and explanation, denying that either is necessarily factive, though other objective constraints must apply.
Mark Risjord, "Modeling Practice: Representation and Scientific Reasoning"
Surrogative reasoning is widely acknowledged to be a necessary condition on scientific representation. Many accounts of surrogative reasoning from models, even those that disavow structuralism, follow Swoyer (1991) in taking it to depend on an isomorphism between model and target. Contessa’s (2007) rules, for example, establish such an isomorphism. The isomorphism guarantees that propositions true in the model will be true in the target system. Hence any valid arguments made in the model will have corresponding arguments in the target.
This presentation will argue that the standard, isomorphism-based characterization of surrogative reasoning badly distorts scientific reasoning with models. It will discuss several scientific models and their use in scientific practice. The patterns of reasoning that emerge show the crucial importance of operationalization, measurement, and model-based calculation. Operationalization is a rational process by which observable aspects of the target system are correlated with elements of the model. Once the model has been operationalized on a target, measurements can be made to supply values for the model’s variables and calculation can proceed. Surrogative reasoning that explains, predicts, or intervenes depends on such model-based calculation.
Where the elements of surrogative reasoning sketched above are represented by an isomorphism-based account at all, they are mischaracterized. The patterns of scientific reasoning that emerge from a study of scientific practice thus show how the isomorphism-based conception of scientific reasoning is deficient. The latter part of the presentation will use these patterns to develop a more plausible account of surrogative reasoning, and to sketch the outline of a properly inferentialist account of scientific representation.
Cedric Brun, "Pragmatic constraints on transferring models in interdisciplinary science: the case of Neuroeconomics"
Recent research in philosophy of science has paid considerable attention to the epistemic and ontological aspects of modeling in science. Indeed, the practices of science entail elaborating models that are simplifications, abstractions, or idealizations of very complex systems, in order to produce explanatorily and predictively relevant theories of the phenomena under scrutiny. If one takes seriously the view that the elaboration of models is intimately linked to explanatory purposes specific to a domain of phenomena, the issue of model borrowing, or model transfer, becomes critical. Under what conditions can transferring a scientific model from one domain to another be legitimate, given the chances that it might be misused and therefore provide irrelevant results? This question can be seen as a byproduct of ‘the problem of scientific representation’ (Callender and Cohen 2006). Building our argument on a deflationary representationalist account of models (akin to Suárez 2003), we show that borrowing models from one scientific field to another rests on the recognition of pragmatic norms which need to be precisely defined. As a case study, we will examine how non-human primate models in neuroeconomics depend on such model transfer, in order to assess the soundness of our argument.
Mazviita Chirimuuta, “Ideal Patterns and Non-Factive Understanding”
This paper begins with the observation of a Levins-style trade-off in models of complex phenomena, between predictive accuracy of the model, on the one hand, and intelligibility (the capacity of the model to provide understanding of the phenomenon under investigation), on the other. I provide examples from recent use of connectionist models in neuroscience, which are predictively very accurate but less intelligible than earlier generations of models. The existence of this trade-off lends support to non-factivist accounts of scientific understanding. However, non-factivism faces important objections from Khalifa (2017) and Sullivan and Khalifa (forthcoming). I reply to these objections by arguing that a critical weakness in the non-factivist accounts of Elgin (2004) and Potochnik (2017) comes from the use of Dennett’s (1991) notion of a “real pattern.” I show that the accounts can be strengthened by replacing this with the notion of an “ideal pattern”, where the phenomena that are the targets of model building do not comprise patterns that are simply “out there” in nature, but are to some extent dependent on the methods of data-processing chosen by the researcher.
Catherine Z. Elgin, “Models in Understanding”
Many scientific realists hold that the epistemic acceptability of scientific representations depends on their accuracy. Science aspires to truth and when it is successful it delivers truth, or at least approximate truth. I argue that such a conception is at odds with science's practice of developing and deploying models. Models are epistemically efficacious precisely because they selectively and judiciously depart from truth. Nor is it the case that models always improve by coming closer to the truth. By exemplifying features they share with their targets and diverging from their targets elsewhere, models provide epistemic access to the exemplified features and make their roles manifest. Their inaccuracy thus enhances understanding.
Juha Saatsi, “Explanatory Power: Modal vs. Pragmatic Dimensions”
Consider two near-platitudes about ‘explanatory power’: (1) More powerful explanations represent the world better. (2) More powerful explanations provide us better understanding. In this talk I will examine a tension between (1) and (2), due to the cognitive limitations of human beings doing the explaining. These cognitive limitations give rise to a pragmatic dimension of explanatory power. By operating in the framework of a modal account of explanation, I aim to disentangle this pragmatic dimension from explanations’ veridical (modal) dimension. This throws new light on three substantial issues turning on judgments of explanatory power: by analysing the various ways in which (1) and (2) can be in tension, given our limited powers of reasoning in making modal inferences, we can better understand e.g. the explanatory autonomy of higher-level theories; the explanatory power of mathematics; and the explanatory role of idealisations.
Insa Lawler, “Understanding Based on Distorted Models and Toy Models—Towards a Unified Account”
Many models feature (indispensable) idealizations of their target phenomenon that are utterly false, such as the assumption that the number of particles approaches infinity. Dub such models distorted models. These models are contrasted with models that are concerned with hypothetical objects, such as Schelling’s checkerboard model of segregation. Dub such models toy models. How to precisely draw the line between these models is controversial. Either way, there are crucial differences regarding whether and how such models represent their target phenomena. A natural hypothesis is that they thus provide us with different kinds of understanding, e.g., how-actually vs. how-possibly understanding. In my talk, I argue that such a divide blurs the fact that the essential elements are shared when it comes to understanding. By means of case studies, I argue for the following unifying elements: (i) Understanding a phenomenon requires a systematic account of it. The aim of working with either kind of model is to contribute elements of the account. (ii) To achieve (i), the models isolate or distort features that are hypothesized to (not) make a relevant difference to the phenomenon of interest. (iii) To achieve (i), we need what Elgin calls a ‘tie to evidence’ (2017). This is typically their empirical success, e.g., correct predictions, a (rough) reproduction of the phenomenon, etc. (iv) Provided (iii), explanatory hypotheses need to be extracted from the model and tailored to the target phenomenon. (v) Whether the result is a how-possibly or how-actually element of the systematic account depends on whether the element is tenable in light of available theoretical or empirical knowledge. When it comes to understanding, the way and the accuracy of representing the target phenomenon are thus not decisive. What matters is whether the model is successful and whether the extracted hypotheses are tenable.
Roman Frigg, “Unlocking Limits”
Many scientific models are representations. Building on Goodman and Elgin’s notion of representation-as, we analyse what this claim involves by providing a general definition of what makes something a scientific model and by formulating a novel account of how models represent. We call the result the DEKI account of representation, which offers a complex kind of representation involving an interplay of denotation, exemplification, keying up of properties, and imputation. There will never be a complete list of keys, and new ones are added as science progresses. But many models are based on certain off-the-shelf keys. What are these keys and how do they work? The working hypothesis of this paper is that many models in mechanics involve limit keys, which are then interpreted either as approximations or idealisations. We spell out what these keys are and how they assist exploration in physics.
Anouk Barberousse, “Simple Models in Climate Science”
Even though climate models, especially Coupled Atmosphere-Ocean General Circulation Models, become increasingly complex and depend on ever greater computational power, simple models may still have a role to play in climate science because they improve understanding, if not precision. I will survey the arguments in favor of this view and discuss the relationships between the capacity to achieve accurate predictions, the management of uncertainties, and understanding in climate science.
Henk W. de Regt, “Scientific Understanding and Epistemic Values”
We value understanding, and we value science because it provides us with understanding of the world. Indeed, the understanding that comes with scientific explanation is regarded as one of the central epistemic aims of science. In my talk, drawing on Understanding Scientific Understanding (2017), I will address the question of how the aim of understanding relates to other epistemic aims of science, in particular empirical adequacy and representational accuracy. I argue that scientists achieve understanding of phenomena by basing their explanations on intelligible theories. The upshot of my analysis is that the intelligibility of theories is related to scientists’ abilities: theories are intelligible if scientists have the skills to use those theories in fruitful ways. Intelligibility, defined as the value that scientists attribute to the cluster of qualities of a theory that facilitate its use, is a contextual value, since scientists’ value-judgments regarding preferred qualities of theories will vary with their (contextually acquired) skills.
How does intelligibility relate to empirical adequacy and representational accuracy? Empirical adequacy is indeed a basic requirement in science. However, it may be ranked and applied differently in specific cases, which implies that it also functions as a value. Consequently, it does not automatically override other values and does not make intelligibility redundant. Accurate representation may also be an aim in specific contexts, but it surely isn’t a universal aim of science. I will argue that the aim of understanding often requires the abandonment of accuracy: inaccurate representation and theoretical falsehood are oft-used tools for enhancing understanding of phenomena. I will illustrate these claims with a historical case study of the controversy between James Clerk Maxwell and Ludwig Boltzmann over the latter’s molecular model for explaining the so-called specific heat anomaly.