Underdetermination of Scientific Theory
First published Wed Aug 12, 2009; substantive revision Mon Sep 16, 2013

3. Contrastive Underdetermination, Empirical Equivalents, and Unconceived Alternatives

3.1 Contrastive Underdetermination: Back to Duhem

Although it is also a form of underdetermination, what we described in Section 1 above as contrastive underdetermination raises fundamentally different issues from the holist variety considered in Section 2 (a recent book-length treatment of many of these issues is Bonk 2008). This is clearly evident in Duhem's original writings concerning so-called crucial experiments, where he seeks to show that even when we explicitly suspend any concerns about holist underdetermination, the contrastive variety remains an obstacle to our discovery of truth in theoretical science:

But let us admit for a moment that in each of these systems [concerning the nature of light] everything is compelled to be necessary by strict logic, except a single hypothesis; consequently, let us admit that the facts, in condemning one of the two systems, condemn once and for all the single doubtful assumption it contains. Does it follow that we can find in the ‘crucial experiment’ an irrefutable procedure for transforming one of the two hypotheses before us into a demonstrated truth? Between two contradictory theorems of geometry there is no room for a third judgment; if one is false, the other is necessarily true. Do two hypotheses in physics ever constitute such a strict dilemma? Shall we ever dare to assert that no other hypothesis is imaginable? Light may be a swarm of projectiles, or it may be a vibratory motion whose waves are propagated in a medium; is it forbidden to be anything else at all? ([1914] 1954, 189)

Contrastive underdetermination is so-called because it questions the ability of the evidence to confirm any given hypothesis against alternatives, and the central focus of discussion in this connection (equally often regarded as “the” problem of underdetermination) concerns the character of the supposed alternatives. Of course the two problems are not entirely disconnected, because it is open to us to consider alternative possible modifications of the web of beliefs as alternative theories or theoretical “systems” between which the empirical evidence alone is powerless to decide. But we have already seen that one need not think of the alternative responses to recalcitrant experience as competing theoretical alternatives to appreciate the character of the holist's challenge, and we will see that one need not embrace any version of holism about confirmation to appreciate the quite distinct problem that the available evidence might support more than one theoretical alternative. It is perhaps most useful here to think of holist underdetermination as starting from a particular theory or body of beliefs and claiming that our revision of those beliefs in response to new evidence may be underdetermined, while contrastive underdetermination instead starts from a given body of evidence and claims that more than one theory may be well-supported by that very evidence. Part of what has contributed to the conflation of these two problems is the holist presuppositions of those who originally made them famous.
After all, on Quine's view we simply revise the web of belief in response to recalcitrant experience, and so the suggestion that there are multiple possible revisions of the web available in response to any particular evidential finding just is the claim that there are in fact many different “theories” (i.e. candidate webs of belief) that are equally well-supported by any given body of data.[6] But if we give up such extreme holist views of evidence, meaning, and/or confirmation, the two problems take on very different identities, with very different considerations in favor of taking them seriously, very different consequences, and very different candidate solutions. Notice, for instance, that even if we somehow knew that no other hypothesis on a given subject was well-confirmed by a given body of data, that would not tell us where to place the blame or which of our beliefs to give up if the remaining hypothesis in conjunction with others subsequently resulted in a failed empirical prediction. And as Duhem suggests above, even if we supposed that we somehow knew exactly which of our hypotheses to blame in response to a failed empirical prediction, this would not help us to decide whether or not there are other hypotheses available that are equally well-confirmed by the data we actually have.

One way to see why not is to consider an analogy that champions of contrastive underdetermination have sometimes used to support their case. If we consider any finite group of data points, an elementary proof reveals that there are an infinite number of distinct mathematical functions describing different curves that will pass through all of them. As we add further data to our initial set we will definitively eliminate functions describing curves which no longer capture all of the data points in the new, larger set, but no matter how much data we accumulate, the proof guarantees that there will always be an infinite number of functions remaining that define curves including all the data points in the new set and which would therefore seem to be equally well supported by the empirical evidence. No finite amount of data will ever be able to narrow the possibilities down to just a single function, or indeed to any finite number of candidate functions, from which the distribution of data points we have might have been generated. Each new data point we gather eliminates an infinite number of curves that previously fit all the data (so the problem here is not the holist's challenge that we do not know which beliefs to give up in response to failed predictions or disconfirming evidence), but also leaves an infinite number still in contention.
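The elementary proof alluded to here can be sketched in a line or two of standard algebra (a minimal illustration added here, not part of the original text): suppose the data consist of n points (x_1, y_1), …, (x_n, y_n) with distinct x-values, and let p be the unique interpolating polynomial of degree at most n-1 that passes through all of them. Then for every real constant c the curve

\[
  q_c(x) \;=\; p(x) \;+\; c \prod_{i=1}^{n} (x - x_i)
  \qquad\text{satisfies}\qquad
  q_c(x_j) \;=\; p(x_j) \;=\; y_j \quad \text{for every } j,
\]

since the added product vanishes at each data point, so distinct values of c give distinct curves that all fit the data exactly. Adding an (n+1)-th observation rules out all but one member of this particular family, but the same construction applied to the enlarged data set immediately yields infinitely many new candidates.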
3.2 Empirically Equivalent Theories

Of course, generating and testing fundamental scientific hypotheses is rarely if ever a matter of finding curves that fit collections of data points, so nothing follows directly from this mathematical analogy for the significance of contrastive underdetermination in most scientific contexts. But Bas van Fraassen has offered an extremely influential line of argument intended to show that such contrastive underdetermination is a serious concern for scientific theorizing more generally. In The Scientific Image (1980), van Fraassen uses a now-classic example to illustrate the possibility that even our best scientific theories might have empirical equivalents: that is, alternative theories making the very same empirical predictions, and which therefore cannot be better or worse supported by any possible body of evidence.

Consider Newton's cosmology, with its laws of motion and gravitational attraction. As Newton himself realized, van Fraassen points out, exactly the same predictions are made by the theory whether we assume that the entire universe is at rest or assume instead that it is moving with some constant velocity in any given direction: from our position within it, we have no way to detect constant, absolute motion by the universe as a whole. Thus, van Fraassen argues, we are here faced with empirically equivalent scientific theories: Newtonian mechanics and gravitation conjoined either with the fundamental assumption that the universe is at absolute rest (as Newton himself believed), or with any one of an infinite variety of alternative assumptions about the constant velocity with which the universe is moving in some particular direction. All of these theories make all and only the same empirical predictions, so no evidence will ever permit us to decide between them on empirical grounds.[7]
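Why the boosted variants are observationally indistinguishable can be made explicit with a standard textbook calculation (an illustration added here, not van Fraassen's own presentation): give every body in the universe the same additional constant velocity v, so that each position x_i(t) is replaced by x_i(t) + v t. Then

\[
  x_i'(t) - x_j'(t) \;=\; x_i(t) - x_j(t),
  \qquad
  \ddot{x}_i'(t) \;=\; \ddot{x}_i(t),
\]

so all relative positions, and hence all Newtonian gravitational forces F_{ij} = G m_i m_j (x_j - x_i)/|x_j - x_i|^3 and all accelerations, are exactly what they were before the boost. Observers inside the universe only ever measure such relative quantities, so no observation available to them distinguishes absolute rest from uniform absolute motion.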
Van Fraassen is widely regarded as holding that the prospect of contrastive underdetermination grounded in such empirical equivalents should lead us to restrict our epistemic ambitions for the scientific enterprise. His constructive empiricism holds that the aim of science is not to find true theories, but only theories that are empirically adequate: that is, theories whose claims about observable phenomena are all true. Since the empirical adequacy of a theory is not threatened by the existence of another that is empirically equivalent to it, fulfilling this aim has nothing to fear from the possibility of such empirical equivalents. In reply, many critics have suggested that van Fraassen gives no reasons for restricting belief to empirical adequacy that could not also be used to argue for suspending our belief in the future empirical adequacy of our best present theories: of course there could be empirical equivalents to our best theories, but there could also be theories equally well-supported by all the evidence up to the present which diverge in their predictions about observables in future cases not yet tested. This challenge seems to miss the point of van Fraassen's epistemic voluntarism: his claim is that we should believe no more but also no less than we need to make sense of and take full advantage of our scientific theories, and a commitment to the empirical adequacy of our theories, he suggests, is the least we can get away with in this regard. Of course it is true that we are running some epistemic risk in believing in even the full empirical adequacy of our present theories, but the risk is considerably less than what we assume in believing in their truth, it is the minimum we need to take full advantage of the fruits of our scientific labors, and, he suggests, “it is not an epistemic principle that one might as well hang for a sheep as a lamb” (1980, 72).

In an influential discussion, Larry Laudan and Jarrett Leplin (1991) argue that philosophers of science have invested even the bare possibility that our theories might have empirical equivalents with far too much epistemic significance. Notwithstanding the popularity of the presumption that there are empirically equivalent rivals to every theory, they argue, the conjunction of several familiar and relatively uncontroversial epistemological theses is sufficient to defeat it. Because the boundaries of what is observable change as we develop new experimental methods and instruments, because auxiliary assumptions are always needed to derive empirical consequences from a theory (cf. confirmational holism, above), and because these auxiliary assumptions are themselves subject to change over time, Laudan and Leplin conclude that there simply is no guarantee that any two theories judged to be empirically equivalent at a given time will remain so as the state of our knowledge advances. Accordingly, any judgment of empirical equivalence is both defeasible and relativized to a particular state of science. So even if two theories are empirically equivalent at a given time this is no guarantee that they will remain so, and thus there is no foundation for a general pessimism about our ability to distinguish theories that are empirically equivalent to each other on empirical grounds. Although they concede that we could have good reason to think that particular theories have empirically equivalent rivals, this must be established case by case rather than by any general argument or presumption.

A fairly standard reply to this line of argument is to suggest that what Laudan and Leplin really show is that the notion of empirical equivalence must be applied to larger collections of beliefs than those traditionally identified as scientific theories—at least large enough to encompass the auxiliary assumptions needed to derive empirical predictions from them. At the extreme, perhaps this means that the notion of empirical equivalents (or at least timeless empirical equivalents) cannot be applied to anything less than “systems of the world” (i.e. total Quinean webs of belief), but even that is not fatal: what the champion of contrastive underdetermination asserts is that there are empirically equivalent systems of the world that incorporate different theories of the nature of light, or spacetime, or whatever. On the other hand, it might seem that quick examples like van Fraassen's variants of Newtonian cosmology do not serve to make this thesis as plausible as the more limited claim of empirical equivalence for individual theories.

It seems equally natural, however, to respond to Laudan and Leplin simply by conceding the variability in empirical equivalence but insisting that this is not enough to undermine the problem. Empirical equivalents create a serious obstacle to belief in a theory so long as there is some empirical equivalent to that theory at any given time, but it need not be the same one at each time. On this line of thinking, cases like van Fraassen's Newtonian example illustrate how easy it is for theories to admit of empirical equivalents at any given time, and thus constitute a reason for thinking that there probably are or will be empirical equivalents to any given theory at any particular time we consider it, assuring that whenever the question of belief in a given theory arises, the challenge posed to it by contrastive underdetermination arises as well.

Laudan and Leplin also suggest, however, that even if the universal existence of empirical equivalents were conceded, this would do much less to establish the significance of underdetermination than its champions have supposed, because “theories with exactly the same empirical consequences may admit of differing degrees of evidential support” (1991, 465).
A theory may be better supported than an empirical equivalent, for instance, because the former but not the latter is derivable from a more general theory whose consequences include a third, well supported, hypothesis. More generally, the belief-worthiness of an hypothesis depends crucially on how it is connected or related to other things we believe and the evidential support we have for those other beliefs.[8] Laudan and Leplin suggest that we have invited the specter of rampant underdetermination only by failing to keep this familiar home truth in mind and instead implausibly identifying the evidence bearing on a theory exclusively with the theory's own entailments or empirical consequences (but cf. Tulodziecki 2012). This impoverished view of evidential support, they argue, is in turn the legacy of a failed foundationalist and positivistic approach to the philosophy of science which mistakenly assimilates epistemic questions about how to decide whether or not to believe a theory to semantic questions about how to establish a theory's meaning or truth-conditions.

John Earman (1993) has argued that this dismissive diagnosis does not do justice to the threat posed by underdetermination. He argues that worries about underdetermination are an aspect of the more general question of the reliability of our inductive methods for determining beliefs, and notes that we cannot decide how serious a problem underdetermination poses without specifying (as Laudan and Leplin do not) the inductive methods we are considering. Earman regards some version of Bayesianism as our most promising form of inductive methodology, and he proceeds to show that challenges to the long-run reliability of our Bayesian methods can be motivated by considerations of the empirical indistinguishability (in several different and precisely specified senses) of hypotheses stated in any language richer than that of the evidence itself that do not amount simply to general skepticism about those inductive methods. In other words, he shows that there are more reasons to worry about underdetermination concerning inferences to hypotheses about unobservables than concerning, say, inferences about unobserved observables. He also goes on to argue that at least two genuine cosmological theories have serious, nonskeptical, and nonparasitic empirical equivalents: the first essentially replaces the gravitational field in Newtonian mechanics with curvature in spacetime itself,[9] while the second recognizes that Einstein's General Theory of Relativity permits cosmological models exhibiting different global topological features which cannot be distinguished by any evidence inside the light cones of even idealized observers who live forever.[10] And he suggests that “the production of a few concrete examples is enough to generate the worry that only a lack of imagination on our part prevents us from seeing comparable examples of underdetermination all over the map” (1993, 31), even as he concedes that his case leaves open just how far the threat of underdetermination extends (1993, 36).
Most philosophers of science, however, have not embraced the idea that it is only lack of imagination which prevents us from finding empirical equivalents to our scientific theories generally. They note that the convincing examples of empirical equivalents we do have are all drawn from a single domain of highly mathematized scientific theorizing in which the background constraints on serious theoretical alternatives are far from clear, and suggest that it is therefore reasonable to ask whether even a small handful of such examples should make us believe that there are probably empirical equivalents to most of our scientific theories most of the time. They concede that it is always possible that there are empirical equivalents to even our best scientific theories concerning any domain of nature, but insist that we should not be willing to suspend belief in any particular theory until some convincing alternative to it can actually be produced: as Philip Kitcher puts it, “give us a rival explanation, and we'll consider whether it is sufficiently serious to threaten our confidence” (1993, 154; see also Leplin 1997, Achinstein 2002). That is, these thinkers insist that until we are able to actually construct an empirically equivalent alternative to a given theory, the bare possibility that such equivalents exist is insufficient to justify suspending belief in the best theories we do have. And for this same reason most philosophers of science are unwilling to follow van Fraassen into what they regard as constructive empiricism's unwarranted epistemic modesty. Even if van Fraassen is right about the most minimal beliefs we must hold in order to take full advantage of our scientific theories, most thinkers do not see why we should believe the least we can get away with rather than believing the most we are entitled to by the evidence we have.

Philosophers of science have responded in a variety of ways to the suggestion that a few or even a small handful of serious examples of empirical equivalents does not suffice to establish that there are probably such equivalents to most scientific theories in most domains of inquiry. One such reaction has been to invite more careful attention to the details of particular examples of putative underdetermination: considerable work has been devoted to assessing the threat of underdetermination in the case of particular scientific theories (for a recent example, see Butterfield 2012). Another reaction has been to investigate whether particular kinds of theories or domains of science (e.g. historical vs. experimental sciences) are more vulnerable to problems of underdetermination than others and, if so, why (see Cleland 2002, Carman 2005, Turner 2005, 2007, Stanford 2010, and Forber and Griffith 2011). But champions of contrastive underdetermination have most frequently responded by seeking to argue that all theories have empirical equivalents, typically by proposing something like an algorithmic procedure for generating such equivalents from any theory whatsoever. Stanford (2001, 2006) suggests that these efforts to prove that all our theories must have empirical equivalents fall roughly but reliably into global and local varieties, and that neither makes a convincing case for a distinctive scientific problem of contrastive underdetermination.
Global algorithms are well-represented by Andre Kukla's (1996) suggestion that from any theory T we can immediately generate such empirical equivalents as T′ (the claim that T's observable consequences are true, but T itself is false), T″ (the claim that the world behaves according to T when observed, but according to some specific incompatible alternative otherwise), and the hypothesis that our experience is being manipulated by powerful beings in such a way as to make it appear that T is true. But such possibilities, Stanford argues, amount to nothing more than the sort of Evil Deceiver to which Descartes appealed in order to doubt any of his beliefs that could possibly be doubted (see Section 1, above). Such radically skeptical scenarios pose an equally powerful (or powerless) challenge to any knowledge claim whatsoever, no matter how it is arrived at or justified, and thus pose no special problem or challenge for beliefs offered to us by theoretical science. If global algorithms like Kukla's are the only reasons we can give for taking underdetermination seriously in a scientific context, then there is no distinctive problem of the underdetermination of scientific theories by data, only a salient reminder of the irrefutability of classically Cartesian or radical skepticism.[11]

By contrast to such global strategies for generating empirical equivalents, local algorithmic strategies instead begin with some particular scientific theory and proceed to generate alternative versions that are equally well supported by all possible evidence. This is what van Fraassen does with the example of Newtonian cosmology, showing that an infinite variety of supposed empirical equivalents can be produced by ascribing different constant absolute velocities to the universe as a whole. But Stanford suggests that empirical equivalents generated in this way are also insufficient to show that there is a distinctive and genuinely troubling form of underdetermination afflicting scientific theories, because they rely on simply saddling particular scientific theories with further claims for which those theories themselves (together with whatever background beliefs we actually hold) imply that we cannot have any evidence. Such empirical equivalents invite the natural response that they force our theories to undertake commitments that they never should have in the first place. Such claims, it seems, should simply be excised from the theories themselves, leaving over just the claims that sensible defenders would have held were all we were entitled to believe by the evidence in any case. In van Fraassen's Newtonian example, for instance, this could be done simply by undertaking no commitment concerning the absolute velocity and direction (or lack thereof) of the universe as a whole. To put the point another way, if we believe a given scientific theory when one of the empirical equivalents we could generate from it by the local algorithmic strategy is correct instead, most of what we originally believed will nonetheless turn out to be straightforwardly true.

3.3 Unconceived Alternatives and A New Induction
Stanford (2001, 2006) concludes that no convincing general case has been made for the presumption that there are empirically equivalent rivals to all or most scientific theories, or to any theories besides those for which such equivalents can actually be constructed. But he goes on to insist that empirical equivalents are no essential part of the case for a significant problem of contrastive underdetermination. Our efforts to confirm scientific theories, he suggests, are no less threatened by what Larry Sklar (1975, 1981) has called “transient” underdetermination, that is, underdetermination by theories which are not empirically equivalent but are equally (or at least reasonably) well confirmed by all the evidence we happen to have in hand at the moment, so long as this transient predicament is also “recurrent”, that is, so long as we think that there is (probably) at least one such (fundamentally distinct) alternative available—and thus the transient predicament re-arises—whenever we are faced with a decision about whether to believe a given theory at a given time. Stanford argues that a convincing case for contrastive underdetermination of this recurrent, transient variety can indeed be made, and that the evidence for it is available in the historical record of scientific inquiry itself.

Stanford concedes that present theories are not transiently underdetermined by the theoretical alternatives we have actually developed and considered to date: we think that our own scientific theories are considerably better confirmed by the evidence than any rivals we have actually produced. The central question, he argues, is whether we should believe that there are well confirmed alternatives to our best scientific theories that are presently unconceived by us. And the primary reason we should believe that there are, he claims, is the long history of repeated transient underdetermination by previously unconceived alternatives across the course of scientific inquiry. In the progression from Aristotelian to Cartesian to Newtonian to contemporary mechanical theories, for instance, the evidence available at the time each earlier theory dominated the practice of its day also offered compelling support for each of the later alternatives (unconceived at the time) that would ultimately come to displace it. Stanford's “New Induction” over the history of science claims that this situation is typical; that is, that “we have, throughout the history of scientific inquiry and in virtually every scientific field, repeatedly occupied an epistemic position in which we could conceive of only one or a few theories that were well confirmed by the available evidence, while subsequent inquiry would routinely (if not invariably) reveal further, radically distinct alternatives as well confirmed by the previously available evidence as those we were inclined to accept on the strength of that evidence” (2006, 19). In other words, Stanford claims that in the past we have repeatedly failed to exhaust the space of fundamentally distinct theoretical possibilities that were well confirmed by the existing evidence, and that we have every reason to believe that we are probably also failing to exhaust the space of such alternatives that are well confirmed by the evidence we have at present. Much of the rest of his case is taken up with discussing historical examples illustrating that earlier scientists did not simply ignore or dismiss, but instead genuinely failed to conceive of, the serious, fundamentally distinct theoretical possibilities that would ultimately come to displace the theories they defended, only to be displaced in turn by others that were similarly unconceived at the time.
He concludes that “the history of scientific inquiry itself offers a straightforward rationale for thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we are unable to conceive of them at the time” (2006, 20; for reservations and criticisms concerning this line of argument, see Magnus 2006, 2010; Godfrey-Smith 2008; Chakravartty 2008; Devitt 2011; Ruhmkorff 2011). Stanford concedes, however, that the historical record can offer only fallible evidence of a distinctive, general problem of contrastive scientific underdetermination, rather than the kind of deductive proof that champions of the case from empirical equivalents have typically sought. Thus, claims and arguments about the various forms that underdetermination may take, their causes and consequences, and the further significance they hold for the scientific enterprise as a whole continue to evolve in the light of ongoing controversy, and the underdetermination of scientific theory by evidence remains very much a live and unresolved issue in the philosophy of science.

Bibliography

Achinstein, P., 2002, “Is There A Valid Experimental Argument for Scientific Realism?”, Journal of Philosophy, 99: 470–495.
Bonk, T., 2008, Underdetermination: An Essay on Evidence and the Limits of Natural Knowledge, Dordrecht, The Netherlands: Springer.
Butterfield, J., 2012, “Underdetermination in Cosmology: An Invitation”, Proceedings of the Aristotelian Society (Supplementary Volume), 86: 1–18.
Carman, C., 2005, “The Electrons of the Dinosaurs and the Center of the Earth”, Studies in History and Philosophy of Science, 36: 171–174.
Chakravartty, A., 2008, “What You Don't Know Can't Hurt You: Realism and the Unconceived”, Philosophical Studies, 137: 149–158.
Cleland, C., 2002, “Methodological and Epistemic Differences Between Historical Science and Experimental Science”, Philosophy of Science, 69: 474–496.
Descartes, R., [1640] 1996, Meditations on First Philosophy, trans. by John Cottingham, Cambridge: Cambridge University Press.
Devitt, M., 2011, “Are Unconceived Alternatives a Problem for Scientific Realism?”, Journal for General Philosophy of Science, 42: 285–293.
Duhem, P., [1914] 1954, The Aim and Structure of Physical Theory, trans. from 2nd ed. by P. W. Wiener; originally published as La Théorie Physique: Son Objet et sa Structure (Paris: Marcel Rivière & Cie.), Princeton, NJ: Princeton University Press.
Earman, J., 1993, “Underdetermination, Realism, and Reason”, Midwest Studies in Philosophy, 18: 19–38.
Feyerabend, P., 1975, Against Method, London: Verso.
Forber, P. and Griffith, E., 2011, “Historical Reconstruction: Gaining Epistemic Access to the Deep Past”, Philosophy and Theory in Biology, 3, doi:10.3998/ptb.6959004.0003.003.
Gillies, D., 1993, “The Duhem Thesis and the Quine Thesis”, in Philosophy of Science in the Twentieth Century, Oxford: Blackwell Publishers, pp. 98–116.
Glymour, C., 1970, “Theoretical Equivalence and Theoretical Realism”, Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1970: 275–288.
–––, 1977, “The Epistemology of Geometry”, Noûs, 11: 227–251.
–––, 1980, Theory and Evidence, Princeton, NJ: Princeton University Press.
–––, 2013, “Theoretical Equivalence and the Semantic View of Theories”, Philosophy of Science, 80: 286–297.
Godfrey-Smith, P., 2008, “Recurrent, Transient Underdetermination and the Glass Half-Full”, Philosophical Studies, 137: 141–148.
Goodman, N., 1955, Fact, Fiction, and Forecast, Indianapolis: Bobbs-Merrill.
Halvorson, H., 2012, “What Scientific Theories Could Not Be”, Philosophy of Science, 79: 183–206.
–––, 2013, “The Semantic View, If Plausible, Is Syntactic”, Philosophy of Science, 80: 475–478.
Hesse, M., 1980, Revolutions and Reconstructions in the Philosophy of Science, Brighton: Harvester Press.
Kitcher, P., 1993, The Advancement of Science, New York: Oxford University Press.
Kuhn, T., [1962] 1996, The Structure of Scientific Revolutions, 3rd edition, Chicago: University of Chicago Press.
Kukla, A., 1996, “Does Every Theory Have Empirically Equivalent Rivals?”, Erkenntnis, 44: 137–166.
Lakatos, I., 1970, “Falsification and the Methodology of Scientific Research Programmes”, in Criticism and the Growth of Knowledge, I. Lakatos and A. Musgrave (eds.), Cambridge: Cambridge University Press, pp. 91–196.
Laudan, L., 1990, “Demystifying Underdetermination”, in Scientific Theories (Minnesota Studies in the Philosophy of Science, vol. 14), C. Wade Savage (ed.), Minneapolis: University of Minnesota Press, pp. 267–297.
Laudan, L. and Leplin, J., 1991, “Empirical Equivalence and Underdetermination”, Journal of Philosophy, 88: 449–472.
Leplin, J., 1997, A Novel Defense of Scientific Realism, New York: Oxford University Press.
Magnus, P. D., 2006, “What's New About the New Induction?”, Synthese, 148: 295–301.
–––, 2010, “Inductions, Red Herrings, and the Best Explanation for the Mixed Record of Science”, British Journal for the Philosophy of Science, 61: 803–819.
Manchak, J., 2009, “Can We Know the Global Structure of Spacetime?”, Studies in History and Philosophy of Modern Physics, 40: 53–56.
Mill, J. S., [1867] 1900, A System of Logic, Ratiocinative and Inductive, Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation, New York: Longmans, Green, and Co.
Norton, J., 2008, “Must Evidence Underdetermine Theory?”, in The Challenge of the Social and the Pressure of Practice: Science and Values Revisited, M. Carrier, D. Howard, and J. Kourany (eds.), Pittsburgh: University of Pittsburgh Press, pp. 17–44.
Quine, W. V. O., 1951, “Two Dogmas of Empiricism”, reprinted in From a Logical Point of View, 2nd ed., Cambridge, MA: Harvard University Press, pp. 20–46.
–––, 1955, “Posits and Reality”, reprinted in The Ways of Paradox and Other Essays, 2nd ed., Cambridge, MA: Harvard University Press, pp. 246–254.
–––, 1969, “Epistemology Naturalized”, in Ontological Relativity and Other Essays, New York: Columbia University Press, pp. 69–90.
–––, 1975, “On Empirically Equivalent Systems of the World”, Erkenntnis, 9: 313–328.
–––, 1990, “Three Indeterminacies”, in Perspectives on Quine, R. B. Barrett and R. F. Gibson (eds.), Cambridge, MA: Blackwell, pp. 1–16.
Ruhmkorff, S., 2011, “Some Difficulties for the Problem of Unconceived Alternatives”, Philosophy of Science, 78: 875–886.
Shapin, S. and Schaffer, S., 1985, Leviathan and the Air-Pump, Princeton: Princeton University Press.
Sklar, L., 1975, “Methodological Conservatism”, Philosophical Review, 84: 384–400.
–––, 1981, “Do Unborn Hypotheses Have Rights?”, Pacific Philosophical Quarterly, 62: 17–29.
–––, 1982, “Saving the Noumena”, Philosophical Topics, 13: 49–72.
Stanford, P. K., 2001, “Refusing the Devil's Bargain: What Kind of Underdetermination Should We Take Seriously?”, Philosophy of Science, 68 (Proceedings): S1–S12.
–––, 2006, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, New York: Oxford University Press.
–––, 2010, “Getting Real: The Hypothesis of Organic Fossil Origins”, The Modern Schoolman, 87: 219–243.
Tulodziecki, D., 2012, “Epistemic Equivalence and Epistemic Incapacitation”, British Journal for the Philosophy of Science, 63: 313–328.
Turner, D., 2005, “Local Underdetermination in Historical Science”, Philosophy of Science, 72: 209–230.
–––, 2007, Making Prehistory: Historical Science and the Scientific Realism Debate, Cambridge: Cambridge University Press.
Van Fraassen, B., 1980, The Scientific Image, Oxford: Oxford University Press.

Related Entries

confirmation | constructive empiricism | Duhem, Pierre | epistemology: naturalized | feminist (interventions): epistemology and philosophy of science | Feyerabend, Paul | induction: problem of | Quine, Willard van Orman | scientific knowledge: social dimensions of | scientific realism

Acknowledgments

I have benefited from discussing both the organization and content of this article with many people, including audiences and participants at the 2009 Pittsburgh Workshop on Underdetermination and the 2009 Southern California Philosophers of Science retreat, as well as the participants in graduate seminars at both UC Irvine and Pittsburgh. Special thanks are owed to John Norton, P. D. Magnus, John Manchak, Bennett Holman, Penelope Maddy, Jeff Barrett, David Malament, John Earman, and James Woodward.

Copyright © 2013 by Kyle Stanford