https://www.reddit.com/r/askphilosophy/comments/2isjz2/what_exactly_is_wrong_with_falsificationism/
created by LeeHyori on 09/10/2014 at 20:40 UTC*
14 upvotes, 6 top-level comments (showing 6)
Hey,
I read about falsificationism every so often, but I am never able to nail down what exactly is wrong with it. Criticisms of it are all over the place: some people talk about falsificationism in terms of a *demarcation criterion* for science, while others talk about it in terms of a *scientific methodology*. And then, a lot of criticisms of it are historical in nature: i.e., how it does not capture the *history of science*.
Let me lay out my impressions of falsificationism, so that you all know what is bugging me:
1. **Criterion of Demarcation**: The correct view is that falsifiability is a necessary but insufficient condition for being "scientific." On the other hand, being a "falsificationist" about the demarcation problem is to believe that falsifiability is both a necessary and sufficient condition for demarcating science.
2. **As an analysis of the scientific method**: Science progresses by proposing different theories, and then throwing out theories that are contradicted by observations. There is a "survival of the fittest" among scientific theories, so the best theories are ones that haven't faced falsifying evidence, rather than being ones with the most confirming evidence in their favor. However, falsificationism does not capture the history of science very well, so it is wrong in that way. (Personally, I don't really care and don't think this is a philosophical question; it's a historical or sociological one.)
3. **As offering the proper scientific method**: Falsificationism is presented as a proper way of doing science. It is a way of overcoming the classical problem of induction (moving from singular observations to universal generalizations). Since it overcomes the problem of induction, it is a logically valid way of doing science, whereas induction is not logically valid.
I am wondering if someone could check and refine my impressions. I'm most interested in **(3)**, since I think (2) is at best only a semi-relevant historical question, and (1) is boring.
What are the reasons why falsificationism fails as a methodology for science? That is, why is it wrong *on its own merits*, rather than as a matter of scientific history?
Thanks!
Comment by drinka40tonight at 09/10/2014 at 23:22 UTC
10 upvotes, 1 direct replies
You'll find some pushback along the lines of Duhem/Quine and issues of theory holism:
Holist underdetermination ensures, Duhem argues, that there cannot be any such thing as a “crucial experiment”: a single experiment whose outcome is predicted differently by two competing theories and which therefore serves to definitively confirm one and refute the other. For example, in a famous scientific episode intended to resolve the ongoing heated battle between partisans of the theory that light consists of a stream of particles moving at extremely high speed (the particle or “emission” theory of light) and defenders of the view that light consists instead of waves propagated through a mechanical medium (the wave theory), the physicist Foucault designed an apparatus to test the two theories' competing claims about the speed of transmission of light in different media: the particle theory implied that light would travel faster in water than in air, while the wave theory implied that the reverse was true. Although the outcome of the experiment was taken to show that light travels faster in air than in water, Duhem argues that this is far from a refutation of the hypothesis of emission:
> in fact, what the experiment declares stained with error is the whole group of propositions accepted by Newton, and after him by Laplace and Biot, that is, the whole theory from which we deduce the relation between the index of refraction and the velocity of light in various media. But in condemning this system as a whole by declaring it stained with error, the experiment does not tell us where the error lies. Is it in the fundamental hypothesis that light consists in projectiles thrown out with great speed by luminous bodies? Is it in some other assumption concerning the actions experienced by light corpuscles due to the media in which they move? We know nothing about that. It would be rash to believe, as Arago seems to have thought, that Foucault's experiment condemns once and for all the very hypothesis of emission, i.e., the assimilation of a ray of light to a swarm of projectiles. If physicists had attached some value to this task, they would undoubtedly have succeeded in founding on this assumption a system of optics that would agree with Foucault's experiment. ([1914] 1954, p. 187)
From this and similar examples, Duhem drew the quite general conclusion that our response to the experimental or observational falsification of a theory is always underdetermined in this way. When the world does not live up to our theory-grounded expectations, we must give up something, but because no hypothesis is ever tested in isolation, no experiment ever tells us precisely which belief it is that we must revise or give up as mistaken:
> In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. ([1914] 1954, 187)
http://plato.stanford.edu/entries/scientific-underdetermination/#HolUndVerIde
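The logical core of this point can be put compactly; the following formalization is added here for clarity and is not from the SEP entry or Duhem:

```latex
% A hypothesis H is only ever tested together with auxiliary assumptions
% A_1, ..., A_n. If the conjunction entails an observation O and O fails,
% modus tollens refutes only the conjunction:
\[
  (H \land A_1 \land \dots \land A_n) \rightarrow O, \qquad \neg O
  \;\;\vdash\;\; \neg (H \land A_1 \land \dots \land A_n)
\]
% That is, at least one conjunct is false, but the logic alone does not say
% which one to give up: the hypothesis or some auxiliary assumption.
```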
Comment by Joebloggy at 09/10/2014 at 22:05 UTC
6 upvotes, 1 direct replies
To my mind the biggest issue lies in whether falsificationism can deliver theories with predictive capability, a cornerstone of how science works. If we need to use a theory to predict data, say in astrophysics, and we have two models, neither falsified by the data, how should we know which one to use? Any appeal to a previous failure to falsify a theory is an appeal to induction. Appealing to the nature of the theory and its relative explanatory power creates a certain epistemic privilege; this may be necessary or useful, but then again we lose the deductive element of the theory of falsificationism. The view presented seems incapable of deductive prediction in circumstances where multiple theories fit the existing data, and so doesn't achieve its goals in this regard.
Another critique of falsificationism is that certain statements, like "for every metal, there is a temperature at which it will melt" are unfalsifiable, since if the metal doesn't melt at temperature T, then there is always temperature T+1 to consider. However, this seems a perfectly ordinary scientific hypothesis, which suggests falsificationism is inadequate as a scientific method.
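For what it's worth, the logical form of the melting-point example can be made explicit; the formalization and the predicate name below are added for illustration:

```latex
% "For every metal, there is a temperature at which it melts":
\[
  \forall m \, \exists T \; \mathrm{Melts}(m, T)
\]
% Observing that metal m has not melted at temperature T only rules out the
% single instance Melts(m, T); the claim remains compatible with Melts(m, T')
% for some higher T', so no finite set of observations contradicts it, which
% is why it counts as unfalsifiable in the strict sense.
```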
Comment by Eh_Priori at 10/10/2014 at 04:38 UTC
5 upvotes, 2 direct replies
There is a "survival of the fittest" among scientific theories, so the best theories are ones that haven't faced falsifying evidence, rather than being ones with the most confirming evidence in its favor.
There needs to be more to it than this, otherwise a theory that hasn't been tested at all is just as good as a theory that has been tested 100 times. For Karl Popper what was important was that a theory had passed severe tests. The severity of a test is determined by how likely it is to falsify a theory.
One criticism of falsificationism, if falsifiability is taken as a necessary condition for a proposition to be scientific, is that while a proposition may be falsifiable, its negation may (will?) not be. For example, "All A's have the property B" is a falsifiable statement, but its negation, "there is some A that does not have the property B", is not falsifiable (depending on its scope). So if we accept falsificationism we have this weird situation where a proposition is scientific while its negation is not scientific.
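A minimal formalization of that asymmetry, with notation added for illustration:

```latex
% The universal claim is refuted by a single observed counterexample:
\[
  \forall x \, (A(x) \rightarrow B(x))
\]
% Its negation is an unbounded existential claim:
\[
  \exists x \, (A(x) \land \neg B(x))
\]
% No finite collection of A's observed to have B contradicts the existential,
% since a counterexample could always lie outside the sample; so the universal
% is falsifiable while its negation is not.
```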
Another problem is that probability statements are not falsifiable. If I claim that a balanced coin has a 50% chance of landing on heads, there is no possible run of coin flips that can falsify that claim, since all runs are equally likely. Yet we might not want to expunge probabilistic claims from science.
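To see why, a quick arithmetic sketch, assuming independent flips of a fair coin:

```latex
% Under the hypothesis that the coin is fair, any particular sequence s of
% n independent flips has the same positive probability:
\[
  P(s \mid \text{fair}) = \left(\tfrac{1}{2}\right)^{n} > 0
  \quad \text{for every finite } n .
\]
% Since every possible run has nonzero probability, no run is logically
% excluded by the hypothesis, and hence no run deductively falsifies it.
```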
Comment by alanforr at 10/10/2014 at 15:47 UTC
1 upvotes, 0 direct replies
Falsificationism is a misleading term for critical rationalism, first proposed by Karl Popper. Philosophers almost universally reject critical rationalism when they are aware of it because they have bad ideas about epistemology. They almost all adopt justificationism: the idea that it is possible and desirable to show a theory is true or probably true. If you assess ideas using argument, then the argument has premises and rules of inference, and its result may not be true (or probably true) if those premises and rules of inference are false. You might try to solve this by coming up with a new argument that proves the premises and rules of inference, but then you have the same problem with those premises and rules of inference. You might say that some stuff is indubitably true (or probably true), and you can use that as a foundation. But that just means you have cut off a possible avenue of intellectual progress, since the foundation can't be explained in terms of anything deeper. And in any case there is nothing that can fill that role. Sense experience won't work since you can misinterpret information from your sense organs, e.g. optical illusions. Sense organs also fail to record lots of stuff that does exist, e.g. neutrinos. Scientific instruments aren't infallible either, since you can make mistakes in setting them up, in interpreting information from them, and so on.
We don't create knowledge (useful or explanatory information) by showing stuff is true or probably true, for the reasons above, so how do we create knowledge? We can only create knowledge by finding mistakes in our current ideas and correcting them piecemeal. You notice a problem with your current ideas, propose solutions, criticise the solutions until only one is left, and then find a new problem. We shouldn't say that a theory is false because it hasn't been proven, because this applies to all theories. Rather, we should look at what problems it aims to solve and ask whether it solves them. We should look at whether it is compatible with other current knowledge and, if not, try to figure out the best solution. Should the new idea be discarded, or the old idea, or can some variant of both solve the problem?
If you want to know more I suggest reading "Realism and the Aim of Science" by Karl Popper, especially the first chapter and "The Fabric of Reality" and "The Beginning of Infinity" by David Deutsch. You might also want to visit www.fallibleideas.com.
Comment by Bradm77 at 10/10/2014 at 16:29 UTC
1 upvotes, 1 direct replies
Shouldn't history inform us as to what the proper scientific method should be? If, for example, we have examples of past scientists who broke the rules of any given criterion for "proper scientific methodology" yet still produced what most people would agree is good science, shouldn't that provide some hint that that criterion isn't actually "proper scientific methodology"? If so, then the history of science seems very relevant.
Comment by Katallaxis at 11/10/2014 at 01:54 UTC
1 upvotes, 0 direct replies
> Criterion of Demarcation: The correct view is that falsifiability is a necessary but insufficient condition for being "scientific." On the other hand, being a "falsificationist" about the demarcation problem is to believe that falsifiability is both a necessary and sufficient condition for demarcating science.
If, with Laudan, we 'insist that any philosophically interesting demarcative device must distinguish scientific and non-scientific matters in a way which exhibits a surer epistemic warrant or evidential ground for science than for non-science,' then most falsificationists, I think, would agree that falsifiability is an insufficient condition for demarcating science from nonscience. However, falsificationists tend not to insist on such 'epistemic warrant' or 'evidential ground', because they don't regard scientific status as a measure of justification, warrant, or confirmation. For them, to say that a theory is scientific doesn't speak of its past success, but rather of its potential for failure in the future--it's forward rather than backward looking. That is, some theories are open to criticism or refutation by empirical testing and others are not, and those that are should be regarded as scientific--including very unsuccessful theories for which we have no epistemic warrant or evidential ground for belief.
Falsificationists usually present their criterion of science as a proposal for a convention or norm, and it's intentionally broad, including both good and bad theories, so as to prevent people from fighting over the moniker of science. For falsificationists, to say that something is unscientific isn't a criticism, and to say that a theory is scientific isn't a compliment. There are good and bad theories of each kind; to speak of their scientific status is just to speak of how we might go about a critical investigation of their content and possible truth.