The Fallacy Fork: Why It’s Time to Get Rid of Fallacy Theory

Summary: Fallacy theory is popular among skeptics, but it is in serious trouble. Every fallacy in the traditional taxonomy runs into a destructive dilemma which I call the Fallacy Fork: either it hardly ever occurs in real life, or it is not actually fallacious.
----------------
Why do people believe weird things? Why is there so much irrationality in the world? Here’s a standard answer from the sceptic’s playbook: fallacies. Fallacies are certain types of arguments that are common, attractive, persistent, and dead wrong. Because people keep committing fallacies, so the story goes, they end up believing all sorts of weird things.
In popular books about skepticism and in the pages of skeptical magazines such as this one, one commonly finds a concise treatment of the most common types of fallacies. The traditional classification is widely known, often by its Latin names: ad hominem, ad ignorantiam, ad populum, begging the question, post hoc ergo propter hoc. Some of them are more obscure, such as ignoratio elenchi, affirming the consequent, secundum quid, and ad verecundiam, better known as ‘argument from authority’. Most of them date back to the days of Aristotle; others are relative newbies, like the slippery slope fallacy, the genetic fallacy or – for obvious reasons – the reductio ad Hitlerum.
Such lists serve a pedagogical purpose. By learning the most common types of reasoning errors, you will avoid making them yourself, and become better at spotting them when others do. It’s a kind of inoculation against irrationality. If only people would learn the list of fallacies, the world would be a far more rational place!
Except this neat little story is wrong.
Skeptical about fallacies
I used to teach a course in critical thinking at Ghent University. As befits a good skeptic, I first presented my students with the usual laundry list of fallacies, after which I invited them to put the theory into practice: take a popular piece from the newspaper or watch a political debate, and try to spot the fallacies.
I no longer give that assignment. My students became paranoid! They began to see fallacies everywhere. Rather than dealing with the substance of an argument, they just carelessly threw around labels and cried “fallacy!” at every turn. But none of the alleged “fallacies” they spotted survived a close inspection.
Were my students to blame? I had to confess that, when I did the exercise myself, looking for clear-cut fallacies in real life, I came away mostly empty-handed. Perhaps my students, unable to find any clear instances of fallacies, had simply started to make them up? So I turned to the classics. The Demon-Haunted World (1996) by Carl Sagan, perhaps the most celebrated work in the skeptical library, has a special section on reasoning fallacies, like many other books in the genre. But although Sagan duly lists all the usual suspects, he never puts them to work in the rest of the book. His treatment comes across as perfunctory, and he hardly gives any examples from real-life pseudoscience. Like many other skeptics, Sagan just invents some toy examples, which are easy to knock down but don’t correspond to real-life arguments. It seems that Sagan is paying lip service to fallacy theory, but has no use for it in his actual debunking work.
But if real life abounds with fallacies, why do skeptics like Sagan have to invent toy examples to make their point?
Fallacy Fork
For a long time, philosophers and argumentation theorists have tried to define the different types of fallacies, mostly by using some (semi-)formal argumentation schemes. The attractive idea behind such an approach is that it would allow for swift and easy identification of common reasoning errors, in a wide range of contexts. But alas, the hopes of fallacy theorists have been frustrated. Definitions and schemas have become more complex and unwieldy over time, making them less fit for practical use. Yet, in spite of this, most authors continue to hold on to the notion that you can identify good and fallacious arguments based on their formal structure. The trick is just to find the right analysis.
I’ve now come to believe that this whole idea should be thrown overboard. Together with my colleagues Fabio Paglieri, an argumentation theorist, and Massimo Pigliucci, a dyed-in-the-wool skeptic, I recently published a paper in the journal Argumentation (2015) in which we explain what we think is wrong with fallacy theory – not just with a particular definition of a given fallacy, but with all of them. Here’s the nub of the problem: arguments that are deemed ‘fallacious’ according to the standard approach are always closely related to arguments that, in many contexts, are perfectly reasonable. Formally, the good ones and the bad ones are indistinguishable. No argumentation scheme can capture the difference and separate the wheat from the chaff.
In our paper, we develop a destructive dilemma for fallacy theorists which we call the Fallacy Fork. In this dilemma, fallacy theorists are forced to choose between two options, neither of which is appealing. Take any fallacy from the list. Now we have two options:
(1)    We characterize our fallacy by means of a deductive argumentation schema. For instance, in the case of post hoc ergo propter hoc, we use the following definition: “If B follows A, then A is the cause of B”. For a deductive argument to be valid, the conclusion needs to follow inexorably from the premises. In this case, it clearly doesn’t: plenty of events follow other events without being caused by them. By the standards of deductive logic, any argument instantiating that schema is fallacious. Now the good thing about this approach is that it has normative force. There’s no negotiating with deductive logic. The problem, however, is that we hardly ever find such clear-cut errors, presented in deductive form, in real life (see below). This is the first prong of the Fallacy Fork.
(2)    We characterize our fallacy in a way that captures real-life arguments. In order to do so, we need to abandon our strict deductive approach. We need to relax our definitions and add some qualifiers and nuances. For post hoc ergo propter hoc, this might go as follows: “If B follows shortly after A, and we can think of a plausible causal mechanism linking A and B, then A is probably the cause of B.” This definition is a bit more cumbersome, but it is much closer to the kind of arguments people make in real life. By casting our net wider, we catch many more fish. But now we have another problem on our hands: is our argument still fallacious? In other words, is every instantiation of the argument wrong?
Let’s see how the most famous fallacies fare when confronted with the Fallacy Fork.
Post hoc ergo propter hoc
Every skeptic is familiar with the saying: correlation does not imply causation. To think otherwise is to commit the post hoc ergo propter hoc (or cum hoc) fallacy. The website Spurious Correlations has collected some outrageous examples, with fancy graphs: there is a clear correlation between margarine consumption and divorce rates, and between the number of people who drowned by falling into a pool and the number of films featuring Nicolas Cage (per year). Is there a mysterious causal relationship between these events? If I was ill yesterday and feel better today, to which of the myriad possible earlier events should I attribute my improved condition? That I had cornflakes for breakfast? That I watched a movie with Nicolas Cage? That I was wearing my blue socks? That my next-door neighbor was wearing blue socks?
Not even the most superstitious person believes that correlation automatically implies causation, or that any succession of two events A and B implies that A caused B. There are just too many things going on in the world, and not enough causal connections to account for them. In its clear-cut deductive guise, the post hoc ergo propter hoc inference is a fallacy, to be sure, but hardly anyone makes it in real life. This is the first prong of the Fallacy Fork. So what about the kind of post hoc arguments that people do use in real life (Pinto 1995)? As it turns out, many of those are not obviously mistaken. It all depends on the context.
Imagine you eat some mushrooms you picked in the forest. Half an hour later you feel nauseated, so you put two and two together: “Ugh. That must have been the mushrooms”. Are you committing a fallacy? Not as long as your inference is merely inductive and probabilistic. Intuitively, your inference depends on the following reasonable assumptions: (1) some mushrooms are toxic; (2) it’s easy for a layperson like you to mistake a poisonous mushroom for an edible one; (3) nausea is a typical symptom of food intoxication; (4) you don’t usually feel nauseated. If you want, you can show the probabilistic relevance of all these premises. Take the last one, which is known as the base rate or prior probability. If you are a healthy person and don’t usually suffer from nausea, the mushroom is most probably the culprit. If, on the other hand, you suffer from a gastro-intestinal condition and often have bouts of nausea, your post hoc inference will be less strong.
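To see how the numbers do the work, here is a minimal sketch of the underlying Bayesian logic in Python. The probabilities are purely illustrative assumptions (not measured frequencies); the point is only how the very same post hoc inference strengthens or weakens with the base rate of nausea.

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H | E) for a hypothesis H and a piece of evidence E."""
        joint_h = prior_h * p_e_given_h
        joint_not_h = (1 - prior_h) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # H = "the mushroom was toxic", E = "I feel nauseated shortly afterwards".
    prior = 0.05              # assumed chance that a lay-picked mushroom is toxic
    p_nausea_if_toxic = 0.9   # nausea is a typical symptom of intoxication

    # Healthy person: nausea is rare at baseline, so E is strong evidence.
    print(posterior(prior, p_nausea_if_toxic, 0.01))  # ~0.83
    # Chronic gastro-intestinal condition: the same inference is much weaker.
    print(posterior(prior, p_nausea_if_toxic, 0.30))  # ~0.14

Nothing in the formal structure of the argument distinguishes the two cases; the difference lives entirely in the background probabilities.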
Indeed, almost all of our everyday causal knowledge is derived from such intuitive post hoc reasoning. For instance, my laptop is behaving strangely after I accidentally dropped it on the floor; some acquaintances un-friended me after I posted that offensive cartoon on Facebook; the fire alarm goes off after I light a cigar. As Randall Munroe (the creator of the web comic xkcd) once put it: “Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'." Most of the time these premises remain unspoken, but that cannot be a problem per se. Practically every form of reasoning in everyday life, and even in science, contains plenty of hidden premises and skipped steps.
So how about the post hoc arguments that we hear from quack therapists and other pseudoscientists? Someone takes a dose of oscillococcinum (a homeopathic remedy) for his flu, and he feels better the next day. If he attributes this to the pill, is he committing a fallacy? Not obviously, or at least not on formal grounds. It all depends on the plausibility of a causal link, the availability of alternative explanations, the prior probability of the effect, and so on. Dismissing any such inference as a post hoc ergo propter hoc fallacy is just a knee-jerk reaction. The real problem with homeopathy is that the extreme dilutions rule out any possible physical mechanism, and that randomized clinical trials have never demonstrated any effect whatsoever. Post hoc reasoning by itself is not fallacious; we rely on it all the time when we take real medicine and conclude that it “works for us”.
Ad hominem
Perhaps the most infamous among the fallacies is the argumentum ad hominem. The principle is quite simple. If you are assessing the merits of someone’s argument, you should not attack his or her personal background or motives. If you play the man instead of the ball, you are guilty of ad hominem reasoning. But are things so simple?
Let’s trot out the Fallacy Fork again. If your ad hominem argument takes a deductive form, then of course it is invalid. Even a broken clock is right twice a day. Take this argument: “Researcher A is in the pocket of the pharmaceutical industry, therefore it follows that A’s study is flawed”. If the “therefore” is intended to be deductive, then clearly the argument is invalid. But how often do you encounter ad hominem arguments in this strong form?
So we move to the second prong of the Fallacy Fork. Take the following, weaker version of the same ad hominem argument: “Researcher A published a study on the efficacy of a certain antidepressant, but he’s in the pocket of the pharmaceutical company that manufactures the drug. Therefore, we should take his results with a large grain of salt. Better to have an independent team replicate the study.” Now this sounds a lot more reasonable. The second argument is non-deductive and ‘defeasible’, which means that it is inconclusive and up for revision. Almost all arguments in real life are like that. The fact remains that the argument has an ad hominem structure. But should we really dismiss it on those grounds?
In reality, we cannot do without ad hominem reasoning. This is because the fabric of human knowledge is deeply social. Virtually everything we know derives from what other people have told us. Only a fraction of the knowledge we possess is supported by the evidence of our own senses. The rest is, literally, hearsay. Life is too short to investigate everything by yourself. No wonder we are very sensitive to the reputation and trustworthiness of our sources (Sperber et al. 2010).
Many ad hominem arguments, in most contexts, are therefore perfectly reasonable. Much depends on factors that cannot be captured in the formal argumentation scheme: psychological assumptions about prejudice and bias, the past track record of our sources, the relevance of personal background for the issue at hand, background knowledge about hidden agendas. In the courts, ad hominem arguments are standard fare. Experts and witnesses can be discredited and censured because of a hidden agenda, bias, or conflict of interest. Naturally, it is logically possible that even a biased witness might be offering an honest testimony. But courts are not schools of logic.
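A Bayesian sketch makes this vivid. In the toy calculation below (the numbers are made up purely for illustration), a report from a source who would plausibly say the same thing whether or not it were true barely moves our confidence at all:

    def posterior(prior_h, p_report_if_true, p_report_if_false):
        """P(H | the source reports H), by Bayes' rule."""
        joint = prior_h * p_report_if_true
        return joint / (joint + (1 - prior_h) * p_report_if_false)

    prior = 0.5  # before any testimony, we are agnostic on H = "the drug works"

    # Independent lab: very unlikely to report an effect that isn't there.
    print(posterior(prior, 0.8, 0.05))  # ~0.94
    # Industry-funded researcher: likely to report a positive result
    # whether or not the drug works, so the report carries little weight.
    print(posterior(prior, 0.9, 0.6))   # ~0.60

The two reports have exactly the same form; what differs is our background knowledge about the source, which no argumentation scheme can encode in advance.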
In The Demon-Haunted World, Carl Sagan unwittingly illustrates the problem with fallacy theory. In his section about ad hominem reasoning, Sagan writes – perhaps bending over backwards to show his good will – that even skeptics are sometimes guilty of ad hominem reasoning, as in the following example: “The Reverend Dr. Smith is a known Biblical fundamentalist, so her objections to evolution need not be taken seriously” (Sagan 1996, 212). The little vignette – as usual in these discussions – was dreamt up by Sagan himself. It’s just a pedagogical straw man, easy to knock down.
But actually, unless Sagan’s argument is meant to be deductive (first prong), it is not fallacious at all (second prong). If we know that the good Reverend is an evangelical Christian, who dogmatically clings to a literal reading of Scripture, of course this will color our judgment about her arguments against evolutionary theory. I’d go even further: pragmatically speaking, this fact alone is reason enough to dismiss her arguments, and not to waste any further time on them. It’s simply naïve to think we have an obligation to scrutinize the arguments of every single crank. In an ideal world perhaps, with unlimited time on your hands, but not in this one. So ad hominem arguments are indispensable for navigating our way through a social world.
None of this is to deny that, logically speaking, even a die-hard creationist could conceivably level a good argument against evolutionary theory. If you think that the Reverend’s argument must be wrong, given her evangelical faith, you are making an error of deductive logic. But let’s be honest: if some Jehovah’s Witnesses hand you a pamphlet with “scientific” arguments against Darwin, are you going to give them your full attention, lest you succumb to ad hominem reasoning?
If we adopt deduction as the norm of rationality, the whole of science goes out of the window. Science is based on trust and reputation, because empirical evidence is deeply testimonial. Researchers have to report their affiliations, funding sources and possible conflicts of interest, and fraudsters are harshly punished. We want to know who they are, and we want them to know that their reputation is at stake. Imagine if Science and Nature were to publish anonymous papers about revolutionary discoveries made in unnamed labs. Would anyone be inclined to take them seriously?
To be sure, some ad hominem arguments are uncalled for and distract from the issue at hand. But where to draw the line? Again, this depends on the specific context, which cannot be captured in an argumentation scheme. A possible rule of thumb is this one: “If it’s possible to play the ball, don’t play the man. If not, play the man.” But even this pragmatic rule will only get you so far. There’s no neat formula for distinguishing good ad hominem arguments from bad ones.
Fallacies galore
The main thesis of our paper is that each and every fallacy in the traditional list runs afoul of the Fallacy Fork. Either you construe the fallacy in a clear-cut and deductive fashion, which means that your definition has normative bite, but also that you hardly find any instances in real life; or you relax your formal definition, making it defeasible and adding contextual qualifications, but then your definition loses its teeth. Your “fallacy” is no longer a fallacy.
Let’s briefly consider some other examples. Arguments from ignorance (argumentum ad ignorantiam), according to the standard view, are fallacious because of the following well-known bit of wisdom: “absence of evidence does not constitute evidence of absence”. But as a matter of fact, it often does, and people turn out to be attuned to this (Hahn and Oaksford 2007). Here is a perfectly decent argument from ignorance, which is even used by skeptics: “Recovered memories about satanic cults sacrificing babies are probably the product of confabulation and suggestion, because we have never found any material traces of these atrocities.” This argument is acceptable, as we argue in the paper, because the hidden premises are probabilistically justified (in particular, concerning the ‘likelihood’ of finding such evidence on the assumption that these cults exist).
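The hidden likelihood premise is easy to make explicit. In the sketch below (illustrative numbers only), absence of evidence is strong evidence of absence precisely because the evidence would very probably have turned up if the hypothesis were true:

    def posterior(prior_h, p_no_trace_if_h, p_no_trace_if_not_h):
        """P(H | no material traces were found), by Bayes' rule."""
        joint = prior_h * p_no_trace_if_h
        return joint / (joint + (1 - prior_h) * p_no_trace_if_not_h)

    # H = "baby-sacrificing satanic cults exist". If they were real, decades
    # of police investigation would almost certainly have turned up bodies,
    # witnesses or forensic traces, so P(no trace | H) is very low.
    print(posterior(0.5, 0.02, 0.999))  # ~0.02

If, by contrast, the evidence would likely stay hidden even if the hypothesis were true, the same argument form carries almost no weight – which is exactly why the schema alone cannot settle the matter.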
The so-called genetic fallacy is a relative latecomer to the party, having been coined only about a century ago. It draws (negative) conclusions about X by pointing to the origins of X. It’s a close neighbor of ad hominem reasoning. In reality, once again, many such arguments are not fallacious at all. For instance, if you can explain how out-of-body experiences originate neurologically, or even induce them in the lab, you make supernatural explanations less likely. Deductively, such an argument is fallacious, but probabilistically, it has strong probative weight. Someone who dismisses this as a “genetic fallacy”, as spiritualists and parapsychologists often do, is just not getting the point. In Joseph Heller’s Catch-22, the protagonist Yossarian at some point presents the following argument: “Just because you’re paranoid doesn’t mean they aren’t after you”. Deductively speaking, Yossarian has a point. But the comment is funny precisely because, pace deductive logic, it is absurd. If a psychiatrist tells you that your friend suffers from paranoid psychosis, you will not take seriously your friend’s claim that he is being persecuted by the CIA (even though it is logically possible!).
The fallacy of Affirming the Consequent, according to the standard story, goes as follows: “If A then B. / B. / Hence: A”. Once again, the schema is deductively invalid, but many arguments instantiating it have strong probative value, depending on the circumstances. Such arguments are simply what is known as inferences to the best explanation. For instance: “My car starts if someone turns the ignition key. / I hear my car starting. / Hence: someone must have turned the ignition key”. Logically invalid, but pragmatically justified, given some probabilistic assumptions.
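Once more, the tacit probabilistic assumption can be spelled out: the inference is compelling because cars essentially never start unless someone turns the key. A toy calculation, with made-up numbers:

    def posterior(prior_a, p_b_given_a, p_b_given_not_a):
        """P(A | B): 'affirming the consequent', read probabilistically."""
        joint = prior_a * p_b_given_a
        return joint / (joint + (1 - prior_a) * p_b_given_not_a)

    # A = "someone turned the ignition key", B = "I hear the car start".
    # P(B | not A) is tiny, so observing B makes A all but certain.
    print(posterior(0.1, 0.99, 0.0001))  # ~0.999

Whenever B has no plausible cause other than A, ‘affirming the consequent’ just is a good inference to the best explanation.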
One final example: an argumentum ad verecundiam states that proposition P is true because some authority X has claimed that P is true. As we saw earlier, much of our knowledge is testimonial, and therefore based on authority. There is nothing wrong with arguments from authority, provided that you bear in mind certain questions: does the claim fall within X’s domain of expertise? Are there other experts available, and if so, do they agree? What is X’s track record when it comes to similar claims? Ironically, the problem with pseudoscientists, and conspiracy theorists in particular, is not that they rely too heavily on arguments from authority, but too little. They are excessively suspicious of authority. Anything coming from established academic institutions, or from the “Mainstream Media”, is immediately discredited. Indeed, the diagnostic label “argument from authority” is a convenient excuse for them to reject any form of respected authority.
Paper tigers
Pure fallacies are rare specimens. They are found in logic textbooks, but rarely in real life. If you think about it, this makes sense. The function of reasoning, according to the argumentative theory of reasoning of Hugo Mercier and Dan Sperber (2011), is to convince other people, and to be persuaded in return. But arguments that are too blatantly fallacious cannot perform this function. Deductive versions of ad hominem or post hoc ergo propter hoc are just too easy to debunk. Your audience sees right through them. Why would you bother making them?
The odds of finding such paper tigers in real life are low. If you believe you did spot one, chances are that you missed something, or that the case is less than clear-cut. Perhaps you’ve exaggerated the intended strength of the argument, by portraying an inductive and defeasible argument as an example of deduction, by glossing over some tacit probabilistic assumptions, or by stripping the argument to its bare form and then knocking it down. In other words, perhaps you’ve built a straw man (itself a supposed “fallacy” for which no formal definition can be given). For instance, if you’re being uncharitable, you could flag my mushroom argument as a fallacy, by misconstruing it as a piece of deduction. But I’m not saying that the mushrooms must be the culprits, by dint of deductive logic. I’m just saying that they are the most likely cause of my nausea, given the circumstances.
None of this is to suggest that people don’t use bad arguments. But lazy and sloppy arguments are much more common than cut-and-dried fallacies. People often take pot shots when they are arguing, offering the barest outline of a weak or inconclusive argument, without spelling out their crucial premises and without making clear the structure and intended force of their argument (Mercier 2017). In a cooperative context, the audience needs to reconstruct the speaker’s argument and fill in the blanks, preferably in a charitable way. It is true that this principle of charity can also be abused, for instance in the phenomenon of innuendo and plausible deniability, where the speaker hints at a dubious argument but then hides behind a cloak of ambiguity. For instance, I may not explicitly set up an argument to discredit someone’s reputation, but I can insinuate it and hope that the audience gets my drift (“We all know that X is being paid by the pharmaceutical industry”). A fortiori, once you have to take into account these ambiguities and subtle means of deception, the formal definitions of fallacies become completely unhelpful.
Conclusion
Language can bewitch us. If a word exists, we tend to assume that there must be something in reality to which it refers. Labels are meant to be slapped onto things, right? The traditional taxonomy of fallacies, with its portentous Latin phrases, creates the impression that here we are dealing with established and indubitable theoretical results. Does an argument exhibit structure X, following steps Y and Z? Then we blow our whistle: “fallacy”! Do we hear someone drawing causal inferences from successions of events? Post hoc ergo propter hoc! Is someone relying on authority to make a point? Ad verecundiam! Playing the man rather than the ball? Ad hominem!
Arthur Schopenhauer, in his sarcastic little book The Art of Being Right (1896), expressed the wet dream of all argumentation theorists: “It would be a very good thing if every trick could receive some short and obviously appropriate name, so that when a man used this or that particular trick, he could be at once reproached for it.” That would be a splendid thing indeed! Alas, the real world is a bit more complicated. The diagnostic labels of fallacy theory are much less useful for combating irrationality than is often assumed. The standard list, handed down through the ages since the days of Aristotle, is a blunt instrument in real-life discussions. Virtually every definition of a fallacy runs into the Fallacy Fork: either it singles out invalid arguments that rarely occur in real life, or it does apply to real life but turns out to be toothless.
By carelessly throwing around labels and crying foul at every turn, defenders of science and reason may actually harm their own cause. People may start to harbor sympathy for the targets of such unfair dismissal – in other words, for woo. In a way, the complacency of skeptics is understandable. Precisely because doctrines like homeopathy and astrology have been debunked so many times in so many ways, and because it is so unlikely that their advocates will ever rehabilitate their cause, it is tempting for skeptics to become lazy and smug. If you have the truth on your side anyway, why not?
But that would be a pity indeed. Even bad ideas don’t deserve bad criticism. It’s time for skeptics and other fallacy buffs to get rid of fallacies.

(To be published in Skeptical Inquirer. The PDF of the original academic paper in Argumentation can be found here. Thanks to Nick Brown for proof-reading!)
Boudry, Maarten, Fabio Paglieri, and Massimo Pigliucci. 2015. "The Fake, the Flimsy, and the Fallacious: Demarcating Arguments in Real Life." Argumentation 29 (4):431–456. doi: 10.1007/s10503-015-9359-1.
Hahn, Ulrike, and Mike Oaksford. 2007. "The Rationality of Informal Argumentation: A Bayesian Approach to Reasoning Fallacies." Psychological Review 114 (3):704–732.
Mercier, Hugo, and Dan Sperber. 2011. "Why Do Humans Reason? Arguments for an Argumentative Theory." Behavioral and Brain Sciences 34 (2):57–74.
Pinto, Robert C. 1995. "Post Hoc Ergo Propter Hoc." In Fallacies: Classical and Contemporary Readings, edited by Hans V. Hansen and Robert C. Pinto, 302–311. Pennsylvania: The Pennsylvania State University Press.
Sagan, Carl. 1996. The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House.
Schopenhauer, Arthur. 1896. "The Art of Being Right (Die Kunst, Recht zu behalten)." In Wikisource. https://en.wikisource.org/wiki/The_Art_of_Being_Right.
Sperber, Dan, F. Clément, C. Heintz, O. Mascaro, H. Mercier, G. Origgi, and D. Wilson. 2010. "Epistemic Vigilance." Mind & Language 25 (4):359–393. doi: 10.1111/j.1468-0017.2010.01394.x.

Comments

  1. I've started "teaching the controversy" on this issue. I teach my students the usual list, but then teach them reasons to be suspicious of the utility of doing so, and even ask them to develop their own philosophical position on the nature of the fallacies. The papers by Hitchcock and Blair in Hansen and Pinto's Fallacies anthology are useful on this score.

    1. Interesting! I don't teach that course anymore, but last year I adopted a similar approach: first giving the standard laundry list, then presenting the Fallacy Fork. But then I was wondering, "does it make sense for me to teach standard fallacy theory, only to then turn around and take it all back?" In any case, I haven't abandoned the fallacy labels completely either. Sometimes I find it useful to flag a dubious argument. For example, the term "ad hominem" may still be useful in a debate, even though in current usage it's often used as a knee-jerk reaction to any strong criticism.

    2. The argumentation schemes of Douglas Walton and Fabrizio Macagno (and their general approach to fallacies) are much more useful, in my opinion. They present different argument structures (including a lot of so-called fallacies), and give a set of "critical questions" to ask when you find one. This way, the taxonomy is a useful first step in argument evaluation (as I think traditional fallacies should be used).

    3. Walton's work is definitely an improvement over the traditional approach (we discuss it in our paper), but I think it's mostly adding epicycles to a flawed framework. The fallacy schemes with all their qualifications and 'critical questions' become unwieldy, as Walton himself admits at some point, and they are not very useful in real life.

    4. I actually think that their approach is very much in line with what you suggest at the end of your paper. Using fallacies to "cry fallacies" is using them in a very limited scope, mostly akin to name-calling (as you say). The traditional list encourages that, since it looks like a simple list of "fouls" that empowers you to be the "referee", without seriously considering the matter at hand.

      But once one understands that using fallacies like that is not helping anyone, knowing the fallacies is still a good tool for spotting problems in argumentation. Not to accuse people of committing them, but to zero in on the problem and then ask questions, clarify premises, state implicit premises, question assumptions, etc. The traditional list can be used like that, but does not give a lot of guidance. Walton's "critical questions" do exactly that: don't "cry fallacy", but evaluate the argument by asking those questions.

      I think you kind of throw the baby out with the bathwater when you say that fallacies "do little theoretical work, and their main intended function is to scare into submission alleged perpetrators of dire reasoning mistakes". It's one very specific and unfortunate way to use them; but knowing the argument structures, with their common weak points and bad uses (i.e. fallacies), is still very useful for being critical. The problem may be that people who use fallacies this way don't "cry fallacy", so we don't see them as benefiting from the knowledge. Maybe the larger problem is not with fallacies per se, but more generally with people using concepts meant to deepen our understanding as a stick to beat people into submission; you can see it not only with fallacies, but also with reason, evidence, racism, etc.

      All that being said, I'm under the impression that you and I are saying pretty much the same thing: crying fallacy is bad (even counter-productive), but understanding argument forms is useful ("As it turns out, in fallacy theory, the theory is usually quite good, in some cases even excellent: it's this obsession with fallacies that has to go."). Teaching the traditional fallacy list as a list of fouls needs to go the way of the dodo. We need to develop the desire to argue in good faith, something like the virtue of "good thinking", and use fallacy theories as a tool to reach the truth, not to call out other people. But getting rid of fallacies won't help much: their bad use is a symptom of a much deeper bad-thinking vice, not its cause, so it's going to be replaced by something else to beat people into submission.

      As for the "Walton's questions are unwieldy", I'm with you on that. But that's another fork, if I may : either it's simple and easy, but incomplete, or it's more complete, but unwieldy. Walton seem to be especially interested in AI, where an human-unwieldy list is not so much an issue. But I think that simplifying the list down to maybe a dozen schemes is feasible, and would strike a pretty good balance.

    5. Thanks a lot for your thoughtful comments! I have to admit that I haven't taken a close look at Walton's volume on "Argumentation Schemes", although as you see we cite some of his later papers. Perhaps it would be a good idea to reach out to Walton himself and ask what he thinks.

      It's clear that, in practice, we're pretty much on the same page, as is Walton. My take is just somewhat more radical, especially in this popular piece, which is meant to rattle the (skeptical) cage a bit. We agree that the standard approach is unhelpful, and leads to the annoying practice of "crying fallacy".
      The main difference, as I see it, is that Walton wants to retain the concepts of "fallacy" and "fallacious", including the traditional list. It's just that he wants to develop a more sophisticated approach, distinguishing between non-fallacious and fallacious instances of every category (and everything in between). This is fine with me, but the problem is that the concept of a "fallacy" inherently suggests a clear-cut and identifiable reasoning error, like a traffic violation, or foul play in a game. To use the adjective "fallacious" in a graded way seems a bit awkward.

      The point about Walton's interest in AI is a good one. Unwieldiness is not so much of an issue there.

      For some practical recommendations for critical thinking education, see our follow-up paper with Hugo Mercier et al. Our assessment is pretty optimistic: people are natural-born arguers, and they already have a pre-theoretical and unreflective understanding of the difference between strong and weak arguments (at least when it comes to evaluating the arguments of others). We need to create the right environment for human reasoning to flourish, instead of focusing on fallacy theory and (I'd say) argumentation schemes.
      http://www.tandfonline.com/doi/abs/10.1080/00461520.2016.1207537

    6. Thanks, I'll certainly read your article! I'm no expert nor scholar on the topic, but I also found the approach of Tim Kenyon and Guillaume Beaulac interesting: they say we need to build (and teach ourselves to build) "socio-environmental infrastructures" conducive to good thinking. The paper is still forthcoming in Topoi, but here's a draft: https://www.academia.edu/25061539/The_scope_of_debiasing_in_the_classroom. Their paper in Informal Logic is also worth a look.

      The way I see it, Walton uses the term "fallacy" to mean "bad argument", not the traditional list (although of course, sometimes bad arguments are instances of that list). He presents his schemes as defeasible arguments, and the critical questions are there to help evaluate whether specific arguments of this form are bad (fallacious) or not. But as you say, maybe this is a bit awkward. My guess is that this way of presenting schemes has the advantage of "riding on" the more common knowledge of traditional fallacies (and the vast literature on them). Maybe the term "fallacy" itself encourages bad arguing behavior, so we should simply abandon it. As I said, though, I'm not sure that's going to be very useful: even if the expression were to go, people arguing in bad faith would find something else to present themselves as intelligent without meaningfully engaging intellectually with arguments. Even the Socratic method can be used this way, and I have no doubt that Walton's "critical questions" can, too. Without the underlying desire to understand and the virtue of the good thinker, any tool can be used as a weapon; and "teaching" this is very hard. But maybe I'm too cynical.

      Thanks, it's a pleasure to read you and discuss with you!

    7. Thank you for the link to that paper in Topoi! And the pleasure is mutual. :-)

      One more thought about your more "cynical" view. I think you're right that every normative concept is susceptible to abuse. In the epistemic domain, for example, you have creationists dismissing evolutionary theory as "pseudoscience". But of course that doesn't mean we should get rid of the concept of "pseudoscience".

      So you're right that, whatever alternative concepts we come up with, people will try to use them to browbeat others in a debate. But there's something more specific about "fallacy" theory that worries me. Partly, it's just the weighty Latin names, which create the impression that we're talking about time-honored and indubitable philosophical results. "We've known about 'ad hominem' for more than two millennia, and you're still using them!" And as I said, for now at least the term "fallacy" is bound up with the notion of discrete and clear-cut reasoning errors. It locates irrationality on the level of low-level missteps.

      I think we can do better than that. For instance, I think psychologically rich concepts such as "cognitive dissonance", "wishful thinking" and "confirmation bias" are fine, and even "strawmanning" (if you don't treat it as a discrete "fallacy").

  2. This comment has been removed by the author.

  3. 'I no longer give that assignment. My students became paranoid! They began to see fallacies everywhere. Rather than dealing with the substance of an argument, they just carelessly threw around labels and cried “fallacy!” at every turn. But none of the alleged “fallacies” they spotted survived a close inspection.'

    That's indicative of your failure as a teacher. None of your drivel means fallacies are wrong, it means you're a piss poor teacher.

    1. I'm only a college student, and I don't pretend to know more than the average layman about this topic, but it didn't occur to me in my reading of this text that Dr. Boudry was alleging that fallacies were wrong. If I understand correctly, he has created a different framework by which to adequately and accurately pinpoint a fallacy, based on a "deductive argumentation schema" (as befits a hypothetical, created fallacy) and a looser interpretation of the same (intended to deal with murkier, real-world examples of a fallacy). As such I don't think it's fair to be so harsh to him. Just my two cents' worth.

  4. "Here’s a standard answer from the sceptic’s playbook: fallacies"

    Misspelled "skeptic's"

    1. Acceptable British/Commonwealth spelling.

  5. A fascinating article – thank you very much. The present row about the Grenfell Tower fire in London perhaps will make some of the argument clearer. For a Labour partisan, there is a belief that the Tories are always doing the poor down, so it is taken as true, without serious engagement with the evidence, that the disaster is the personal responsibility of the Prime Minister. This is, of course, based on a 'post hoc ergo propter hoc' fallacy. In its healthier forms the argument becomes that the Tories' approach to government gave the council officials 'permission' to cut corners, but even that remains unproven in the light of the allegation that the cladding may have been illegal under existing rules.

    Yet the danger of such partisanship is that even when the facts overwhelmingly prove the opposite hypothesis, a few partisans will persist in believing their truth. This is painfully obvious in Venezuela, where the flawed economic policies of a left-wing government are causing chaos – but its partisans seek to blame US interference. And it is also true that there is a price to be paid for changing your mind: it's described as a 'U-turn' and will be mocked, rather than welcomed as an admission that you can be convinced by new information; whilst heaven may rejoice over the repentant, the newspapers don't...

    Those of us who aspire to live rationally need to recognise the central truth of post-modernism: that we can't actually do so, because our minds are warped and our 'logic' is inadequate to fully engage with the real world; Luther's disdain for reason as a 'whore' was well placed! And scientists need to remember that their history is littered with examples where what was later widely accepted was rejected for many years because it challenged their world view: thus the resistance to the Big Bang theory of the universe was grounded in the belief that it evidenced the existence of a Creator, a fact which its opponents wanted to avoid at all costs.

  6. Every fallacy is specific to a particular logical system. In conventional Western deductive logic, argument from authority is fallacious because no matter how expert a person is, they may on occasion be incorrect. In Bayesian reasoning it is no fallacy - the statement of an expert updates our understanding.

  7. You make a good case for being cautious and rigorous about applying fallacy theory, but a weak case for discarding it.

    "It’s time for skeptics and other fallacy buffs to get rid of fallacies." I guess you're being rhetorical – to literally do this would be foolish.


