Fallacy

From Infogalactic: the planetary knowledge core



A fallacy is the use of invalid or otherwise faulty reasoning, or "wrong moves",[1] in the construction of an argument.[2][3] A fallacious argument may be deceptive by appearing to be better than it really is. Some fallacies are committed intentionally to manipulate or persuade by deception, while others are committed unintentionally through carelessness or ignorance; as even lawyers acknowledge, the extent to which an argument is sound or unsound depends on the context in which it is made.[4]

Fallacies are commonly divided into "formal" and "informal". A formal fallacy can be expressed neatly in a standard system of logic, such as propositional logic,[2] while an informal fallacy originates in an error in reasoning other than an improper logical form.[5] Arguments containing informal fallacies may be formally valid, but still fallacious.[6]

Formal


A formal fallacy is a common error of thinking that can neatly be expressed in a standard system of logic.[2] An argument that is formally fallacious is rendered invalid due to a flaw in its logical structure; such an argument is always considered to be wrong.

The presence of a formal fallacy in a deductive argument does not imply anything about the argument's premises or its conclusion. Both may actually be true, or may even be more probable as a result of the argument; but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described. By extension, an argument can contain a formal fallacy even if the argument is not a deductive one: for instance, an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
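The structural nature of a formal fallacy can be made concrete with a small brute-force check. In the formal fallacy of affirming the consequent, one infers P from "if P then Q" and Q; a single truth-table row in which both premises are true and the conclusion is false shows the form is invalid regardless of subject matter. The following sketch is illustrative and not drawn from the article's sources:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Affirming the consequent: premises "P implies Q" and "Q"; conclusion "P".
# Collect every truth assignment that makes all premises true but the conclusion false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]

# One counterexample row (P=False, Q=True) suffices to show the form is invalid.
print(counterexamples)  # [(False, True)]
```

Because invalidity is a property of the form alone, the same counterexample defeats every argument of this shape, whatever P and Q stand for.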

Common examples


Aristotle

Aristotle was the first to systematize logical errors into a list, since being able to refute an opponent's thesis is one way of winning an argument.[7] Aristotle's "Sophistical Refutations" (De Sophisticis Elenchis) identifies thirteen fallacies, which he divided into two major types: those depending on language and those not depending on language.[8] These are called verbal fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking about, while a verbal fallacy is an error in how the arguer is talking; verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.[9] An example of a language-dependent fallacy is a debate as to who among humanity are learners: the wise or the ignorant.[10] Language-independent fallacies may be more complex, e.g.:

  (1) Coriscus is different from Socrates.
  (2) Socrates is a man.
  Therefore: (3) Coriscus is different from a man.[11]

Whately's grouping

Richard Whately defines a fallacy broadly as "any argument, or apparent argument, which professes to be decisive of the matter at hand, while in reality it is not."[12]

Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion does follow from the premises. He then divided the logical group into two groups: purely logical and semi-logical. The semi-logical group included all of Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which are in the material group.[13]

Intentional

Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, a conversation among friends, political discourse, advertising, or comedy, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.

Examples of this include the speaker or writer:[14]

  1. Diverting the argument to unrelated issues with a red herring (Ignoratio elenchi)
  2. Insulting someone's character (argumentum ad hominem)
  3. Assuming they are right by "begging the question" (petitio principii)
  4. Making jumps in logic (non sequitur)
  5. Identifying a false cause and effect (post hoc ergo propter hoc)
  6. Asserting that everyone agrees (bandwagoning)
  7. Creating a "false dilemma" ("either-or fallacy") in which the situation is oversimplified
  8. Selectively using facts (card-stacking)
  9. Making false or misleading comparisons (false equivalence and false analogy)
  10. Generalizing quickly and sloppily (hasty generalization)

In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies of amphiboly, for instance, to make ironic statements; Gary Larson employs fallacious reasoning in many of his cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.[15]

Deductive


In philosophy, the term formal fallacy is preferred for logical fallacies, and is defined formally as a flaw in the structure of a deductive argument which renders the argument invalid. The term is preferred because logic refers to the use of valid reasoning, whereas a fallacy is an argument that uses poor reasoning, so "logical fallacy" is something of an oxymoron. In informal discourse, however, both terms are used of any argument that is problematic for any reason. A logical form such as "A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee. For a conclusion to be guaranteed true, the argument must be valid and all of its premises must be true.
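The paragraph's point that validity is a property of form, separate from the truth of any particular premises, can be sketched with an exhaustive truth-table check. The helper names below are illustrative assumptions, not standard terminology:

```python
from itertools import product

def implies(p, q):
    # Material conditional.
    return (not p) or q

def is_valid(premises, conclusion):
    """Check a two-variable propositional form by exhaustive truth assignment:
    valid iff no assignment makes every premise true and the conclusion false."""
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(f(p, q) for f in premises)
    )

# Modus ponens (valid form): from "P implies Q" and "P", infer "Q".
print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # True

# Denying the antecedent (formal fallacy): from "P implies Q" and "not P", infer "not Q".
print(is_valid([implies, lambda p, q: not p], lambda p, q: not q))  # False
```

Note that a valid form still yields nothing useful if a premise is false; validity only guarantees truth preservation from true premises.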

Paul Meehl

In Why I Do Not Attend Case Conferences[16] (1973), psychologist Paul Meehl discusses several fallacies that can arise in medical case conferences that are primarily held to diagnose patients. These fallacies can also be considered more general errors of thinking that all individuals (not just psychologists) are prone to making.

  • Barnum effect: Making a statement that is trivial and true of everyone, e.g. of all patients, but which appears to have special significance to the diagnosis.
  • Sick-sick fallacy ("pathological set"): The tendency to generalize from personal experiences of health and ways of being, to the identification of others who are different from ourselves as being "sick". Meehl emphasizes that though psychologists claim to know about this tendency, most are not very good at correcting it in their own thinking.
  • "Me too" fallacy: The opposite of Sick-sick. Imagining that "everyone does this" and thereby minimizing a symptom without assessing the probability of whether a mentally healthy person would actually do it. A variation of this is Uncle George's pancake fallacy. This minimizes a symptom through reference to a friend/relative who exhibited a similar symptom, thereby implying that it is normal. Meehl points out that consideration should be given that the patient is not healthy by comparison but that the friend/relative is unhealthy.
  • Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A relativism that Meehl sees as a waste of time. There is a distinction between reality and delusion that is important to make when assessing a patient and so the consideration of comparative realities can mislead and distract from the importance of a patient's delusion to a diagnostic decision.
  • Hidden decisions: Decisions based on factors that we do not own up to or challenge, and for example result in the placing of middle- and upper-class patients in therapy while lower-class patients are given medication. Meehl identifies these decisions as related to an implicit ideal patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He sees YAVIS patients as being preferred by psychotherapists because they can pay for long-term treatment and are more enjoyable to interact with.
  • The spun-glass theory of the mind: The belief that the human organism is so fragile that minor negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the system. Essentially not giving humans, and sometimes patients, enough credit for their resilience and ability to recover.[16]

Measurement

Increasing availability and circulation of big data are driving the proliferation of new metrics for scholarly authority,[17][18] and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami."[19] Where mathematical fallacies are subtle mistakes in reasoning leading to invalid mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the extrapolation of raw data to a measurement-based value claim. The ancient Greek Sophist Protagoras was one of the first thinkers to propose, through his "human-measure" principle and the practice of dissoi logoi (arguing multiple sides of an issue), that humans can generate reliable measurements.[20][21] This history helps explain why measurement fallacies are informed by informal logic and argumentation theory.

  • Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel Kahneman, that "describes the common human tendency to rely too heavily on the first piece of information offered (the 'anchor') when making decisions." In measurement arguments, anchoring fallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For example, limitations of the Journal Impact Factor (JIF) are well documented,[22] and even JIF pioneer Eugene Garfield notes, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative and qualitative indicators."[23] To the extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court commission of anchoring fallacies.
  • Naturalistic fallacy: In the context of measurement, a naturalistic fallacy can occur in a reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of sheer quantity metrics based on the premise "more is better"[19] or, in the case of developmental assessment in the field of psychology, "higher is better."[24]
  • False analogy: In the context of measurement, this error in reasoning occurs when claims are supported by unsound comparisons between data points, hence the false analogy's informal nickname of the "apples and oranges" fallacy.[25] For example, the Scopus and Web of Science bibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work).[26] Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.
  • Argumentum ex silentio: An argument from silence features an unwarranted conclusion advanced based on the absence of data. For example, Academic Analytics' Faculty Scholarly Productivity Index purports to measure overall faculty productivity, yet the tool does not capture data based on citations in books. This creates a possibility that low productivity measurements using the tool may constitute argumentum ex silentio fallacies, to the extent that such measurements are supported by the absence of book citation data.
  • Ecological fallacy: An ecological fallacy is committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide."[27] In metrical argumentation, ecological fallacies can be committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g. "Hispanic" faculty).[28]
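The ecological fallacy in the last bullet can be illustrated with hypothetical individual-level data (the numbers below are invented for illustration and do not come from the cited study): the country with more Protestants shows the higher aggregate suicide rate, yet at the individual level the pattern reverses.

```python
# Hypothetical individual-level records: (is_protestant, died_by_suicide).
country_a = [(True, False)] * 8 + [(False, True)] * 2                      # 80% Protestant
country_b = [(True, False)] * 2 + [(False, True)] + [(False, False)] * 7  # 20% Protestant

def rate(people, pred=lambda p: True):
    # Suicide rate within the subgroup selected by pred.
    group = [p for p in people if pred(p)]
    return sum(1 for _, died in group if died) / len(group)

# Aggregate (country-level) pattern: the more-Protestant country has the higher rate...
print(rate(country_a), rate(country_b))   # 0.2 0.1

# ...yet at the individual level, no Protestant in this data died by suicide.
everyone = country_a + country_b
print(rate(everyone, lambda p: p[0]))     # 0.0 (Protestants)
print(rate(everyone, lambda p: not p[0])) # 0.3 (non-Protestants)
```

The group-level correlation and the individual-level one are answers to different questions, which is exactly the inferential gap the ecological fallacy ignores.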

Other systems of classification

Of other classifications of fallacies in general the most famous are those of Francis Bacon and J. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i. J. S. Mill discussed the subject in book v. of his Logic, and Jeremy Bentham's Book of Fallacies (1824) contains valuable remarks. See R. Whately's Logic, bk. v.; A. de Morgan, Formal Logic (1847); A. Sidgwick, Fallacies (1883); and other textbooks.

Assessment - pragmatic theory

According to the pragmatic theory,[29] a fallacy can in some instances be an error, the use of a heuristic (a shortcut version of an argumentation scheme) to jump to a conclusion. More worryingly, in other instances it is a tactic or ploy used inappropriately in argumentation to get the better of a speech partner unfairly. There are always two parties to an argument containing a fallacy: the perpetrator and the intended victim.

The dialogue framework required to support the pragmatic theory of fallacy is built on the presumption that argumentative dialogue has both an adversarial component and a collaborative component. A dialogue has individual goals for each participant, but also collective (shared) goals that apply to all participants. A fallacy of the second kind is seen as more than a simple violation of a rule of reasonable dialogue; it is also a deceptive tactic of argumentation, based on sleight of hand. Aristotle explicitly compared contentious reasoning to unfair fighting in an athletic contest, but the roots of the pragmatic theory go back even further, to the Sophists. The pragmatic theory finds its roots in the Aristotelian conception of a fallacy as a sophistical refutation, but it also supports the view that many of the types of arguments traditionally labelled as fallacies are in fact reasonable techniques of argumentation that can be used, in many cases, to support legitimate goals of dialogue. Hence, on the pragmatic approach, each case needs to be analyzed individually, to determine from the textual evidence whether the argument is fallacious or reasonable.


References

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. 2.0 2.1 2.2 Harry J. Gensler, The A to Z of Logic (2010:p74). Rowman & Littlefield, ISBN 9780810875968
  3. John Woods, The Death of Argument (2004). Applied Logic Series Volume 32, pp 3-23. ISBN 9789048167005
  4. Lua error in package.lua at line 80: module 'strict' not found.
  5. Lua error in package.lua at line 80: module 'strict' not found.
  6. Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.
  9. Lua error in package.lua at line 80: module 'strict' not found.
  10. Lua error in package.lua at line 80: module 'strict' not found.
  11. Lua error in package.lua at line 80: module 'strict' not found.
  12. Frans H. van Eemeren, Bart Garssen, Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, p.8. ISBN 9789048126149.
  13. Lua error in package.lua at line 80: module 'strict' not found.
  14. Lua error in package.lua at line 80: module 'strict' not found.
  15. Lua error in package.lua at line 80: module 'strict' not found.
  16. 16.0 16.1 Meehl, P.E. (1973). Psychodiagnosis: Selected papers. Minneapolis (MN): University of Minnesota Press, p. 225-302.
  17. Lua error in package.lua at line 80: module 'strict' not found.
  18. Lua error in package.lua at line 80: module 'strict' not found.
  19. 19.0 19.1 Lua error in package.lua at line 80: module 'strict' not found.
  20. Lua error in package.lua at line 80: module 'strict' not found.
  21. Lua error in package.lua at line 80: module 'strict' not found.
  22. Lua error in package.lua at line 80: module 'strict' not found.
  23. Lua error in package.lua at line 80: module 'strict' not found.
  24. Lua error in package.lua at line 80: module 'strict' not found.
  25. Lua error in package.lua at line 80: module 'strict' not found.
  26. Lua error in package.lua at line 80: module 'strict' not found.
  27. Lua error in package.lua at line 80: module 'strict' not found.
  28. Lua error in package.lua at line 80: module 'strict' not found.
  29. Lua error in package.lua at line 80: module 'strict' not found.

Further reading

  • C. L. Hamblin, Fallacies, Methuen London, 1970. reprinted by Vale Press in 1998 as ISBN 0-916475-24-7.
  • Douglas N. Walton, Informal logic: A handbook for critical argumentation. Cambridge University Press, 1989.
