Post Hoc Reasoning, Special Pleading, and Ad Hoc Hypotheses


The powers of the paranormal, if they exist, cannot be very great if they are so easily thwarted by mere doubt. It seems as though, in the world of supernatural claims, doubt is the strongest magic of all. It can cancel anything, except science, which actually needs it to work. At least, this is the impression I get from the claims of paranormal believers when attempts to replicate initially successful parapsychology studies fail. And fail they have: once the controls of the initial study are improved, results fall back toward chance levels and effect sizes shrink to zero.

It seems to me that even with perfect methodology there would still be some chance of false-positive results, and that what these studies show is not what they are claimed to show — only that something other than chance may be at work, while giving no indication of what that may actually be. It could be due to poor experimental design, inappropriate use of statistics, errors in reasoning, bad data collection, or, rarely but often enough to taint the entire field of study, fraud.
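To make that base-rate point concrete, here is a minimal simulation sketch in Python (the group sizes, trial count, and threshold are arbitrary choices for illustration, not drawn from any particular parapsychology study): even when no effect exists by construction, a standard significance test will flag roughly five percent of experiments as “significant” at the conventional 0.05 level.

```python
# A sketch of why flawless methodology still produces false positives:
# simulate many experiments in which the null hypothesis is true by
# construction (both groups drawn from the same distribution) and count
# how often a standard t-test nonetheless reports p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments = 10_000
n_per_group = 30   # arbitrary group size for illustration
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)  # no real effect exists:
    treated = rng.normal(0.0, 1.0, n_per_group)  # both groups are identical
    _, p_value = ttest_ind(control, treated)
    if p_value < alpha:
        false_positives += 1

# Expect a rate near alpha (~0.05) despite there being nothing to find.
print(f"False-positive rate: {false_positives / n_experiments:.3f}")
```

This is exactly why a handful of initially “significant” studies, absent replication, shows only that something other than chance *may* be at work.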

One thing never fails, though, and that’s the rationalizations offered for this failure to replicate. This post deals with a species of error in reasoning: Special Pleading, the Post Hoc [after this, or “after the fact”] fallacy, or Ad Hoc [for this (only)] hypothesis, and sometimes just “covering your ass by making shit up.” I also aim to show that, under the right circumstances, it is not a fallacy at all.

This fallacy, regardless of its name, is an attempt to rescue a claim from disproof by inventing special reasons why it should be exempt from the usual standards of evidence, deflecting criticism without demonstrating that these alleged reasons are in fact true or actually exist apart from the claim they are meant to defend. Every attempt has been made to boil the following examples down to their essence and to avoid committing straw-persons:

Psi phenomena are shy, or jealous. They do not work in the presence of skeptics. Skeptical doubt cancels them.

What about this one?

Successful replications do occur, but the doubt of skeptics reading the journals they are published in reaches back through time, retroactively changing the experiment and causing it to fail.

Or…

Psi is an elusive and delicate phenomenon. Imposing excessively strict controls (read: adequate ones) in a sterile laboratory setting impedes Psi’s natural functioning.

What I find interesting about this sort of reasoning in its fallacious form is that it is considered acceptable in some circles.

Never mind that many of the replications are attempted by other believers, and by those without any apparent bias against the paranormal. Another such rationalization goes something like this:

They (believers or neutral parties who don’t get results) are burdened with a repressed skepticism that causes their replication attempts to fail, no matter what belief or neutrality they claim to have. These hidden attitudes unconsciously sabotage their efforts.

Never mind the fact that this argument rests on mere supposition, absent any valid psychological test. Those who reason thus are essentially claiming to be able to read minds, the very thing that some of these replication attempts have failed to demonstrate.

This phenomenon, in which some experimenters get positive results and others negative ones depending on their belief systems, is known in parapsychology as the Shyness Effect or the Wiseman Effect. In its broader form, applying to any field of science where attitudes may unconsciously influence results, it is called the Experimenter Effect, or Observer-Expectancy Effect, and it is one of the reasons for double-blinding studies and other forms of experimental controls.
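To illustrate the mechanics of double-blinding, here is a toy sketch (the participant IDs, group sizes, and code format are invented for the example; a real trial involves far more): neither subjects nor experimenters see condition labels, and only a third party holds the key until the data are collected.

```python
# A toy sketch of double-blind assignment: the experimenter works only
# with opaque codes, and the key linking codes to conditions stays sealed
# with a third party until the analysis is done.
import random
import secrets

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]  # invented IDs
conditions = ["treatment", "placebo"] * (len(participants) // 2)
random.shuffle(conditions)  # random assignment to conditions

blinding_key = {}    # sealed with a third party until unblinding
blinded_roster = {}  # all the experimenter (and subject) ever sees

for person, condition in zip(participants, conditions):
    code = secrets.token_hex(4)     # opaque label, e.g. "9f2c1ab3"
    blinding_key[code] = condition  # code -> actual condition
    blinded_roster[person] = code   # person -> code only

# Because neither party knows who got what, expectations on either side
# have no systematic way to leak into the recorded results.
```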

A good example comes from a series of medical studies of a procedure known as the portacaval shunt. An analysis of those studies found that researchers more enthusiastic about the procedure tended to get false-positive results more often than those not so inclined. And this was an assessment of an experimental surgical method, not of magic mind-powers.

Above were some examples of this form of argument used as a fallacy, but are Ad Hoc hypotheses always and everywhere bad reasoning?

Fortunately, no.

This can be a perfectly good way of reasoning, as long as at least one of the following conditions is met:

  • The reason offered for the failure to demonstrate something has already been shown, or can be shown, to exist independently of the hypothesis it is used to support. There must be valid evidence that it is true and relevant as a viable supporting reason.
  • The Ad Hoc hypothesis is interesting and fruitful, predicting new phenomena that could in principle be tested even before the hypothesis itself is confirmed. The key point is that it must be testable, whether by verification (for particular claims) or falsification (for general ones).
  • The Post Hoc reasoning is used to invent new and creative ways to test a claim, furthering inquiry rather than thwarting the goal of critical reasoning with silly excuses made up as needed.

A good example of an Ad Hoc hypothesis that was both interesting and fruitful was Einstein’s addition of the Cosmological Constant to General Relativity, which, though he later rejected it and called it his “greatest blunder,” has proven useful today in the concept of Dark Energy to explain the accelerating expansion of the universe. Another would be the Lorentz contraction, offered to explain the failure of the Michelson-Morley experiment to detect the Earth’s motion through the Ether, and later incorporated into Einstein’s Special Relativity.
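For concreteness, both of those ad hoc additions took a definite mathematical form (the standard textbook expressions, sketched here for reference rather than taken from anything in the original studies):

```latex
% Einstein's field equations with the cosmological-constant term
% \Lambda g_{\mu\nu}, originally inserted ad hoc to permit a static universe:
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8 \pi G}{c^{4}} T_{\mu\nu}

% The FitzGerald-Lorentz contraction, proposed ad hoc to explain the
% Michelson-Morley null result: a length L_0 moving at speed v through
% the supposed ether shortens along its direction of motion:
L = L_{0} \sqrt{1 - \frac{v^{2}}{c^{2}}}
```

In both cases the added term or factor made further testable predictions beyond the anomaly it was invented to patch, which is what separates a fruitful Ad Hoc hypothesis from mere special pleading.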

One thing to note about many forms of argument used as fallacies:

Philosophers and communications specialists may differ on this, but informal fallacies are not so much violations of argument form as violations of argument procedure: attempts to subvert the rules and goals of constructive argument and critical discussion. In this sense, they are abused, often out of ignorance but sometimes out of intellectual dishonesty, as rhetorical devices masquerading as cogent arguments when they are not. For ethical, productive argumentation, try to keep this in mind and avoid them yourself whenever possible. Happy debating.

Self-Deception: Can we lie to ourselves?


What does it mean to lie? In a strict sense, lying means relating a falsehood that one knows on some level to be just that — false. We are not always honest with ourselves, and our brains have many ways of deceiving themselves. We are frequently convinced, often quite strongly, of things that are simply not true, tenaciously holding some falsehoods as self-evident truths.

It seems paradoxical to be able to believe what we know to be false, so how then may it be possible to fool ourselves in that manner and still be aware, at least intellectually, that we are doing so?

First, a bit on what it means to ‘know’ something. Knowledge at its most basic level involves both awareness of an idea, or probable fact, and its acceptance.

Knowledge involves belief that something is the case, or that it is not; that belief is needed to fully grasp the intricacies and nuances of what is known. But we are not all the way there yet. We have a couple more steps to go…

To know something, we must not only be informed of a thing and believe it to be the case, but it must actually conform to existing facts — it must be true. Not just this, but there must be some grounds for believing it, something to convince us that we have knowledge and not just a lucky guess on our part.

Strictly speaking, you can’t really know something that’s false, and you can’t truthfully say you know something without good grounds…

…So,

There must be some information available, usually gained by our senses, by which we obtain those grounds and the justification for the item of knowledge we possess — some channel of information must necessarily and sufficiently complete the picture, so that we can confidently think that we know something.

Whether these grounds come from our own personal sensory experience, often enhanced by our instruments and other artifacts, from secondhand or further-removed testimony given by others (which usually needs some kind of grounding itself, like real and relevant expertise on the part of the source giving the testimony…), or possibly from other channels of information as well, we must have evidence, and it must be strong enough to justify the claim we accept.

*Ahem*

We surely deceive ourselves, convincing ourselves of probable falsehoods, and we often hold conflicting beliefs by walling them off from each other — even using processes of doublethink and rationalization to entertain them without the discomfort we often experience when both or all come to our conscious awareness at the same time.

It’s possible to have a lack of confidence in one’s knowledge, the niggling doubt that sometimes happens to those of us who hold all knowledge to be subject to correction with further and better grounds to retain, reject or amend what we know at any given time.

With doublethink, rationalization, logical fallacies, and belief in belief — believing for the sake of belief itself as a virtue — we may hold as at least partially true what we intellectually know (and thus to an extent accept) to be false, and move from compartmentalizing our accepted but conflicting claims into the territory of the pious fraud, and further into those of the pathological liar and victims of False Memory Syndrome…

…as well as when we willfully sacrifice the value of reason and evidence in favor of what feels good to us, rather than the uncomfortable realities we are often forced to deal with in daily life.

…Should I Believe?


Founding of The Planetary Society: Carl Sagan, Bruce Murray, and Louis Friedman, the founders of The Planetary Society, at the time of signing the papers formally incorporating the organization. The fourth person is Harry Ashmore, an advisor who greatly helped in the founding of the Society. Ashmore was a Pulitzer Prize-winning journalist and a leader in the Civil Rights movement in the 1960s and 1970s. (Photo credit: Wikipedia)

When we are faced with an incentive to believe something that agrees strongly with our prejudices, it comes down to a simple matter of “Can I believe?” that we may not even have to ask. We accept such things on a whim, unless we exercise care in our thinking. We have a tendency to first believe things we have an emotional investment in, and then cobble together, often quite ingeniously, reasons to justify our belief.

When we are faced with things we are disinclined to believe, things contrary to our ideologies or belief/value paradigms, it’s a matter of “Must I believe?”, as though we were faced with an uncomfortable choice. We go immediately on the defensive, mustering frequently clever rationalizations to attack the discomforting idea and defend our belief structures from harm.

But I would add a third option, shown by thinkers and investigators I’ve known, read, and listened to, who approach certain topics, nonscientific and scientific alike, as intellectual curiosities or academic subject matter, without a clear vested interest in accepting or rejecting the claims these concern:

“Should I believe? Do I have sound reasons to accept this claim as true, or do I have sound reasons to reject it as false, or worse, as not even wrong?”

Often these questions aren’t even consciously asked by those believing, disbelieving, or suspending judgment until the data are in.

But the first two involve belief or disbelief first, followed by attempts at conscious justification, often subjectively ironclad and often fallacious, whereas the third involves deliberation, a weighing of evidence and argument, followed by a tentative conclusion, possibly with leanings toward either end of a continuum from credulity to denial, but one subject to newer and better information and reasoning as they are presented.

The third option is uncommon, and involves thinking unfamiliar to most of us, but as it occurs with perfectly normal human brains operating with the proper training and accumulated habit, it is every bit as human as reflexive acceptance or knee-jerk rejection.

It’s something we probably did not specifically evolve to do, but, like playing a piano (also without a direct adaptive function), it is something we can learn to do, and quite skillfully for many of us.

I think it’s something worth doing, but it requires that we consciously override some of our impulses, consider our thinking and our motivations, and mind the soundness of our reasoning and the solidity of the facts we claim, always considering that these things all have limits — they are fallible, but, used well and with care, reliable and effective as paths to real knowledge.

We must consider the input and critiques of others, for alone we are prone to misleading ourselves, even the smartest and best educated of us, with our own biases and fallacies of thinking and memory.

To quote the late Carl Sagan, “Valid criticism does you a favor.”

This is why modern science acts as a community: so that researchers can get public input from their colleagues and cross-check their findings. It is also why external replication of results is of the greatest importance — one-off phenomena that are impossible to verify objectively are of little use, and any finding must at least in principle be testable, or it cannot be demonstrably known.

Scientific inquiry works as effectively as it does because, unlike any other set of methods, it can tell us when we are wrong, and even then it sometimes lets us continue to reap discoveries from failed ideas that lead into new territory.

Is there something better than this now? Will there be, ever?

I don’t know, to both questions. If such a set of methods exists, I’ve not heard of it, and apparently, neither has anyone else I know of.

But if and when something superior comes along, something that more effectively and accurately does what scientific inquiry (and, as part of it, skeptical inquiry) does at the moment, then I shall happily change my mind about science and support whatever works best instead.

Demythologizing Reason


Reason has its problems and its limits, even among those of us with an appreciation of its usefulness. Like anything else, it should never be beyond, above, or outside of questioning when it can be shown to be in error. The delusion that critical reasoning needs no skill or care in its use has done much to misrepresent it in the popular culture, as has the equally delusional notion that reason alone can reveal what is true about contingent matters of fact in the world.

Ill-considered, untrained, and incautious reasoning is quite prone to lead to the most egregious of fallacies, and merely being educated and intelligent is not enough to prevent this. In fact, the smarter and more knowledgeable tend to be even better at fooling themselves with ever more intricate logical constructions that serve only to justify prejudices and prior errors in thinking and behavior.

If we heed not the soundness or cogency of our own arguments, their validity or strength, and the truth of the premises we reason from, then merely being informed of logical errors, applied only to the arguments of others, will do us no good. This is why I have difficulty taking religious apologists seriously: few seem very concerned at all with the logic or assumptions going into their own arguments, yet they are all too eager to accuse their opponents of fallacies, even where such errors have not actually been made. Some fallacies exist only in the mind of the beholder.

This is something to be careful of when evaluating arguments.

Brilliance and learning offer no guarantee against gullibility and errors in reasoning.

Often, our tendency to dismiss and rationalize criticism in defense of our notions and presumptions, when we are not honest with ourselves, must be checked by a concern for more than just subjective truisms and opinions — we need criticism, when it’s valid, as a counterbalance to our own mistakes and sloppy thinking.

No one who really understands logic — what it is, what it’s good for, what it can’t do — believes that simply knowing about logical fallacies makes anyone perfectly logical and rational; there is no such person alive, save those with some rare brain pathologies or neurological damage. Such unfortunates, being totally rational, lack the necessary drives to action provided by emotions.

When we think our way from A to B to C, and so on, from a set of facts, observations or assumptions we treat as facts, to a conclusion on some matter of importance to us, whether certain or tentative, we are engaged in reasoning.

To do so well requires skill, effort, and due care in our thinking — most of us are poorly equipped at first with the mental software to do it reliably and effectively, but we can learn.

Though we are reasoning animals, we are often not as reasonable as we could be, given the fallacies, biases, and shortcuts in our thinking that we commit, sometimes with disturbing regularity, and which make it necessary for us to continually scrutinize our thinking and motivations — metacognition: thinking about our thinking.

So reason alone can say nothing about truth, only validity, for reason is truth-preserving, not truth-finding. A valid argument with false premises (“All fish can fly; salmon are fish; therefore salmon can fly”) is flawlessly logical and utterly wrong about the world. Reason is useful for assessing the consistency and validity of statements, though by itself, bereft of observations, assumptions, or facts, it is sterile, needing no referent to the real world. Formal structures of reasoning are not literally how we think — they are a reconstruction of our thinking that makes intuitive sense, a convenient model of thought that serves the purpose of focusing and clarifying our understanding.

Informal reasoning is hounded by the problem of induction: all inductive reasoning rests on the assumption of the regularity of nature, which cannot be proven deductively, and only circularly by induction itself. But this is not really a problem once you consider that inductive reasoning is concerned not with proof but with probability. We can bite the proverbial bullet and trudge on, without needing certainty in the justification of our reasoning.

Proof, especially absolute, physical, and concrete proof, is a will-o’-the-wisp outside of mathematics and formal logic, and it is induction, not deduction, that deals with contingent matters of reality, where we need confidence, not certitude.

So we must use reason, use it well, and use it often, but remember the purpose it serves. Professor Marianne Talbot has called it “the ultimate transferable skill” — it may be used for almost anything — but keep it in its place as a useful tool, only one of many we use to aid our thinking, and, while a worthy piece of equipment, bear in mind that it is not a final goal, only a way of reaching one.

“Logic is the beginning of wisdom, not the end.” ~ Leonard Nimoy as Mr. Spock, from Star Trek

Some Basics for Intellectually Honest Discussion


In any discussion involving disagreement between two or more parties, I’ve noticed a few guidelines that should generally be observed if the discussion is to actually accomplish something meaningful.

These are needed when what is desired is more than just flaming common message-board trolls, more than a bickering pissing contest or argument by assertion; when what is desired is willing agreement, agreement to disagree, or the chance to clarify or elaborate on a previously asserted position.

Here are a few I find handy, and I expect these, under near ideal conditions, of those I argue with as well:

  1. Avoid using partisan-sounding loaded language, and try to be clear in your meaning if the goal is illumination, not obfuscation. Use of loaded language that appeals to ideologies and attitudes your interlocutor doesn’t share is inappropriate and may prevent them from seeing things ‘your way’ via a negative attitudinal reaction to your choice of words. Presentation is everything.
  2. Mind the soundness of your reasoning, and try to avoid committing obvious logical and rhetorical fallacies. This is even more important than pointing out such fallacies in your partner’s arguments, and if you cannot see the flaws in your own reasoning, he or she almost certainly can and will if so inclined and skilled.
  3. Avoid the use of cherry-picked quotes, factoids, or other data taken out of their proper context to support your point, and make sure your sources both pertain to the topic of discussion and support your case. Nothing is more embarrassing than quoting a source to shore up your point, only to find out that in its full context it has nothing to do with what you said, or that it even outright contradicts your point.
  4. This should go with [3.] but is important on its own, too: Make sure your sources are reliably trustworthy and the information you use from them is factually correct. Check your facts — if you don’t, your argument partner will, and will call you on it.
  5. Address the argument made, and only the argument, not a straw-person caricature of it, and not the person making it, unless some circumstance both true and relevant to the argument warrants questioning its source. Insults and snark should be used as adjuncts to, not replacements for, strong or valid argument. Don’t be a dick unless the other person is as well; then all bets are off, and you may fire at will, Mr. Gridley.
  6. Respect your opponent. Showing respect for your argument partner’s/opponent’s personhood, rights, autonomy, intelligence, and perspective allows you to claim that same respect for yourself, and all of the previous guidelines rest upon this one.
  7. Be prepared to admit when you are shown to be wrong, allow yourself to be corrected, and move on. Nothing, and I mean nothing, is more destructive to civil discourse than a dogmatic need for certainty, and nothing is more tempting in our tumultuous, ever-changing world than metaphysical certitude’s siren song of absolute Truth™ calling to the unwary.
  8. Finally, and most importantly, know your own biases and consider how they may intrude on your objectivity: None of us are as objective as we think ourselves to be, but we can make ourselves more so if we know and account for our biases in our thinking and perspective. Typical examples are confirmation bias, selective thinking, the attribution error, the representativeness heuristic, the superiority bias, the availability error, and many, many others. I encourage you to read up on these and similar errors and shortcuts in thinking — you won’t be sorry that you did. Every one of us is skewed in our views, but by learning of and accounting for this, we are the wiser for it.

Namaste.

Critical Thinking & Motivated Reasoning


The Spinning Dancer appears to move both clockwise and counter-clockwise. (Photo credit: Wikipedia)

The skill sets of critical thinking, often called a baloney detection kit, are useful to an extent, but they can also be misused, fallaciously applied out of context, and even mistakenly employed as a mechanical, rote substitute for thinking.

This is especially the case when we use our critical faculties to rationalize from a conclusion rather than reason our way from a set of facts or assumptions to a likely conclusion through a rational process.

The two processes are mirror images of each other: one we do unthinkingly all the time; the other we do when we exercise caution and diligence in our thinking.

The former we do when we want a position we hold to be true even when it is not, and the latter when we take due care in accounting for our biases, thinking clearly and deeply, and reaching a reasonable conclusion on what is probably true even when we may not like it very much.

We are all of us to some extent guided by our biases, our world views, our preconceptions, our prior beliefs and knowledge, our expectations, our wants and desires, and our many cognitive flaws, including the confirmation bias and the availability heuristic.

There are also the limits and flaws inherent to our sensory equipment and the perceptual models our brains construct for us. These take our sensory data and weave it into an apparent unity from the disconnected chaotic bits that we see, hear, smell, and perceive through other senses.

We are subject, even the sanest of us, to hallucinations, optical illusions, and the tendency to note patterns in what really amounts to nothing more than random sensory noise.

Cover of "The Demon-Haunted World: Scienc...

Cover via Amazon

This applies to everyone. Scientific skepticism is the ability to overcome this: to recognize our limits, flaws, and routes of self-deception, and to guard against them with wariness, care, skill, and scientific thinking.

Motivated reasoning is the opposite of critical reasoning: we attempt to use our baloney detector, filtered through unnoticed biases, wishes, and preconceptions, to rationalize and muster arguments for ‘the cause,’ even to the point of accusing our opponents of the very same logical fallacies and factual errors we ourselves are committing. The line between self-deception and willful charlatanry is not easily drawn, and most often lies somewhere in between.

Falsifying evidence and counterarguments are ignored or dismissed, while any data and arguments that could support the predetermined conclusion, however weak, invalid, irrelevant, or meaningless, are sought out, quickly noted, and seized upon by the proponent as ammunition.

I see this often, among religious apologists, conspiracy theorists, science and history deniers, and even those who deny the very existence of reality and knowledge itself. All attempt to defend their belief systems from falsification and themselves from cognitive dissonance, often completely unaware of the direction they argue in or of the fallacies they commit. Not always, but often.

This is especially so when one has only the most superficial understanding of critical thinking, able at best to go through the motions, plus an unhealthy dose of the Dunning-Kruger effect — being too incompetent to recognize one’s own incompetence.