
Someone’s being silly on Yahoo!

Chris Parsons on Yahoo! News posted this article on Will Storr’s new book ‘The Heretics: Adventures With The Enemies Of Science,’ and his post raised a few livid crimson flags, which I’ll point out below. His article starts innocently enough with this statement:

“Will Storr is a man who deals in facts. As a journalist of more than 10 years, undeniable evidence and rational data are his bread and butter. There are groups of people, however, who deny the irrefutable; who see cold, hard facts as mistruths or simply inconvenient.”

Parsons goes on to include among these people Holocaust deniers, creationists, and UFO believers. Still, given what I’ve learned about the rank-and-file of journalistic media over my six years as a skeptic, I may be excused a little suspicion of a possible argument from authority in Parsons’ first two sentences, which describe few journalists in most of the outlets I’ve heard or read.

Not to diss journalists as a whole, but fewer in the profession than there should be are as fact-based as is being implied. Let’s grant, shall we, that Mr. Storr is one of the few who sticks to facts and rationality and knows what he’s doing, though I’d never heard of him before now.

Parsons continues, with this:

“So why are there intelligent, seemingly rational people like this, who are capable of such unreasonable logic? The question is the subject of Storr’s new book, which explores the ‘beliefs of non-believers’. Put simply, he wants to know why ‘facts don’t work’.”

Nothing wrong with this, since having beliefs in denial of facts is not inconsistent with intelligence, education, or otherwise rational thinking in people. We do tend to rationalize our beliefs and cherry-pick our data to support them through confirmation bias, as Parsons notes, and the smarter and better educated we are, the better we are at rationalizing.

This has already been done, and, given what follows in the quotes below, a much better job made of it, in Michael Shermer’s 2011 book ‘The Believing Brain.’ Here’s my review of it.

My skeptic sense began to tingle when I saw this, flipping the “on” switch of my baloney detector:

“Storr studies not only the thought process behind conspiracy theories, but also the unwavering rationalism of their opponents.”

My suspicions were put on full alert, my logical fallacy meter suddenly hitting the “red” end of the scale a few lines down, when I saw this:

“He attends a conference of ‘sceptics’, who insist there is ‘no evidence for homeopathy’. When he asks the sceptics what scientific literature on homeopathy they’ve read to support these claims, many admit they haven’t read any. This isn’t to say that homeopathy isn’t legitimate – merely that many ‘rationalists’ dismiss it because they don’t want to believe it in the first place.”

WTF?? What a remarkably naive and illogical set of statements… a perfect trifecta of an ad hominem dismissal, a straw-person fallacy, and a blatant argument from ignorance. Since this passage isn’t presented as a quote from Storr, I’ll assume it’s in Parsons’ own words, at best a paraphrase.

The statement attempts to dismiss skeptical criticism of homeopathy by completely misrepresenting the skeptical position, trotting out the tired old accusation that skeptics are closed-minded, and at the same time improperly shifting the burden of proof by arguing from a lack of data: “If you haven’t seen any scientific proof that (insert favorite belief) isn’t true, then it must be legitimate. You just don’t want to accept it!”

Skeptics dismiss homeopathy with good reason: it’s scientifically implausible on its face, and, most importantly, no valid, well-controlled studies in the literature support its therapeutic use at all, except maybe for curing your thirst. Every adequate double-blind study conducted so far has found no medical efficacy for homeopathy whatsoever. It has either never been found to work, or has been found not to work.

It’s essentially magic water.
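The “magic water” point follows from simple arithmetic. As a minimal sketch, assuming a standard 30C preparation (thirty successive 1:100 dilutions, a common figure in homeopathy), the total dilution factor dwarfs Avogadro’s number, so the final dose is overwhelmingly unlikely to contain even a single molecule of the original substance:

```python
# Back-of-the-envelope check: how much of the original substance
# survives a 30C homeopathic dilution (30 successive 1:100 dilutions)?

AVOGADRO = 6.022e23          # molecules per mole
DILUTION_FACTOR = 100 ** 30  # 30C = a total dilution of 1e60

# Suppose we start with a generous 1 mole of active ingredient.
starting_molecules = 1.0 * AVOGADRO

# Expected number of original molecules left in the final preparation:
expected_remaining = starting_molecules / DILUTION_FACTOR

print(f"Expected molecules remaining: {expected_remaining:.2e}")
# On the order of 1e-36: effectively zero. Whatever you are drinking,
# it is not the "active" ingredient.
```

Even granting a full mole of starting material, the expected count of surviving molecules is some thirty-six orders of magnitude below one.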

Accepting homeopathy as science would force us to discard major portions of physics and chemistry without any reason, given the lack of evidence and silly arguments advanced for it.

Don’t believe me? See for yourself. I suggest checking out JSTOR, The Lancet, the New England Journal of Medicine, Nature, Science, and PubMed. Be aware that you may have to pay a subscription or article download fee. If nothing else, take a look at the abstracts given for the articles. I have, on those occasions when buying the article was for whatever reason not an option.

Don’t trust peer-reviewed journals because you think the peer-review process and science as a whole are broken? Sorry, but I can’t help you there.

Well, given this I’ll probably buy the book, but this article doesn’t really raise my expectations.


Post Hoc Reasoning, Special Pleading, and Ad Hoc Hypotheses

The powers of the paranormal, if they exist, cannot be very great if they are so easily thwarted by mere doubt. It seems as though, in the world of supernatural claims, doubt is the strongest magic of all: it can cancel anything except science, which actually needs it to work. At least, this is the impression I get from the claims of paranormal believers when attempts to replicate initially successful parapsychology studies fail. And fail they have, once the controls of the initial studies are improved, reducing statistical significance to near-chance levels and shrinking effect sizes to zero.

It seems to me that even with perfect methodology there would still be a chance of false-positive results, and that what these studies show is not what they are claimed to show — only that something other than chance may be at work, with no indication of what that something actually is. The apparent effect could be due to poor experimental design, inappropriate use of statistics, errors in reasoning, bad data collection, and, rarely but often enough to taint the entire field of study, fraud.
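The false-positive point is easy to demonstrate with a toy simulation of my own devising (not from any study discussed here): run many “experiments” in which nothing but chance is operating, and a fixed significance criterion will still flag a predictable fraction of them as positive results.

```python
import random

# Toy illustration of false positives under a true null hypothesis:
# each "experiment" flips a fair coin 100 times, and we declare the
# result "significant" if the head count strays far from 50.
# No real effect exists, yet some experiments will still pass.

random.seed(42)  # fixed seed for a reproducible run

N_EXPERIMENTS = 20_000
FLIPS = 100
# |heads - 50| >= 10 corresponds to roughly p < 0.06 two-sided for a
# fair coin -- close to the conventional 0.05 significance criterion.
THRESHOLD = 10

false_positives = 0
for _ in range(N_EXPERIMENTS):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS))
    if abs(heads - 50) >= THRESHOLD:
        false_positives += 1

rate = false_positives / N_EXPERIMENTS
print(f"False-positive rate under a true null: {rate:.3f}")
# Roughly five percent of perfectly conducted "experiments" report an
# effect that is not there -- which is why replication matters.
```

This is the statistical floor that even flawless methodology sits on; sloppy controls and biased analysis only raise it.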

One thing never fails, though, and that’s the rationalizations offered for this failure to replicate. This post deals with a species of error in reasoning known variously as Special Pleading, the Post Hoc [“after this,” or “after the fact”] fallacy, or the Ad Hoc [“for this (only)”] hypothesis, and sometimes just “covering your ass by making shit up.” I also aim to show that, under the right circumstances, it is not always a fallacy.

This fallacy, regardless of its name, is an attempt to rescue a claim from disproof by inventing special reasons why it should be exempt from the usual standards of evidence, to deflect criticism without demonstrating that these alleged reasons are in fact true or actually exist apart from the claim they attempt to defend. Every attempt has been made to boil the following examples of its use down to their essence and to avoid committing straw-persons:

Psi phenomena are shy, or jealous. They do not work in the presence of skeptics. Skeptical doubt cancels them.

What about this one?

Successful replications do occur, but the doubt of skeptics reading the journals they are published in reaches back through time, retroactively changing the experiment and causing it to fail.


Psi is an elusive and delicate phenomenon. Imposing excessively strict controls (read: adequate ones) in a study impedes Psi’s natural functioning in a sterile laboratory setting.

What I find interesting about this sort of reasoning in its fallacious form is that it is considered acceptable in some circles.

Never mind that many of the replications are attempted by other believers, or by those without any apparent bias against the paranormal. Another such rationalization goes something like this:

They (believers or neutral parties who don’t get results) are burdened with a repressed skepticism that causes their replication attempts to fail, no matter what belief or neutrality they claim to have. These hidden attitudes unconsciously sabotage their efforts.

Never mind the fact that this argument is made on the basis of mere supposition and absent the use of a valid psychological test. Those who reason thus are essentially claiming to be able to read minds, the very thing that some of these replication attempts have failed to demonstrate.

This phenomenon, in which some researchers get positive results and others negative ones depending on their belief systems, is known in parapsychology variously as the Shyness Effect or the Wiseman Effect, or, in a form applying broadly to any field of science where attitudes may unconsciously influence results, the Experimenter Effect or Observer-expectancy Effect. It is one of the reasons for double-blinding studies and for other forms of experimental controls.

A good example comes from a series of medical studies of a procedure known as the portacaval shunt: an analysis of these studies found that researchers who were more enthusiastic about the procedure tended to get false-positive results more often than those not so inclined. And this was from studies assessing an experimental surgical method, not magic mind-powers.

Above were some examples of this form of argument used as a fallacy, but are Ad Hoc hypotheses always and everywhere bad reasoning?

Fortunately, no.

This can be a perfectly good way of reasoning, as long as at least one of the following conditions is met:

  • The reason for the failure to demonstrate something has already been shown, or can be shown, to exist independently of the hypothesis it is used to support. There must be valid evidence that it is true and relevant as a viable supporting reason.
  • The Ad Hoc hypothesis is interesting and fruitful, predicting new phenomena that could in principle be tested even if the hypothesis itself turns out to be false. The key point is that it must be testable, open to verification or falsification, whether it is a general or a particular claim.
  • The Post Hoc reasoning is used to invent new and creative ways to test a claim, and as long as it is used to further inquiry and not merely to thwart the goal of critical reasoning by making up silly excuses as needed.

A good example of an Ad Hoc hypothesis that was both interesting and fruitful was Einstein’s addition of the Cosmological Constant to General Relativity, which, though he later rejected it and called it his “greatest blunder,” has proven useful today in the concept of Dark Energy, invoked to explain the accelerating expansion of the universe. Another would be the Lorentz contraction, offered to explain the failure of the Michelson-Morley experiment to detect the Earth’s motion through the Ether, and later incorporated into Einstein’s Special Relativity.
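For reference, the FitzGerald-Lorentz contraction proposed that a body moving at speed v through the Ether shrinks along its direction of motion by just enough to mask the interference shift Michelson and Morley expected to see:

```latex
% Length contraction along the direction of motion,
% where L_0 is the rest length and c the speed of light:
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}
```

Proposed purely to save the Ether hypothesis, the formula survived on its own merits once Special Relativity rederived it from first principles.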

One thing to note about many forms of argument used as fallacies:

Philosophers and communications specialists may differ on this, but informal fallacies are not so much violations of argument form as they are violations of argument procedure, as attempts to subvert the rules and goals of constructive argument and critical discussion. In this sense, they are abused, often out of ignorance but sometimes out of intellectual dishonesty, as rhetorical devices masking themselves as cogent arguments when they are not. For ethical, productive argumentation, try to keep this in mind and avoid this yourself whenever possible. Happy debating.

Neil deGrasse Tyson on the Argument from Ignorance and UFOs

This is from a while back, but at ten minutes, it’s the most complete, in-depth discussion Dr. Tyson has given on this subject.

Self-Deception: Can we lie to ourselves?

What does it mean to lie? In a strict sense, lying means relating a falsehood that one knows on some level to be just that: false. We are not always honest with ourselves, and our brains have many ways to deceive themselves. We are frequently convinced, often quite strongly, of things that are simply not true, tenaciously holding some falsehoods as self-evident truths.

It seems paradoxical to be able to believe what we know to be false, so how then may it be possible to fool ourselves in that manner and still be aware, at least intellectually, that we are doing so?

First, a bit on what it means to ‘know’ something. Knowledge at its most basic level involves both awareness of an idea, or probable fact, and its acceptance.

Knowledge involves believing that something is or is not the case, a belief needed to fully grasp the intricacies and nuances of what is known. But we are not all the way there yet; we have a couple more steps to go…

To know something, we must not only be informed of it and believe it to be the case; it must actually conform to existing facts — it must be true. Not only this, but there must be some grounds for believing it, convincing us that we have knowledge and not just a lucky guess.

Strictly speaking, you can’t really know something that’s false, and you can’t truthfully say you know something without good grounds…


There must be some information available, usually gained by our senses, by which we obtain those grounds and the justification for the item of knowledge we possess — some channel of information must necessarily and sufficiently complete the picture, so that we can confidently think that we know something.

Whether these grounds come from our own personal sensory experience, often enhanced by our instruments and other artifacts, secondhand or further removed testimony given by others (usually needing some kinds of grounding itself, like real and relevant expertise of the source giving the testimony…) and possibly other channels of information as well, we must have evidence, and it must be strong enough to justify the claim we accept.


We surely deceive ourselves, convince ourselves of probable falsehoods and we often hold conflicting beliefs by walling them off from each other — and even using processes of doublethink and rationalization to entertain them without the discomfort we often experience when both or all come to our conscious awareness at the same time.

It’s possible to have a lack of confidence in one’s knowledge, the niggling doubt that sometimes happens to those of us who hold all knowledge to be subject to correction with further and better grounds to retain, reject or amend what we know at any given time.

With doublethink, rationalization, logical fallacies and belief in belief — believing for the sake of belief itself as a virtue — we may hold as at least partially true what we intellectually know (and thus to an extent accept) to be false, and move from compartmentalizing our accepted and conflicting claims into the territory of the pious fraud, and further into those of the pathological liar and victims of False Memory Syndrome…

…as well as when we willfully sacrifice the value of reason and evidence in favor of what feels good to us, rather than the uncomfortable realities we are often forced to deal with in daily life.

…Should I Believe?


Part of Image:Planetary society.jpg Original caption: “Founding of the Planetary Society Carl Sagan, Bruce Murray and Louis Friedman, the founders of The Planetary Society at the time of signing the papers formally incorporating the organization. The fourth person is Harry Ashmore, an advisor, who greatly helped in the founding of the Society. Ashmore was a Pulitzer Prize winning journalist and leader in the Civil Rights movement in the 1960s and 1970s.” (Photo credit: Wikipedia)

When we are faced with an incentive to believe something that agrees strongly with our prejudices, it comes down to a simple matter of “Can I believe?” that we may not even have to ask. We accept such things on a whim, unless we exercise care in our thinking. We have a tendency to first believe things we have an emotional investment in, and then cobble together, often quite ingeniously, reasons to justify our belief.

When we are faced with those things we are disinclined to believe, things contrary to our ideologies, or belief/value paradigms, it’s a matter of “Must I believe?” as though we are being faced with an uncomfortable choice and go immediately on the defensive with frequently clever rationalizations we muster to attack the discomforting idea and defend our belief structures from harm.

But I would add a third option, shown by thinkers and investigators I’ve known, read, and listened to, who approach certain topics, nonscientific and scientific alike, as intellectual curiosities or academic subject matter, without a clear vested interest in accepting or rejecting the claims concerned:

“Should I believe? Do I have sound reasons to accept this claim as true, or do I have sound reasons to reject it as false, or worse, as not even wrong?”

Often these questions aren’t even consciously asked by those believing, disbelieving, or suspending judgment until the data are in.

But the first two involve belief or disbelief first, followed by attempts at conscious justification, often subjectively ironclad and often fallacious, whereas the third involves deliberation, a weighing of evidence and argument, followed by a tentative conclusion, possibly leaning toward either end of a continuum from credulity to denial, but a conclusion subject to revision as newer and better information and reasoning are presented.

The third option is uncommon, and involves thinking unfamiliar to most of us, but as it occurs with perfectly normal human brains operating with the proper training and accumulated habit, it is every bit as human as reflexive acceptance or knee-jerk rejection.

It’s something we probably did not specifically evolve to do, but, like playing a piano (also without a direct adaptive function), it is something we can learn, and quite skillfully for many of us.

I think it’s something worth doing, but it requires that we consciously override some of our impulses, consider our thinking, our motivations, and mind the soundness of our reasoning and solidity of the facts we claim, and always consider that these things all have limits — they are fallible, but used well and with care, reliable and effective as paths to real knowledge.

We must consider the input and critiques of others, for alone we are prone to misleading ourselves, even the smartest and best educated of us, with our own biases and fallacies of thinking and memory.

To quote the late Carl Sagan, “Valid criticism does you a favor.”

This is why modern science works as a community: researchers can get public input from their colleagues and cross-check one another’s findings, and it is why external replication of results is of the greatest importance. One-off phenomena that are impossible to verify objectively are of little use, and any finding must at least in principle be testable, or it cannot be demonstrably known.

Scientific inquiry works as effectively as it does because, unlike any other set of methods, it can tell us when we are wrong, and even failed ideas sometimes continue to yield discoveries that lead to new territory.

Is there something better than this now? Will there be, ever?

I don’t know, to both questions. If such a set of methods exists, I’ve not heard of it, and apparently, neither has anyone else I know of.

But if and when something superior comes along, that more effectively and accurately does what scientific inquiry, and as part of it, skeptical inquiry does at the moment, then I shall happily change my mind about science and support whatever works best instead.

