This is one of the best quotes I’ve heard this year, from an atheist blogger on Patheos writing under the pseudonym Kaveh Mousavi. He may be found at his blog, On the Margin of Error, and on Twitter @PatheosKaveh
Talotaa frang (be well!). This week, I present some wallpapers made with UltraFractal 5, two of them with an embossed look, as I explore the app’s flexibility with that sort of image. There seems to be a compatibility mismatch between certain formulas and the embossing coloring algorithm, so it won’t work for all of them. I’m in the process of learning more about the language the formulas are written in so that I can expand, through custom presets, the variety of interesting fractal types that may be used. This will be combined with a more thorough look at the formulas and their options, to enlarge my sample of which formulas permit embossing. One thing is a definite improvement over XaoS embossed fractals, and that’s the shading and color that’s possible through UltraFractal.
So here’s a sample of that.
This has a curious, almost spiky quality to it, centering on a mini-Mandelbrot set, almost like a spider at the center of its web. This was a deep zoom, with a slight incompatibility with the coloring algorithm that apparently did not significantly hurt the image.
This was a normal, non-embossed fractal using a rings-and-loops coloring algorithm that I nonetheless had fun with. I’ve saved my favorite for last…
This uses a Newton fractal type with embossing, and here I’ve adjusted the shading angle slightly for effect.
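UltraFractal’s formula language handles all of this internally, but the two ingredients are easy to sketch. Below is a minimal, hypothetical Python illustration (my own code, not UltraFractal’s): a Newton fractal for f(z) = z³ − 1, and an emboss-style shading pass that dots the local gradient of the iteration counts with a light direction, so rotating the angle shifts where the highlights fall, much like adjusting the shading angle mentioned above.

```python
import cmath
import math

def newton_fractal(width=80, height=60, xmin=-2.0, xmax=2.0,
                   ymin=-1.5, ymax=1.5, max_iter=50):
    """Iteration counts for Newton's method on f(z) = z**3 - 1.

    Each grid point records how many steps it took to converge
    to one of the three cube roots of unity."""
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    counts = [[0] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            z = complex(xmin + (xmax - xmin) * i / (width - 1),
                        ymin + (ymax - ymin) * j / (height - 1))
            n = 0
            while n < max_iter:
                if any(abs(z - r) < 1e-6 for r in roots):
                    break          # converged to a root
                d = 3 * z * z      # derivative f'(z)
                if d == 0:
                    break          # Newton step undefined at z = 0
                z -= (z ** 3 - 1) / d
                n += 1
            counts[j][i] = n
    return counts

def emboss(counts, angle_deg=45.0):
    """Emboss-style shading: dot the local gradient of the count
    field with a light direction. Rotating angle_deg moves the
    highlights, like adjusting a shading angle."""
    lx = math.cos(math.radians(angle_deg))
    ly = math.sin(math.radians(angle_deg))
    h, w = len(counts), len(counts[0])
    shaded = [[0.0] * w for _ in range(h)]
    for j in range(1, h - 1):
        for i in range(1, w - 1):
            gx = counts[j][i + 1] - counts[j][i - 1]   # horizontal gradient
            gy = counts[j + 1][i] - counts[j - 1][i]   # vertical gradient
            shaded[j][i] = gx * lx + gy * ly
    return shaded
```

Mapping the shaded values onto a gray ramp gives the bas-relief look; real fractal software then layers proper color gradients on top.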
These images are all 1600×900 pixels in size, so click to humongify for full size.
Tf. Tk. Tts.
All JPEG, PNG & GIF images in this post are original works by the author, created via a variety of apps and unless otherwise stated are copyright 2016 by Troy Loy. I hereby permit the free, noncommercial use of these images, with proper attribution or a link back to the original source. Thank you!
Well, well… I guess aliens use fractals for their ship designs. Allegedly taken from a landing bay in Atlantis under the Bermuda Triangle by a reliable witness who got abducted and was lucky enough to have his cell with him at the time. The ETs will be pissed at their onboard security detail of chupacabras and bigfoots, who missed it. Powered by a homeopathic generator that fuses hydrogen diluted to nothing in Quantum™ vacuum for maximum output…😉
Welcome to the first installment of Caturday’s Astrophenia on this blog that isn’t retroposted from an earlier date. Here are links to the site Astronomy Picture of the Day, plus space science news items from a variety of sites, and often something else as well. So click away, enjoy the content of this fortnightly roundup, and thanks much for visiting this blog as I move ever closer to semi-regular posting, somewhere in that middle ground between one day and five each week!
Tf. Tk. Tts.
This entry deals with an error in reasoning known as The Appeal to Nature, often confused with the Naturalistic Fallacy, and its reverse, the Moralistic Fallacy.
To keep things short n’ simple, the Naturalistic Fallacy is confusing a statement of fact with a statement of subjective worth, a value judgment (deriving an ought from an is without sound justification), while the Moralistic Fallacy is confusing a value judgment with a statement of fact (an is from an ought), with an equal lack of good reason.
Is does not by necessity imply Should be, nor does Should be imply Is.
Both are informal fallacies.
The Appeal to Nature typically takes the form of an argument that because something is natural (the familiar marketing buzzwords organic and holistic serve a similar use), it is therefore right, good, safe, or better than something that is artificial or more efficiently produced; or, conversely, that because something is artificial, it is therefore inferior, undesirable, bad, or wrong…
- Someone tripped over an invisible turtle and staved in his skull on a rock. It is therefore correct to assume that the natural causation (gravity, and impact with a large piece of rock, a naturally-occurring substance…) of the damage to his cranium means that the damage is therefore right or desirable, and should thus not be treated by a physician.
The above is perhaps an extreme example, and then there is the following:
- Vitamin A is harmful to the body when artificial, but not when natural, in any arbitrary amount of either.
The above is fallacious because whether natural or synthetic, vitamin A is exactly the same molecule regardless of how or where it is produced, and whether it is safe or harmful depends entirely on the dosage.
Because these last two examples are concerned with the goodness/badness or rightness/wrongness of something based on its origins, they also constitute variations of a genetic fallacy.
There are a great many things of completely natural origin that are nonetheless rather unsafe. A few are below:
- Arsenic, cadmium, and uranium are three highly toxic natural elements…
- There are the herbs hemlock, foxglove, and belladonna…
- …in addition to all other animal, mineral, and plant toxins, such as the venom of certain species of trapdoor spiders, curare, rattlesnake venom, platypus venom, and a huge host of poisonous fungi such as Amanita muscaria…
A common argument using this fallacy is the assertion that behaviors and practices “found in nature” are good, or more desirable than modern ones, treating some of our more detrimental evolved social instincts as “right” even though, in our modern technological society, they no longer confer the survival benefits they gave our ancestors on the plains of Africa, now that we are a global species. The following is one such argument…
- Before we were a technological species we were one with the natural world, but soon we shall pay for our continuous crimes against nature. For every illness we’ve bought temporary respite from through modern medicine, another, more resistant strain takes its place. How long can we keep at our unnatural and invasive medical procedures, when all we are doing is living longer and longer and getting sicker and sicker with each new ‘advance’ in treatment? The only way we can be truly healthy is to return to our roots, to return to nature and relinquish the evils of science and technology.
First, humans have always been a technological species; we were technological even before we were fully human.
Second, we were then mostly ignorant about the natural world, and saw the supernatural everywhere.
As a species, we understand Nature far better now than we once did. Instead of attributing everything to mystical influences in our ignorance, praying and chanting to invisible and probably nonexistent spirits in invariably failed attempts to cure horrible illnesses, we have gained, with that better understanding, a closeness to nature unparalleled by our ancestors even a century ago.
We understand better how nature works, and our scientific and medical advances, limited though they may be in some areas, have given those with access to them greater health and quality of life, and the greater modern life-expectancy and standards of living are reliable indicators of this.
Those belonging to a typical middle-class family live longer and in greater luxury on the average than a medieval king, though perhaps with less gold stashed in the cellar.
Yes, scientific advances are a double-edged sword, but we can’t solve the problems brought about by knowledge by replacing it with ignorance. A problem caused by misuse of knowledge can only be remedied by the use of better knowledge than that which caused the original problem.
Have fun looking for this fallacy in everyday discussions and in the media, especially in advertisements for questionable ‘food supplements’ and medical products that use the buzzwords mentioned above; such products tend more often than not to carry a high price tag, at least in cash if not in possible health consequences.
Consider: Despite what mystics and quacks will tell you, it pays to be skeptical.
People often harbor the misconception that science is just an archive, a library, a stagnant body of facts, or a belief system (a collection of truths, a particular worldview, an ideology), rather than what it is: a vibrantly active, contentious, competitive, and continually advancing search for knowledge, one that gives us as a species the ability to enhance our understanding of the world and ourselves.
Science isn’t the way it is today because some patriarchal Europeans during the Renaissance made some sh*t up and arbitrarily decided that that’s the way it would be for all time. It’s the way it is today, rather different from how it was then, because that’s what has been shown over time to work, what gets the best results right now.
Science is an almost Darwinian entity, evolving over time: methodologies and philosophical underpinnings that work are adopted and retained, and the ones that turn out not to are ditched. Science isn’t perfect, and it probably never will be, but it is progressive. And it’s the only human endeavor designed from the bottom up to be internally self-correcting.
Despite the occasional fraud or fabrication, the truth prevails. While individual scientists are no more paragons of moral virtue than the rest of us, science as a whole is self-policing. Because scientists like to try to dismantle each other’s theories, if one scientist isn’t honest, others will be. Propose a phony theory of astrophysics, and it will be exposed by a rival. In almost every instance, fraud or error in science is uncovered and vigorously called out by scientists themselves.
Any useful theory in science can have one, several, or often many more supporting ideas, each serving the purpose of a predictor (more properly, a hypothesis). You need observational facts as well as logic to round out a theory, since it’s a bad idea to try to theorize on an empty mind, but even this just isn’t enough: you have to be able to go a wee bit further than what factoids you know.
To be of any worth, a theory should meet a set of conditions called the Criteria of Adequacy: specifically, the five that for purposes of this post will be known as Testability, Fruitfulness, Scope, Simplicity, and Conservatism. We’ll deal with each of them in turn…
…One great way to tell genuine scientific theories from pseudoscientific ones is whether or not they can be tested, and any functional hypothesis within a theory must have this property in order to be worth anything; if it doesn’t, well, it just doesn’t measure up as science.
Karl Popper’s idea that any scientific theory has to be testable to be valid was mostly sound, though there was a problem with his use of the word falsifiability: strict, conclusive falsification or verification, final proof nay or yea, isn’t possible in science. There is no way to be certain that new data won’t turn up in the future that could refute a hypothesis, and you can always rescue any hypothesis in spite of the evidence by toying around here and there with the theory it belongs to. That, and almost any new theory is already seemingly refuted by a lot of the data available at the time it is first conceived.
Hypotheses can’t be tested all by their lonesome, only together with the others that make up the basic theory they are part of. Thus, even ‘reductionistic’ hypothesis-testing is holistic in the truest possible sense, since it is done in bundles of hypotheses…
Scientifically functional hypotheses should go further than the predictions made by the theory they’re meant to support. A hypothesis that doesn’t is what’s known as an ad hoc hypothesis; ad hoc (because my evil self is gonna go all Latin on you) means ‘for this case only,’ and a grunchload of ad hoc hypotheses in a theory is a really good indicator that it is pseudoscience.
Hypotheses let us predict things by telling us what we should observe under what set of conditions, in order to provisionally confirm or confute them. Ad hoc hypotheses, on the other hand, don’t improve upon our understanding by telling us anything we don’t already know. A given hypothesis is of no scientific value if it cannot be tested against that most heinous of taskmasters, reality. If a hypothesis makes predictions on what we can and should observe that its own base theory doesn’t, or can’t, then it’s testable.
Let’s look at a sample hypothesis, the pixie hypothesis of home computer networks, which states that when one boots up the network, tiny pixies living in the computer, DSL modem, and router flit around at near-light speeds inside the machinery and carry signals between the different circuits to make the computer work, and fly around at light speed outside the machinery to carry Wi-Fi signals to all the laptops in the network.
As mentioned earlier, there can be any number of hypotheses in a theory, such as the blue LED pixie hypothesis, the green LED pixie hypothesis, the LCD screen pixie hypothesis, and so on, but the pixie hypothesis’ usefulness for scientific purposes depends on what it tells us about pixies, what it predicts we should observe.
Referring back to the base theory and trying to prove or disprove the existence of the pixies by booting up the network does us no good, for this is a tautology — circular reasoning — and the very thing that the pixie hypothesis is meant to explain. It’s obvious that we have to go beyond the basic theory.
Now if this hypothesis tells us that the pixies are visible or tangible or audible, we can just look inside the casing of the computer and network hardware to see or feel or listen around for signs of the pixies. If the hypothesis tells us that they are normally intangible, invisible, and silent, but can be seen, touched, or heard when the computer’s custom-built souped-up liquid coolant system is in overdrive, we can crank up the coolant system to make them visible. This hypothesis does us no good, though, and is not testable, if it says that the pixies are always invisible, intangible, and produce no sound, not even the chattering of little pixie teeth induced by the chill of the supercooled computer.
Yes, I know — that was silly. It’s a general rule that to qualify as scientifically interesting — and valid — any hypothesis must observably predict more than what the theory it belongs to does, assuming that all other properties of the hypothesis are the same in worth.
But testability isn’t the only important factor, since we impart more worth to some hypotheses than others. We need to take more than just testability into account, and the next criterion to be considered is…
…which is a valid condition of a still-worthwhile hypothesis, and may suffice to rescue it even in the face of contrary evidence, since it lets said hypothesis successfully predict new observational data and open up, often without any initial foresight, entirely unexpected lines of research. If a hypothesis predicts more new and unexpected findings than others, all other factors being roughly equal in importance, then it is the better one.
Oddly enough, this is true even if a hypothesis is tested and found to be false. Even in such a case, an incorrect but both interesting and useful, and therefore fruitful hypothesis can sometimes serendipitously lead to new discoveries, because of a number of factors, such as the researcher’s imagination, observational skill, and ability to take advantage of opportunities thrown their way by the winds of random happenstance.
But there are also fields of study that qualify as degenerating research programs, involving theories and hypotheses that are most obviously not fruitful: highly unproductive where pioneering research is concerned, predicting little in the way of new findings even when they aren’t severely limited in the phenomena they study, and largely unsuccessful in their predictions at best. And no, post hoc rationalizations and shoehorned postdictions don’t count.
Parapsychology is a good example of one such field, as it has never succeeded in predicting or actually revealing any new and unexpected observations: no practical applications for either ESP or PK, and no new facts, excepting ingeniously contrived excuses as to why even its most cutting-edge research protocols don’t independently replicate when non-believers in psi are involved in the experiment.
Even to this day, after over 130 years of research, it is riddled with ad hoc hypotheses, such as the decline effect, the observer effect, psi-missing, and even bizarre claims of the retroactive skepticism of readers of parapsychology journals reaching back through time to affect (previously) successful experiments in the past.
In fact, despite largely unsuccessful attempts to co-opt quantum mechanics and other poorly-understood ideas of bleeding-edge physics for the purpose, such as zero-point energy fields and string theory, parapsychology still lacks a sound consensus on any coherent theoretical underpinnings.
Most of the claims of parapsychology violate much of what we can honestly say is currently known in biology, physics, and psychology, three fields that it would have revolutionized had it been as successful as some of its advocates sometimes claim, and as successful as its pioneers would have wished it to become, given the time it’s had.
This is not to say that Psi violates laws of nature in any absolute sense, but it does appear to violate those laws as we presently understand them. Our understanding of these laws may indeed be incorrect, or incomplete, but unless parapsychologists can identify the ones that are, and demonstrate new laws with observational data that explain the universe better than the current ones, we have no good cause to suspect that currently known laws are wrong.
This is a crucial component of any theory with wide applications: how much of our understanding of what it describes the theory can organize into one convenient package. This also has the bennies of reducing the probability of the theory being wrong. The superior theory is the one that predicts and explains the widest range of phenomena, all other factors being equal in importance.
In my Gods of Terra science fiction setting, the discovery of Kurtz-Dunar Hypermatrix Theory (or KDHT for short) finally unified the older theoretical paradigms of Quantum Mechanics and Einsteinian Relativity into a fully integrated, coherent whole, incorporating a more precise and deeper understanding of the four forces they dealt with (gravity, electromagnetism, and the strong and weak nuclear forces), plus the Cosmological Constant, or dark energy, and all of the various phenomena they governed, in addition to resolving the conflicts that had arisen between its predecessor theories.
KDHT was a distant descendant of String Theory, but one that arose when the technology of the day was up to the task of testing its predictions, which finally allowed humanity, and any similarly developed technological species, access to the Superforce and its technological applications, using it to more precisely manipulate its component forces and the phenomena under their purview. It also had, in addition to the virtues of Testability, Fruitfulness and Scope, that of…
…which deals with a theory’s elegance and logical consistency. Generally, assuming everything else being about the same, the theory with the greatest logical coherence and the fewest unnecessary assumptions is the better.
Going back to our last example, Raoul Kurtz and Ranan Dunar’s highly successful Hypermatrix Theory was especially liked by its co-founders because of its parsimony and elegance, since not only was their idea tested and provisionally verified shortly after its conception, not only did it lead to new and surprising avenues of research, not only did it allow humanity relatively easy access to interstellar travel, cheap surface-to-orbit transit and biologically friendly long-duration space voyages through its tremendous applicability to a wide range of phenomena, its simplicity allowed for fewer possible ways to falsify it, thus making it more likely to be true when first formulated.
Simplicity allowed this “theory of almost everything” to stand apart from its more cumbersome competitors, and this criterion has been justly esteemed in the real world since the days of the Ionian Awakening in classical Greece, starting historically with Thales of Miletus.
You’ve likely noticed how hypotheses explain what they do by postulating the existence of certain things, and simplicity tells us to apply the rule of thumb called Occam’s razor, which states that ‘Entities should not be multiplied without necessity.’
Assuming the existence of something without a really good reason is simply not a logical thing to do.
But even the revolutionary impact of Hypermatrix Theory, new as it was, also had to abide by one more criterion, the final one in this post, that of…
…which deals with a character of sound scientific hypotheses concerned with the consistency of new ideas with prior knowledge.
This is an important feature for what we can honestly say we know, and a ginormous red flag should pop up in one’s head about any sort of claims that conflict with much of what we have good reason to think we know, especially if what we know at present results in the creation of technologies and techniques that actually work, like the computer server that hosts this blog.
Unthinking acceptance of inconsistent ideas erodes what we know and forces us to reject it without sound reason. The plausibility of ideas that violate Conservatism is probably not very high when they go against applications of established knowledge that have real practical benefits.
Overall, a more conservative hypothesis is more plausible, more useful, and most closely fits previous valid claims to what we know, provided other criteria are of equal standing.
Even though KDHT led to a new and more powerful understanding of the universe, allowing mankind to tap the Superforce and spread across interstellar space, the properties of the Superforce, while some were specific to it, did not contravene those of its sub-forces, nor violate the new, deeper understanding of the older Quantum and Relativity theories. For example: Superforce radiation does not exceed the speed of light, traveling at roughly 300,000 kilometers per second in a vacuum; it falls off in strength over distance in accordance with the inverse-square law; when sublimating into any of its component forces, it obeys all of their physical properties; and it obeys Einstein’s E=mc² and all of the laws of thermodynamics. It does not allow one to violate physical laws that still enjoy empirical support in the science of the Gods of Terra setting, instead allowing one to make use of laws not previously known, or otherwise poorly understood at best.
However, not all hypotheses are of equal worth, and it’s rational to accept an idea that doesn’t abide by one criterion as long as it abides by others.
Much to my Troythuluness’s regret, there is no completely ironclad way to tell when any criterion should be outranked by others, and no formal methodology for applying them. There is no known way to precisely measure the various elements of a hypothesis, and no known means by which a formal ranking system may be applied to any of them.
We just might, for example, conclude that Conservatism should have a greater rank than, say, Fruitfulness, if the idea under consideration has a relatively narrow scope. Or Conservatism may be outranked by Simplicity and Scope, in particular if said hypothesis has a great deal of the latter, though Testability is a must.
Hypothesis selection is not a strict, mechanistic process involving rigid logic. Like any process of decision-making, like the proceedings of a court of law, much less the court of science, it requires the exercise of our capacity for sound judgment, employing methods themselves not very amenable to formal conventions. But this process isn’t completely subjective either: there are things that we can’t easily gauge that are nonetheless objective.
For example, it is not possible to strictly delineate the exact cut-off point at which light becomes dark, or at which the wavelength and frequency of red light become those of orange light, though it would be absurd to claim that these things cannot be distinguished from each other; the difference between the extreme ends of these spectra, whether wavelengths of light, or light and dark, is objective as far as it goes.
Since most distinctions range along a continuum, with a fuzzy but real difference rather than a strict split, it would not be rational to argue that because there is no sharp demarcation between light and dark, the difference between them does not exist and they are therefore the same. To suppose this is highly specious reasoning: the commission of the False Continuum fallacy.
It is also wrong to believe, for example, that spontaneous generation, alchemy, phrenology, vitalism, or luminiferous ether theory are still valid scientific ideas even if they were at one point. And I know of no diplomatic way to say this: To steadfastly adhere to a claim of fact, belief system or doctrine that isn’t supported by any of the criteria discussed in this post is to hold irrational views. Fnord.
(How To Think About Weird Things: Critical Thinking For A New Age, 4th Edition (pp. 187-197) by Theodore Schick, Jr. & Lewis Vaughn)
(The Art of Scientific Investigation, First Printing (pp. 56-71) by W.I.B. Beveridge)