NOTE: The following is taken from the book, Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, pp. 58-61. It is the last section of the third chapter, entitled "How Thinking Goes Wrong: 25 Fallacies That Lead Us to Believe Weird Things":
Effort Inadequacies and the Need for Certainty, Control, and Simplicity
Most of us, most of the time, want certainty, want to control our environment, and want nice, neat, simple explanations. All this may have some evolutionary basis, but in a multifarious society with complex problems, these characteristics can radically oversimplify reality and interfere with critical thinking and problem solving. For example, I believe that paranormal beliefs and pseudoscientific claims flourish in market economies in part because of the uncertainty of the marketplace. According to James Randi, after communism collapsed in Russia there was a significant increase in such beliefs. Not only are the people now freer to try to swindle each other with scams and rackets but many truly believe they have discovered something concrete and significant about the nature of the world. Capitalism is a lot less stable a social structure than communism. Such uncertainties lead the mind to look for explanations for the vagaries and contingencies of the market (and life in general), and the mind often takes a turn toward the supernatural and paranormal.
Scientific and critical thinking does not come naturally. It takes training, experience, and effort, as Alfred Mander explained in his Logic for the Millions: “Thinking is skilled work. It is not true that we are naturally endowed with the ability to think clearly and logically—without learning how, or without practicing. People with untrained minds should no more expect to think clearly and logically than people who have never learned and never practiced can expect to find themselves good carpenters, golfers, bridge players, or pianists” (1947, p. vii). We must always work to suppress our need to be absolutely certain and in total control and our tendency to seek the simple and effortless solution to a problem. Now and then the solutions may be simple, but usually they are not.
All critical and scientific thinking is, in a fashion, problem solving. There are numerous psychological disruptions that cause inadequacies in problem solving. Psychologist Barry Singer has demonstrated that when people are given the task of selecting the right answer to a problem after being told whether particular guesses are right or wrong, they:
Immediately form a hypothesis and look only for examples to confirm it.
Do not seek evidence to disprove the hypothesis.
Are very slow to change the hypothesis even when it is obviously wrong.
If the information is too complex, adopt overly simple hypotheses or strategies for solutions.
If there is no solution, if the problem is a trick and “right” and “wrong” is given at random, form hypotheses about coincidental relationships they observed. Causality is always found. (Singer and Abell 1981, p. 18)
If this is the case with humans in general, then we all must make the effort to overcome these inadequacies in solving the problems of science and of life.
Ideological Immunity, or the Planck Problem
In day-to-day life, as in science, we all resist fundamental paradigm change. Social scientist Jay Stuart Snelson calls this resistance an ideological immune system: “educated, intelligent, and successful adults rarely change their most fundamental presuppositions” (1993, p. 54). According to Snelson, the more knowledge individuals have accumulated, and the more well-founded their theories have become (and remember, we all tend to look for and remember confirmatory evidence, not counterevidence), the greater the confidence in their ideologies. The consequence of this, however, is that we build up an “immunity” against new ideas that do not corroborate previous ones. Historians of science call this the Planck Problem, after physicist Max Planck, who made this observation on what must happen for innovation to occur in science: “An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out and that the growing generation is familiarized with the idea from the beginning” (1936, p. 97).
Psychologist David Perkins conducted an interesting correlational study in which he found a strong positive correlation between intelligence (measured by a standard IQ test) and the ability to give reasons for taking a point of view and defending that position; he also found a strong negative correlation between intelligence and the ability to consider other alternatives. That is, the higher the IQ, the greater the potential for ideological immunity. Ideological immunity is built into the scientific enterprise, where it functions as a filter against potentially overwhelming novelty. As historian of science I. B. Cohen explained, “New and revolutionary systems of science tend to be resisted rather than welcomed with open arms, because every successful scientist has a vested intellectual, social, and even financial interest in maintaining the status quo. If every revolutionary new idea were welcomed with open arms, utter chaos would be the result” (1985, p. 35).
In the end, history rewards those who are “right” (at least provisionally). Change does occur. In astronomy, the Ptolemaic geocentric universe was slowly displaced by Copernicus’s heliocentric system. In geology, George Cuvier’s catastrophism was gradually wedged out by the more soundly supported uniformitarianism of James Hutton and Charles Lyell. In biology, Darwin’s evolution theory superseded creationist belief in the immutability of species. In Earth history, Alfred Wegener’s idea of continental drift took nearly a half century to overcome the received dogma of fixed and stable continents. Ideological immunity can be overcome in science and in daily life, but it takes time and corroboration.
Skeptics have the very human tendency to relish debunking what we already believe to be nonsense. It is fun to recognize other people’s fallacious reasoning, but that’s not the whole point. As skeptics and critical thinkers, we must move beyond our emotional responses because by understanding how others have gone wrong and how science is subject to social control and cultural influences, we can improve our understanding of how the world works. It is for this reason that it is so important for us to understand the history of both science and pseudoscience. If we see the larger picture of how these movements evolve and figure out how their thinking went wrong, we won’t make the same mistakes. The seventeenth-century Dutch philosopher Baruch Spinoza said it best: “I have made a ceaseless effort not to ridicule, not to bewail, not to scorn human actions, but to understand them.”
NOTE: The following is taken from the book, Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, pp. 55-58. It is the third section of the third chapter, entitled "How Thinking Goes Wrong: 25 Fallacies That Lead Us to Believe Weird Things":
Emotive Words and False Analogies
Emotive words are used to provoke emotion and sometimes to obscure rationality. They can be positive emotive words—motherhood, America, integrity, honesty. Or they can be negative—rape, cancer, evil, communist. Likewise, metaphors and analogies can cloud thinking with emotion or steer us onto a side path. A pundit talks about inflation as “the cancer of society” or industry “raping the environment.” In his 1992 Democratic nomination speech, Al Gore constructed an elaborate analogy between the story of his sick son and America as a sick country. Just as his son, hovering on the brink of death, was nursed back to health by his father and family, America, hovering on the brink of death after twelve years of Reagan and Bush, was to be nurtured back to health under the new administration. Like anecdotes, analogies and metaphors do not constitute proof. They are merely tools of rhetoric.
Ad Ignorantiam
This is an appeal to ignorance or lack of knowledge and is related to the burden of proof and unexplained is not inexplicable fallacies, where someone argues that if you cannot disprove a claim it must be true. For example, if you cannot prove that there isn’t any psychic power, then there must be. The absurdity of this argument comes into focus if one argues that if you cannot prove that Santa Claus does not exist, then he must exist. You can argue the opposite in a similar manner. If you cannot prove Santa Claus exists, then he must not exist. In science, belief should come from positive evidence in support of a claim, not lack of evidence for or against a claim.
Ad Hominem and Tu Quoque
Literally “to the man” and “you also,” these fallacies redirect the focus from thinking about the idea to thinking about the person holding the idea. The goal of an ad hominem attack is to discredit the claimant in hopes that it will discredit the claim. Calling someone an atheist, a communist, a child abuser, or a neo-Nazi does not in any way disprove that person’s statement. It might be helpful to know whether someone is of a particular religion or holds a particular ideology, in case this has in some way biased the research, but refuting claims must be done directly, not indirectly. If Holocaust deniers, for example, are neo-Nazis or anti-Semites, this would certainly guide their choice of which historical events to emphasize or ignore. But if they are making the claim, for example, that Hitler did not have a master plan for the extermination of European Jewry, the response “Oh, he is saying that because he is a neo-Nazi” does not refute the argument. Whether Hitler had a master plan or not is a question that can be settled historically. Similarly for tu quoque. If someone accuses you of cheating on your taxes, the answer “Well, so do you” is no proof one way or the other.
Hasty Generalization
In logic, the hasty generalization is a form of improper induction. In life, it is called prejudice. In either case, conclusions are drawn before the facts warrant it. Perhaps because our brains evolved to be constantly on the lookout for connections between events and causes, this fallacy is one of the most common of all. A couple of bad teachers mean a bad school. A few bad cars mean that brand of automobile is unreliable. A handful of members of a group are used to judge the entire group. In science, we must carefully gather as much information as possible before announcing our conclusions.
Overreliance on Authorities
We tend to rely heavily on authorities in our culture, especially if the authority is considered to be highly intelligent. The IQ score has acquired nearly mystical proportions in the last half century, but I have noticed that belief in the paranormal is not uncommon among Mensa members (the high-IQ club for those in the top 2 percent of the population); some even argue that their “Psi-Q” is also superior. Magician James Randi is fond of lampooning authorities with Ph.D.s—once they are granted the degree, he says, they find it almost impossible to say two things: “I don’t know” and “I was wrong.” Authorities, by virtue of their expertise in a field, may have a better chance of being right in that field, but correctness is certainly not guaranteed, and their expertise does not necessarily qualify them to draw conclusions in other areas.
In other words, who is making the claim makes a difference. If it is a Nobel laureate, we take note because he or she has been right in a big way before. If it is a discredited scam artist, we give a loud guffaw because he or she has been wrong in a big way before. While expertise is useful for separating the wheat from the chaff, it is dangerous in that we might either (1) accept a wrong idea just because it was supported by someone we respect (false positive) or (2) reject a right idea just because it was supported by someone we disrespect (false negative). How do you avoid such errors? Examine the evidence.
Either-Or
Also known as the fallacy of negation or the false dilemma, this is the tendency to dichotomize the world so that if you discredit one position, the observer is forced to accept the other. This is a favorite tactic of creationists, who claim that life either was divinely created or evolved. Then they spend the majority of their time discrediting the theory of evolution so that they can argue that since evolution is wrong, creationism must be right. But it is not enough to point out weaknesses in a theory. If your theory is indeed superior, it must explain both the “normal” data explained by the old theory and the “anomalous” data not explained by the old theory. A new theory needs evidence in favor of it, not just against the opposition.
Circular Reasoning
Also known as the fallacy of redundancy, begging the question, or tautology, this occurs when the conclusion or claim is merely a restatement of one of the premises. Christian apologetics is filled with tautologies: Is there a God? Yes. How do you know? Because the Bible says so. How do you know the Bible is correct? Because it was inspired by God. In other words, God is because God is. Science also has its share of redundancies: What is gravity? The tendency for objects to be attracted to one another. Why are objects attracted to one another? Gravity. In other words, gravity is because gravity is. (In fact, some of Newton’s contemporaries rejected his theory of gravity as being an unscientific throwback to medieval occult thinking.) Obviously, a tautological operational definition can still be useful. Yet, difficult as it is, we must try to construct operational definitions that can be tested, falsified, and refuted.
Reductio ad Absurdum and the Slippery Slope
Reductio ad absurdum is the refutation of an argument by carrying the argument to its logical end and so reducing it to an absurd conclusion. Surely, if an argument’s consequences are absurd, it must be false. This is not necessarily so, though sometimes pushing an argument to its limits is a useful exercise in critical thinking; often this is a way to discover whether a claim has validity, especially if an experiment testing the actual reduction can be run. Similarly, the slippery slope fallacy involves constructing a scenario in which one thing leads ultimately to an end so extreme that the first step should never be taken. For example: Eating Ben & Jerry’s ice cream will cause you to put on weight. Putting on weight will make you overweight. Soon you will weigh 350 pounds and die of heart disease. Eating Ben & Jerry’s ice cream leads to death. Don’t even try it. Certainly eating a scoop of Ben & Jerry’s ice cream may contribute to obesity, which could possibly, in very rare cases, cause death. But the consequence does not necessarily follow from the premise.
NOTE: The following is taken from the book, Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, pp. 48-55. It is the second section of the third chapter, entitled "How Thinking Goes Wrong: 25 Fallacies That Lead Us to Believe Weird Things":
Anecdotes Do Not Make a Science
Anecdotes—stories recounted in support of a claim—do not make a science. Without corroborative evidence from other sources, or physical proof of some sort, ten anecdotes are no better than one, and a hundred anecdotes are no better than ten. Anecdotes are told by fallible human storytellers. Farmer Bob in Puckerbrush, Kansas, may be an honest, church-going, family man not obviously subject to delusions, but we need physical evidence of an alien spacecraft or alien bodies, not just a story about landings and abductions at 3:00 A.M. on a deserted country road. Likewise with many medical claims. Stories about how your Aunt Mary’s cancer was cured by watching Marx brothers movies or taking a liver extract from castrated chickens are meaningless. The cancer might have gone into remission on its own, which some cancers do; or it might have been misdiagnosed; or, or, or…. What we need are controlled experiments, not anecdotes. We need 100 subjects with cancer, all properly diagnosed and matched. Then we need 25 of the subjects to watch Marx brothers movies, 25 to watch Alfred Hitchcock movies, 25 to watch the news, and 25 to watch nothing. Then we need to deduce the average rate of remission for this type of cancer and then analyze the data for statistically significant differences between the groups. If there are statistically significant differences, we better get confirmation from other scientists who have conducted their own experiments separate from ours before we hold a press conference to announce the cure for cancer.
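The group comparison the passage calls for can be sketched with a chi-square test of independence. The remission counts below are invented purely for illustration; only the design (four groups of 25, as in the text) comes from the passage:

```python
# Hypothetical remission counts for the four groups described in the
# text (25 subjects each). These numbers are invented for illustration.
groups = {
    "Marx Brothers": 7,
    "Hitchcock": 5,
    "News": 4,
    "Nothing": 4,
}
n_per_group = 25

# Observed 4x2 contingency table: (remission, no remission) per group.
observed = [(r, n_per_group - r) for r in groups.values()]

total_remissions = sum(r for r, _ in observed)
total_subjects = n_per_group * len(groups)

# Expected counts under the null hypothesis that the movies make no
# difference: every group shares the overall remission rate.
expected_remission = n_per_group * total_remissions / total_subjects
expected_no = n_per_group - expected_remission

# Pearson's chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum(
    (r - expected_remission) ** 2 / expected_remission
    + (no - expected_no) ** 2 / expected_no
    for r, no in observed
)

# Critical value for df = (4 - 1) * (2 - 1) = 3 at alpha = 0.05.
CRITICAL_VALUE = 7.815
significant = chi2 > CRITICAL_VALUE
print(f"chi-square = {chi2:.2f}, significant at 0.05? {significant}")
```

With these made-up counts the statistic is well below the critical value, so the apparent edge of the Marx brothers group would not justify a press conference—exactly the discipline the passage is urging.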
Scientific Language Does Not Make a Science
Dressing up a belief system in the trappings of science by using scientistic language and jargon, as in “creation-science,” means nothing without evidence, experimental testing, and corroboration. Because science has such a powerful mystique in our society, those who wish to gain respectability but do not have evidence try to do an end run around the missing evidence by looking and sounding “scientific.” Here is a classic example from a New Age column in the Santa Monica News: “This planet has been slumbering for eons and with the inception of higher energy frequencies is about to awaken in terms of consciousness and spirituality. Masters of limitation and masters of divination use the same creative force to manifest their realities, however, one moves in a downward spiral and the latter moves in an upward spiral, each increasing the resonant vibration inherent in them.” How’s that again? I have no idea what this means, but it has the language components of a physics experiment: “higher energy frequencies,” “downward and upward spirals,” and “resonant vibration.” Yet these phrases mean nothing because they have no precise and operational definitions. How do you measure a planet’s higher energy frequencies or the resonant vibration of masters of divination? For that matter, what is a master of divination?
Bold Statements Do Not Make Claims True
Something is probably pseudoscientific if enormous claims are made for its power and veracity but supportive evidence is as scarce as hen’s teeth. L. Ron Hubbard, for example, opens his Dianetics: The Modern Science of Mental Health with this statement: “The creation of Dianetics is a milestone for man comparable to his discovery of fire and superior to his invention of the wheel and arch” (in Gardner 1952, p. 263). Sexual energy guru Wilhelm Reich called his theory of Orgonomy “a revolution in biology and psychology comparable to the Copernican Revolution” (in Gardner 1952, p. 259). I have a thick file of papers and letters from obscure authors filled with such outlandish claims (I call it the “Theories of Everything” file). Scientists sometimes make this mistake, too, as we saw at 1:00 P.M. on March 23, 1989, when Stanley Pons and Martin Fleischmann held a press conference to announce to the world that they had made cold nuclear fusion work. Gary Taubes’s excellent book about the cold fusion debacle, appropriately named Bad Science (1993), thoroughly examines the implications of this incident. Maybe fifty years of physics will be proved wrong by one experiment, but don’t throw out your furnace until that experiment has been replicated. The moral is that the more extraordinary the claim, the more extraordinarily well-tested the evidence must be.
Heresy Does Not Equal Correctness
They laughed at Copernicus. They laughed at the Wright brothers. Yes, well, they laughed at the Marx brothers. Being laughed at does not mean you are right. Wilhelm Reich compared himself to Peer Gynt, the unconventional genius out of step with society, and misunderstood and ridiculed as a heretic until proven right: “Whatever you have done to me or will do to me in the future, whether you glorify me as a genius or put me in a mental institution, whether you adore me as your savior or hang me as a spy, sooner or later necessity will force you to comprehend that I have discovered the laws of the living” (in Gardner 1952, p. 259). Reprinted in the January/February 1996 issue of the Journal of Historical Review, the organ of Holocaust denial, is a famous quote from the nineteenth-century German philosopher Arthur Schopenhauer, which is quoted often by those on the margins: “All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self-evident.” But “all truth” does not pass through these stages. Lots of true ideas are accepted without ridicule or opposition, violent or otherwise. Einstein’s theory of relativity was largely ignored until 1919, when experimental evidence proved him right. He was not ridiculed, and no one violently opposed his ideas. The Schopenhauer quote is just a rationalization, a fancy way for those who are ridiculed or violently opposed to say, “See, I must be right.” Not so.
History is replete with tales of the lone scientist working in spite of his or her peers and flying in the face of the doctrines of his or her own field of study. Most of them turned out to be wrong and we do not remember their names. For every Galileo shown the instruments of torture for advocating a scientific truth, there are a thousand (or ten thousand) unknowns whose “truths” never pass muster with other scientists. The scientific community cannot be expected to test every fantastic claim that comes along, especially when so many are logically inconsistent. If you want to do science, you have to learn to play the game of science. This involves getting to know the scientists in your field, exchanging data and ideas with colleagues informally, and formally presenting results in conference papers, peer-reviewed journals, books, and the like.
Burden of Proof
Who has to prove what to whom? The person making the extraordinary claim has the burden of proving to the experts and to the community at large that his or her belief has more validity than the one almost everyone else accepts. You have to lobby for your opinion to be heard. Then you have to marshal experts on your side so you can convince the majority to support your claim over the one that they have always supported. Finally, when you are in the majority, the burden of proof switches to the outsider who wants to challenge you with his or her unusual claim. Evolutionists had the burden of proof for half a century after Darwin, but now the burden of proof is on creationists. It is up to creationists to show why the theory of evolution is wrong and why creationism is right, and it is not up to evolutionists to defend evolution. The burden of proof is on the Holocaust deniers to prove the Holocaust did not happen, not on Holocaust historians to prove that it did. The rationale for this is that mountains of evidence prove that both evolution and the Holocaust are facts. In other words, it is not enough to have evidence. You must convince others of the validity of your evidence. And when you are an outsider this is the price you pay, regardless of whether you are right or wrong.
Rumors Do Not Equal Reality
Rumors begin with “I read somewhere that…” or “I heard from someone that….” Before long the rumor becomes reality, as “I know that…” passes from person to person. Rumors may be true, of course, but usually they are not. They do make for great tales, however. There is the “true story” of the escaped maniac with a prosthetic hook who haunts the lovers’ lanes of America. There is the legend of “The Vanishing Hitchhiker,” in which a driver picks up a hitchhiker who vanishes from his car along with his jacket; locals then tell the driver that his hitchhiking woman had died that same day the year before, and eventually he discovers his jacket on her grave. Such stories spread fast and never die.
Caltech historian of science Dan Kevles once told a story he suspected was apocryphal at a dinner party. Two students did not get back from a ski trip in time to take their final exam because the activities of the previous day had extended well into the night. They told their professor that they had gotten a flat tire, so he gave them a makeup final the next day. Placing the students in separate rooms, he asked them just two questions: (1) “For 5 points, what is the chemical formula for water?” (2) “For 95 points, which tire?” Two of the dinner guests had heard a vaguely similar story. The next day I repeated the story to my students and before I got to the punch line, three of them simultaneously blurted out, “Which tire?” Urban legends and persistent rumors are ubiquitous. Here are a few:
The secret ingredient in Dr. Pepper is prune juice.
A woman accidentally killed her poodle by drying it in a microwave oven.
Paul McCartney died and was replaced by a look-alike.
Giant alligators live in the sewers of New York City.
The moon landing was faked and filmed in a Hollywood studio.
George Washington had wooden teeth.
The number of stars inside the “P” on Playboy magazine’s cover indicates how many times publisher Hugh Hefner had sex with the centerfold.
A flying saucer crashed in New Mexico and the bodies of the extraterrestrials are being kept by the Air Force in a secret warehouse.
How many have you heard … and believed?
None have ever been confirmed.
Unexplained Is Not Inexplicable
Many people are overconfident enough to think that if they cannot explain something, it must be inexplicable and therefore a true mystery of the paranormal. An amateur archeologist declares that because he cannot figure out how the pyramids were built, they must have been constructed by space aliens. Even those who are more reasonable at least think that if the experts cannot explain something, it must be inexplicable. Feats such as the bending of spoons, firewalking, or mental telepathy are often thought to be of a paranormal or mystical nature because most people cannot explain them. When they are explained, most people respond, “Yes, of course” or “That’s obvious once you see it.” Firewalking is a case in point. People speculate endlessly about supernatural powers over pain and heat, or mysterious brain chemicals that block the pain and prevent burning. The simple explanation is that the capacity of light and fluffy coals to contain heat is very low, and the conductivity of heat from the light and fluffy coals to your feet is very poor. As long as you don’t stand around on the coals, you will not get burned. (Think of a cake in a 450°F oven. The air, the cake, and the pan are all at 450°F, but only the metal pan will burn your hand. Air has very low heat capacity and also low conductivity, so you can put your hand in the oven long enough to touch the cake and pan. The heat capacity of the cake is a lot higher than air, but since it has low conductivity you can briefly touch it without getting burned. The metal pan has a heat capacity similar to the cake, but high conductivity too. If you touch it, you will get burned.) This is why magicians do not tell their secrets. Most of their tricks are, in principle, relatively simple (although many are extremely difficult to execute) and knowing the secret takes the magic out of the trick.
There are many genuine unsolved mysteries in the universe and it is okay to say, “We do not yet know but someday perhaps we will.” The problem is that most of us find it more comforting to have certainty, even if it is premature, than to live with unsolved or unexplained mysteries.
Failures Are Rationalized
In science, the value of negative findings—failures—cannot be overemphasized. Usually they are not wanted, and often they are not published. But most of the time failures are how we get closer to truth. Honest scientists will readily admit their errors, but all scientists are kept in line by the fact that their fellow scientists will publicize any attempt to fudge. Not pseudoscientists. They ignore or rationalize failures, especially when exposed. If they are actually caught cheating—not a frequent occurrence—they claim that their powers usually work but not always, so when pressured to perform on television or in a laboratory, they sometimes resort to cheating. If they simply fail to perform, they have ready any number of creative explanations: too many controls in an experiment cause negative results; the powers do not work in the presence of skeptics; the powers do not work in the presence of electrical equipment; the powers come and go, and this is one of those times they went. Finally, they claim that if skeptics cannot explain everything, then there must be something paranormal; they fall back on the unexplained is not inexplicable fallacy.
After-the-Fact Reasoning
Also known as “post hoc, ergo propter hoc,” literally “after this, therefore because of this.” At its basest level, it is a form of superstition. The baseball player does not shave and hits two home runs. The gambler wears his lucky shoes because he has won wearing them in the past. More subtly, scientific studies can fall prey to this fallacy. In 1993 a study found that breast-fed children have higher IQ scores. There was much clamor over what ingredient in mother’s milk increased intelligence. Mothers who bottle-fed their babies were made to feel guilty. But soon researchers began to wonder whether breast-fed babies are attended to differently. Maybe nursing mothers spend more time with their babies and motherly vigilance was the cause behind the differences in IQ. As Hume taught us, the fact that two events follow each other in sequence does not mean they are connected causally. Correlation does not mean causation.
Coincidence
In the paranormal world, coincidences are often seen as deeply significant. “Synchronicity” is invoked, as if some mysterious force were at work behind the scenes. But I see synchronicity as nothing more than a type of contingency—a conjuncture of two or more events without apparent design. When the connection is made in a manner that seems impossible according to our intuition of the laws of probability, we have a tendency to think something mysterious is at work.
But most people have a very poor understanding of the laws of probability. A gambler will win six in a row and then think he is either “on a hot streak” or “due to lose.” Two people in a room of thirty people discover that they have the same birthday and conclude that something mysterious is at work. You go to the phone to call your friend Bob. The phone rings and it is Bob. You think, “Wow, what are the chances? This could not have been a mere coincidence. Maybe Bob and I are communicating telepathically.” In fact, such coincidences are not coincidences under the rules of probability. The gambler has predicted both possible outcomes, a fairly safe bet! The probability that two people in a room of thirty people will have the same birthday is .71. And you have forgotten how many times Bob did not call under such circumstances, or someone else called, or Bob called but you were not thinking of him, and so on. As the behavioral psychologist B. F. Skinner proved in the laboratory, the human mind seeks relationships between events and often finds them even when they are not present. Slot machines are based on Skinnerian principles of intermittent reinforcement. The dumb human, like the dumb rat, only needs an occasional payoff to keep pulling the handle. The mind will do the rest.
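The birthday figure quoted above—a probability of .71 that two of thirty people share a birthday—can be checked with a few lines of arithmetic, ignoring leap years:

```python
# Probability that at least two people in a room of n share a birthday,
# computed exactly via the complement: 1 minus the probability that all
# n birthdays are distinct (365-day year, leap years ignored).
def birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

print(f"{birthday_probability(30):.2f}")  # prints 0.71, matching the text
```

The same calculation shows the break-even point is just twenty-three people, which is why the result strikes most people as mysterious: our intuition expects a room of 183.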
As Aristotle said, “The sum of the coincidences equals certainty.” We forget most of the insignificant coincidences and remember the meaningful ones. Our tendency to remember hits and ignore misses is the bread and butter of the psychics, prophets, and soothsayers who make hundreds of predictions each January 1. First they increase the probability of a hit by predicting mostly generalized sure bets like “There will be a major earthquake in southern California” or “I see trouble for the Royal Family.” Then, next January, they publish their hits and ignore the misses, and hope no one bothers to keep track.
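The prophets' strategy described above rests on simple arithmetic: make enough predictions and at least one "hit" is nearly certain. A minimal sketch, with an invented per-prediction hit probability:

```python
# Probability that at least one of n independent predictions "comes
# true," given each has probability p_hit of being counted a hit.
# The 5% figure below is an invented illustration, not data.
def p_at_least_one_hit(n_predictions: int, p_hit: float) -> float:
    return 1.0 - (1.0 - p_hit) ** n_predictions

# 100 vague predictions, each with only a 5% chance of being scored a hit:
print(f"{p_at_least_one_hit(100, 0.05):.3f}")  # prints 0.994
```

Even with a tiny chance per prediction, a year's worth of prophecies all but guarantees something to publish next January, so long as the misses go unmentioned.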
We must always remember the larger context in which a seemingly unusual event occurs, and we must always analyze unusual events for their representativeness of their class of phenomena. In the case of the “Bermuda Triangle,” an area of the Atlantic Ocean where ships and planes “mysteriously” disappear, there is the assumption that something strange or alien is at work. But we must consider how representative such events are in that area. Far more shipping lanes run through the Bermuda Triangle than its surrounding areas, so accidents and mishaps and disappearances are more likely to happen in the area. As it turns out, the accident rate is actually lower in the Bermuda Triangle than in surrounding areas. Perhaps this area should be called the “Non-Bermuda Triangle.” (See Kusche 1975 for a full explanation of this solved mystery.) Similarly, in investigating haunted houses, we must have a baseline measurement of noises, creaks, and other events before we can say that an occurrence is unusual (and therefore mysterious). I used to hear rapping sounds in the walls of my house. Ghosts? Nope. Bad plumbing. I occasionally hear scratching sounds in my basement. Poltergeists? Nope. Rats. One would be well-advised to first thoroughly understand the probable worldly explanation before turning to other-worldly ones.
NOTE: The following is taken from the book, Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time, pp. 46-48. It is the first section of the 3rd Chapter entitled, How Thinking Goes Wrong: 25 Fallacies That Lead Us to Believe Weird Things:
1. Theory Influences Observations
About the human quest to understand the physical world, physicist and Nobel laureate Werner Heisenberg concluded, “What we observe is not nature itself but nature exposed to our method of questioning.” In quantum mechanics, this notion has been formalized as the “Copenhagen interpretation” of quantum action: “a probability function does not prescribe a certain event but describes a continuum of possible events until a measurement interferes with the isolation of the system and a single event is actualized” (in Weaver 1987, p. 412). The Copenhagen interpretation eliminates the one-to-one correlation between theory and reality. The theory in part constructs the reality. Reality exists independent of the observer, of course, but our perceptions of reality are influenced by the theories framing our examination of it. Thus, philosophers call science theory-laden.
That theory shapes perceptions of reality is true not only for quantum physics but also for all observations of the world. When Columbus arrived in the New World, he had a theory that he was in Asia and proceeded to perceive the New World as such. Cinnamon was a valuable Asian spice, and the first New World shrub that smelled like cinnamon was declared to be it. When he encountered the aromatic gumbo-limbo tree of the West Indies, Columbus concluded it was an Asian species similar to the mastic tree of the Mediterranean. A New World nut was matched with Marco Polo’s description of a coconut. Columbus’s surgeon even declared, based on some Caribbean roots his men uncovered, that he had found Chinese rhubarb. A theory of Asia produced observations of Asia, even though Columbus was half a world away. Such is the power of theory.
2. The Observer Changes the Observed
Physicist John Archibald Wheeler noted, “Even to observe so minuscule an object as an electron, [a physicist] must shatter the glass. He must reach in. He must install his chosen measuring equipment…. Moreover, the measurement changes the state of the electron. The universe will never afterward be the same” (in Weaver 1987, p. 427). In other words, the act of studying an event can change it. Social scientists often encounter this phenomenon. Anthropologists know that when they study a tribe, the behavior of the members may be altered by the fact that they are being observed by an outsider. Subjects in a psychology experiment may alter their behavior if they know what experimental hypotheses are being tested. This is why psychologists use blind and double-blind controls. Lack of such controls is often found in tests of paranormal powers and is one of the classic ways that thinking goes wrong in the pseudosciences. Science tries to minimize and acknowledge the effects of the observation on the behavior of the observed; pseudoscience does not.
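The double-blind control mentioned above can be operationalized quite mechanically. A minimal sketch of the idea (the function and labels are mine, not from the text): subjects are randomly assigned to groups under opaque codes, and the code-to-group key is withheld from both subject and experimenter until the data are collected.

```python
import random

def double_blind_assignment(subject_ids, seed=42):
    """Randomly assign subjects to 'treatment' or 'control' under opaque
    codes. The key would be held by a third party; the experimenter works
    only from the blinded code list until unblinding.
    Assumes an even number of subjects, for a balanced design."""
    rng = random.Random(seed)
    groups = ["treatment", "control"] * (len(subject_ids) // 2)
    rng.shuffle(groups)
    key = {f"C{idx:03d}": (sid, grp)
           for idx, (sid, grp) in enumerate(zip(subject_ids, groups))}
    blinded_codes = sorted(key)  # all the experimenter gets to see
    return key, blinded_codes

key, codes = double_blind_assignment([f"subject-{i}" for i in range(10)])
print(codes[:3])  # ['C000', 'C001', 'C002'] -- no group information visible
```

Because neither the subject nor the person recording the data can tell which code means which group, neither party's expectations can systematically color the results.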
3. Equipment Constructs Results
The equipment used in an experiment often determines the results. The size of our telescopes, for example, has shaped and reshaped our theories about the size of the universe. In the twentieth century, Edwin Hubble’s 60- and 100-inch telescopes on Mt. Wilson in southern California for the first time provided enough seeing power for astronomers to distinguish individual stars in other galaxies, thus proving that those fuzzy objects called nebulas that we thought were in our own galaxy were actually separate galaxies. In the nineteenth century, craniometry defined intelligence as brain size and instruments were designed that measured it as such; today intelligence is defined by facility with certain developmental tasks and is measured by another instrument, the IQ test. Sir Arthur Stanley Eddington illustrated the problem with this clever analogy:
Let us suppose that an ichthyologist is exploring the life of the ocean. He casts a net into the water and brings up a fishy assortment. Surveying his catch, he proceeds in the usual manner of a scientist to systematize what it reveals. He arrives at two generalizations:
(1) No sea-creature is less than two inches long.
(2) All sea-creatures have gills.
In applying this analogy, the catch stands for the body of knowledge which constitutes physical science, and the net for the sensory and intellectual equipment which we use in obtaining it. The casting of the net corresponds to observations.
An onlooker may object that the first generalization is wrong. “There are plenty of sea-creatures under two inches long, only your net is not adapted to catch them.” The ichthyologist dismisses this objection contemptuously. “Anything uncatchable by my net is ipso facto outside the scope of ichthyological knowledge, and is not part of the kingdom of fishes which has been defined as the theme of ichthyological knowledge. In short, what my net can’t catch isn’t fish.” (1958, p. 16)
Likewise, what my telescope can’t see isn’t there, and what my test can’t measure isn’t intelligence. Obviously, galaxies and intelligence exist, but how we measure and understand them is highly influenced by our equipment.