
The more informed we are, the more successful we'll be in our decision-making endeavors. But that's only true up to a point: it holds only if the information we've acquired is accurate and truthful. Making good decisions doesn't merely rely on how much information we take in; it also depends on the quality of that information. If what we've instead ingested and accepted is misinformation or disinformation (incorrect information, spread carelessly in the first case and deliberately in the second), then we not only become susceptible to grift and fraud ourselves, but we risk having our minds captured by charismatic charlatans. When that occurs, we can lose everything: money, trust, relationships, and even our mental independence.
This isn't a problem that's new here in 2026; this is a problem as old as humanity itself. When someone is compelling to us, and their arguments are convincing to us, we tend to go along with them, lauding both the idea and the one who puts it forth. We're even more vulnerable if the idea is something that appeals to us emotionally, playing on our fears, hopes, preconceptions, preferences, or ideologies. However, no argument, no matter how well-crafted, can ever turn fiction into fact. It's with this in mind that Carl Sagan, precisely 30 years ago, put forth what is now known as his "baloney detection kit" in his book, The Demon-Haunted World: Science as a Candle in the Dark.
Here are nine timeless lessons we can all take to heart, and apply in our daily lives, when it comes to separating fact from fiction.
1.) Demand independent confirmation of whatever statements are asserted as facts.
In any matter that we consider, we always begin with the common ground of a starting point: with the facts and assumptions that underlie whatever topic we’re investigating. The key to making sure that we’re all on the same page is by stating what those facts and assumptions are up front, and by ensuring that everyone agrees on the truth of the facts being stated. This is only possible if:
- the facts are well-supported and/or well-established,
- the information underlying those facts has been obtained after a comprehensive and scrupulous analysis,
- and that those facts have been independently confirmed, ideally by people or teams who also aren’t stakeholders in the outcomes of those confirmation attempts.
It often turns out, upon closer examination or upon attempted replication, that what was once treated as a “fact” winds up being a much more disputed proposition. A line isn’t always the shortest distance between two points (that’s true only in flat space), black holes don’t evaporate because of particle-antiparticle pairs popping in-and-out of existence, and the far side of the Moon, invisible to all denizens of Earth until the development of spaceflight, doesn’t look similar to the Earth-facing side at all. Facts need to be robustly and responsibly established before they’re used to inform our decision-making process. All too often, especially when we’re eager to reach our preferred conclusion, we accept dubious assertions that are presented as facts without questioning whether this “fact” is actually representative of reality. We must tread cautiously, or we risk fooling ourselves.
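The idea of independent confirmation can be made concrete with a simple consistency check: two measurements of the same quantity, made by independent teams, count as agreeing only if their difference is small compared to their combined uncertainties. Below is a minimal sketch of that test; the numerical values are hypothetical illustrations, not real measurements.

```python
import math

def consistent(a: float, sigma_a: float, b: float, sigma_b: float,
               n_sigma: float = 2.0) -> bool:
    """True if measurements a +/- sigma_a and b +/- sigma_b agree
    to within n_sigma combined standard deviations."""
    combined = math.sqrt(sigma_a**2 + sigma_b**2)
    return abs(a - b) <= n_sigma * combined

# Hypothetical example: two teams independently measure the same quantity.
print(consistent(9.81, 0.02, 9.83, 0.03))  # difference within uncertainties -> True
print(consistent(9.81, 0.02, 9.95, 0.03))  # difference far outside them -> False
```

When two independent results fail a check like this, at least one of them (or one of the stated uncertainties) is wrong, and the "fact" is not yet established.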
2.) Encourage substantive debate from all points of view by those with substantial, relevant expertise.
This is an extremely important point, but one that we again must be very careful of. There is no shortage of debate happening in our modern world, including about issues that ignite our passions. That’s not necessarily a good thing, however. What we want is:
- substantive debate,
- where the underlying facts are accepted by everyone involved,
- where the proponents of different points of view are all knowledgeable experts,
- and where no one is lying, making up facts, engaging in the spreading of misinformation, or attempting to convince an onlooker of an alternative reality.
If Einstein and Bohr disagree over how to interpret our quantum reality, you can have a substantive debate over what it means, because everyone accepts the same facts, everyone involved is a knowledgeable expert, and everyone embraces our shared, measurable reality. However, when we have a widespread expert consensus about an issue, like the safety and utility of water fluoridation, the safety and efficacy of the (2024-era and earlier) childhood vaccination schedule, or the natural origins of SARS-CoV-2, debate only serves to sow doubt about well-established facts.
But we don’t want to undermine the best approximation of reality that human civilization can muster; we want to use all that we know and add in our capacity to reason and think critically to make informed decisions about how to have healthy, successful lives where we work together for the common good of all. That includes knowing when to listen to the signal and when to tune out the noise.
3.) Don’t accept an argument from an authority because that person is an authority. Instead, judge arguments based on the merits of the underlying facts, and how experts scrupulously interpret those facts.
As Carl Sagan noted, even the most vaunted authority you can think of has made many mistakes in the past, and will do so again in the future. But in science, the only authority is the accepted suite of scientific facts and the well-established foundation of everything we’ve learned by applying those facts to our physical reality. There is no one authority figure we can go to and find out whether something is true or not based on what they say; we have to look at the merits of what is being argued and how well the facts support that argument.
Then we have to examine it and scrutinize it across a broad set of criteria.
- Does this argument fit the full suite of facts, or are there inconvenient findings that undermine the argument?
- Is this argument the only game in town, or are there alternative hypotheses that explain at least a large fraction of the agreed-upon facts just as well or better?
- Do the overwhelming majority of experts, independently, all draw and/or accept the same conclusions, and are their reasons for accepting those conclusions well-supported by the data?
It’s vital to remember that in science, all truths about reality are only provisional, representing the state of knowledge at the time. As we learn more, as we uncover new evidence, and as we enhance the full suite of data that we currently possess, a new, superior truth may yet emerge. It’s happened many times in the past, and will inevitably happen again.
4.) Spin as many hypotheses as you can that are consistent with the data. Every possible explanation that isn’t ruled out or contradicted by the already-existing data should be considered, and each hypothesis should be tested and examined as rigorously as possible.
That’s how we do it: how we arrive at our best approximation of a scientific truth. We don’t choose our preferred idea and then look for evidence to support and defend it; although this is a common tactic used when we attempt to convince others to share our point-of-view, it has no place in the scientific enterprise. Instead, we attempt to be as neutral as possible, subjecting all hypotheses to the same strict scrutiny, attempting to falsify or poke holes in any idea by testing it as rigorously as possible.
In science, the key question that we always ask ourselves, when it comes to explaining any physical phenomenon, is “how?”
- How did this happen?
- How come this outcome or set of outcomes occurred, as opposed to any other possibility?
- How did a physical process, step-by-step, lead to the observations and measurements that we made?
It's by considering all plausible answers to these questions, no matter how absurd they may seem, that we steadily improve our picture of reality and how it works. Many ideas that were rejected in the past receive new life upon a surprising new observation; many ideas that are accepted today will be overthrown when a key experimental result demonstrates their insufficiency. What passes for a "scientific truth" today may later be demoted to a crude and limited approximation that only applies under special circumstances, just as Newton's laws are approximations to Einstein's. That is not a failure of science; that is an essential part of the process.
5.) Whatever your favorite, most preferred hypothesis is — especially if it’s your original idea — be its harshest critic. By attempting to knock it down or poke holes in it as hard as you can, you’ll determine how well it stands up under the steeliest of scrutiny. (And if you don’t, others will.)
This is one of the hardest aspects for non-scientists (and many low-quality scientists) to engage in: working hard to undermine your own work. "Why would anyone do that?" you might wonder. And the answer is simple: because the more invested you are in an idea being true, the stronger your instinct is to:
- overlook its flaws and faults, including all the ways it fails to explain reality,
- while overemphasizing and pointing to its strong points, especially in the ways it does align with reality.
If we ever hope to get at the truth and avoid succumbing to our prejudices (or, in this case, falling prey to baloney), we have to be skeptical of every idea, including and especially our own preferred idea, and subject it to the blindingly harsh light of reality.
Particularly in the era of LLM chatbots, which will flatter us and every one of our thoughts in conversation, inflicting this type of harsh criticism upon ourselves and our cherished ideas may seem especially unnerving. From a scientific, truth-seeking perspective, however, refusing to do so is an absolute mind-killer. If you can't fathom abandoning your most preferred, cherished, deeply-held beliefs about the world because the evidence might contradict them, you've already fallen victim to the most insidious kind of baloney: the baloney that arises when we convince ourselves that we couldn't possibly be wrong or mistaken. As Richard Feynman warned more than two decades before Carl Sagan's book:
“The first principle is that you must not fool yourself — and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
6.) Don’t settle for a qualitative analysis of the issue. Be quantitative: ask and answer the key question of “by how much?”
This is something that a lot of non-scientists often overlook, particularly when it comes to scientific issues. If there are multiple possible explanations for something, and multiple contributing factors, how do you proceed? If you want to arrive at your preferred conclusion, you’ll talk in flowery terms about how massive or large an effect is, but you’ll avoid a comprehensive quantitative analysis. For example, the Earth has warmed over the past 250 years, and continues to warm even today. If you wanted to sow doubt about the cause of that warming, or to support an alternative-to-the-mainstream conclusion, you might point to a long list of contributing factors:
- the fact that we’re in the process of exiting an Ice Age,
- the fact that the Sun is variable and provides most of Earth’s energy,
- the fact that clouds trap heat, as do the natural gases in our atmosphere,
- and the fact that volcanoes not only cause cloud seeding, but contribute to heat-trapping through the greenhouse effect.
However, if you have sufficient expertise in the relevant areas (climate science and atmospheric science, for instance) and are approaching the problem scrupulously, you'll ask the key question of how much each effect contributes. That also includes quantifying the effects you might hope to downplay, such as the warming from human-created greenhouse gases emitted by the burning of fossil fuels and by agricultural practices. It's only by predicting both what happens and by how much that we reach a physical understanding of what's actually going on. Over a full century before Sagan's writings, it was Lord Kelvin who said,
“…when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
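Kelvin's point, and Sagan's, is that "which factor dominates?" is an arithmetic question, not a rhetorical one. Here is a toy sketch of that bookkeeping: sum the estimated contribution of each candidate factor and compare against the observed change. The numbers below are made up purely for illustration; they are not real climate data.

```python
# Hypothetical contributions of each factor to an observed change,
# in arbitrary units (illustrative values only, NOT real measurements).
contributions = {
    "greenhouse gases": 1.50,
    "solar variability": 0.05,
    "volcanic aerosols": -0.10,
    "orbital (ice-age) cycles": -0.01,
}

observed_change = 1.40  # the hypothetical observed change, same units

# Rank factors by magnitude and compare their sum to the observation.
total = sum(contributions.values())
for name, amount in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>26}: {amount:+.2f} ({amount / observed_change:+.0%} of observed)")
print(f"{'sum of all factors':>26}: {total:+.2f} vs. observed {observed_change:+.2f}")
```

Laid out this way, a factor that sounds impressive in qualitative prose but contributes only a few percent of the observed effect cannot carry the explanation.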
7.) If there’s a chain of argument being put forth, then every link in the chain, from the premise to the final conclusion, must be sound.
They say that a chain is only as strong as its weakest link, and that’s just as true in the chain of logical reasoning as it is in the chains tethering a battleship to its anchors. A single weak link, including:
- an untrue assumption,
- relying on a discredited or fraudulent study,
- a logical error in reasoning,
- presenting an unsubstantiated assertion as an established fact,
- or an overlooked or omitted fact that undermines one of the key points,
can lead to an invalid conclusion being drawn.
This is why we must be careful not to misuse our ability to think critically or reason logically; if we misapply our toolkit — whether because of our own cluelessness (where we fool ourselves) or due to deliberate manipulation (where we purposely fool others) — we will wind up hiding, rather than highlighting, the points of evidence that contradict our narrative. If your goal is to get at the truth, or at least our closest approximation of it at the present time, the way to do that is to be scrupulous and forthright about the strengths and weaknesses of every link in your chain of argument. If one of today’s assumptions (or chain links) turns out to later be contradicted or overthrown, that is no failure on anyone’s part. That is how our understanding of the world improves and advances: one new fact and one additional piece of information at a time.
8.) Apply the rule of Occam’s Razor: among multiple hypotheses that explain the data equally well, choose the simplest one.
Also known as the principle of parsimony, Occam’s Razor is often paraphrased as, “all other things being equal, the simplest explanation is usually the best.” However, this too can be misapplied (and often is) in many ways, and we have to be aware of what those misapplications are in order to guard against them. They include:
- when multiple hypotheses have different levels of predictive, explanatory power (in which case, one of them will usually have the most such power),
- when multiple hypotheses that do explain one class of data equally well have non-equivalent instances that conflict with reality in some other fashion,
- or where one explanation is hailed as “simpler” despite actually requiring additional unproven assumptions as compared with another.
If multiple hypotheses do not explain the data equally well, then the one that explains the data more accurately and comprehensively is superior. If multiple hypotheses work to explain the data equally well but one conflicts with reality in some other realm (and the other doesn’t), the one that’s valid across the widest range of applicability is superior. And if two rival explanations each declare that they’re the simplest one, the way to tell is by looking at the number of additional assumptions that each one needs to invoke to be true; the one with fewer additional assumptions is simpler. (For example, “dark energy exists but evolves over time” is more complex than “dark energy exists and is a constant,” because it requires a greater number of parameters to model dark energy in that fashion.)
When all else is equal, the simplest explanation is usually best, but only if all else is equal, and only if we are careful with how we apply the notion of “simple” to the problem in question.
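One way to make "fewer additional assumptions" quantitative is to count free parameters and penalize them against goodness of fit, as information criteria do. Below is a minimal sketch using the Akaike Information Criterion (AIC) for least-squares fits; the residuals and parameter counts are synthetic illustrations, not a real dark-energy analysis.

```python
import math

def aic(residual_sum_sq: float, n_points: int, n_params: int) -> float:
    """AIC for a least-squares fit: lower is better. Extra parameters
    must buy a real improvement in fit to be worth their penalty."""
    return n_points * math.log(residual_sum_sq / n_points) + 2 * n_params

n = 100  # number of (synthetic) data points

# Two hypothetical rival models fit to the same data:
#   "constant dark energy" -> 1 free parameter, residual sum of squares 10.2
#   "evolving dark energy" -> 3 free parameters, residual sum of squares 10.0
aic_simple = aic(residual_sum_sq=10.2, n_points=n, n_params=1)
aic_complex = aic(residual_sum_sq=10.0, n_points=n, n_params=3)

# The extra parameters barely improve the fit, so they don't pay their way.
print(f"AIC (constant): {aic_simple:.2f}")
print(f"AIC (evolving): {aic_complex:.2f}")
```

In this illustration the more complex model fits slightly better, but not by enough to justify its two extra parameters, so the penalized comparison favors the simpler hypothesis: Occam's Razor, expressed in arithmetic.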
9.) Ask whether the hypothesis, at least in principle, can be falsified. Non-falsifiable and untestable hypotheses cannot be checked out, and hence those ideas are incapable of disproof.
This is not a benefit; this is the hallmark of all ideas that aren’t worth very much. There are plenty of ideas that one can concoct that cannot be disproven, but that also don’t predict anything that can be tested. When I was a child, I had one such idea: the idea that the Universe was created for me at the moment of my birth, with no one else actually existing. All historical records, photographs, written texts, everyone else’s memories and experiences, etc., were created along with the Universe at the moment of my birth, so that no one would be aware of this. Certainly, this idea cannot be disproven — not by me and not by anyone else with a similar idea about themselves — but it also lacks the power to explain anything.
If it cannot be falsified by any sort of evidence, and it lacks explanatory power to quantitatively describe reality, then it isn’t worth very much to others. As Thomas Henry Huxley put it long ago,
“The foundation of all morality is to have done, once and for all, with lying; to give up pretending to believe that for which there is no evidence, and repeating unintelligible propositions about things beyond the possibilities of knowledge.”
Although we do not yet live in a world exclusively governed by rationality, skepticism, and critical thought as envisioned by Sagan, Huxley, and many others, these nine lessons remain vital tools in the eternal war against misinformation, grift, and fraud. The entire scientific enterprise remains the most meaningful method for obtaining factual knowledge about reality, and it’s by following these lessons that we’ve achieved all that we have as a civilization. To go further still, these lessons must never be forgotten.
This article Carl Sagan’s 9 timeless lessons for detecting baloney is featured on Big Think.