We all feel it: Our once-happy digital spaces have grown steadily less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist and science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.
As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.'”
Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”
The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says it is seeing a significant decline in human traffic to the online encyclopedia. More people are getting Wikipedia’s information secondhand, via generative AI chatbots trained on its articles and search engines that summarize them, without ever clicking through to the site.
The Foundation said this poses a risk to Wikipedia’s long-term sustainability.
“We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” the Foundation’s Senior Director of Product Marshall Miller said in a blog post. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Ironically, while generative AI and search engines are causing a decline in direct traffic to Wikipedia, its data is more valuable to them than ever. Wikipedia articles are some of the most common training data for AI models, and Google and other platforms have for years mined Wikipedia articles to power features like knowledge panels and search snippets, which siphon traffic away from Wikipedia itself.
“Almost all large language models train on Wikipedia datasets, and search engines and social media platforms prioritize its information to respond to questions from their users,” Miller said. “That means that people are reading the knowledge created by Wikimedia volunteers all over the internet, even if they don’t visit wikipedia.org — this human-created knowledge has become even more important to the spread of reliable information online.”
Miller said that in May 2025 Wikipedia noticed unusually high amounts of apparently human traffic, originating mostly from Brazil. He didn’t go into details, but explained that this prompted the Foundation to update its bot detection systems.
“After making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024,” he said. “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content.”
Miller told me in an email that Wikipedia has policies for third-party bots that crawl its content, such as specifying identifying information and following its robots.txt, and limits on request rate and concurrent requests.
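For a sense of what that kind of policy compliance looks like in practice, here is a minimal sketch of a “well-behaved” crawler, assuming the general requirements Miller describes: it identifies itself with a descriptive User-Agent, checks robots.txt before fetching, and rate-limits its requests. The User-Agent string, contact address, delay, and target page are illustrative assumptions, not Wikimedia’s actual values.

```python
# Minimal sketch of a "polite" crawler along the lines of the policies described
# above: identify yourself, honor robots.txt, and rate-limit requests.
# The User-Agent, contact address, delay, and target path are illustrative only.
import time
import urllib.request
import urllib.robotparser

USER_AGENT = "ExampleResearchBot/1.0 (contact: ops@example.org)"  # identifying info
BASE = "https://en.wikipedia.org"
REQUEST_DELAY_SECONDS = 2.0  # crude politeness delay between requests

# Load and parse the site's robots.txt once up front.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{BASE}/robots.txt")
robots.read()

def polite_fetch(path: str):
    """Fetch a page only if robots.txt allows it, then pause before returning."""
    url = f"{BASE}{path}"
    if not robots.can_fetch(USER_AGENT, url):
        return None  # disallowed by robots.txt; skip it
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        body = response.read()
    time.sleep(REQUEST_DELAY_SECONDS)  # one request at a time, spaced out
    return body

if __name__ == "__main__":
    page = polite_fetch("/wiki/Wikipedia:Bot_policy")
    print("fetched" if page else "skipped (disallowed by robots.txt)")
```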
“For obvious reasons, we can’t share details publicly about how exactly we block and detect bots,” he said. “In the case of the adjustment we made to data over the past few months, we observed a substantial increase over the level of traffic we expected, centering on a particular region, and there wasn’t a clear reason for it. When our engineers and analysts investigated the data, they discovered a new pattern of bot behavior, designed to appear human. We then adjusted our detection systems and re-applied them to the past several months of data. Because our bot detection has evolved over time, we can’t make exact comparisons – but this adjustment is showing the decline in human pageviews.”
Human pageviews to all language versions of Wikipedia since September 2021, with revised pageviews since April 2025. Image: Wikimedia Foundation.
“These declines are not unexpected. Search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours,” Miller said. “And younger generations are seeking information on social video platforms rather than the open web. This gradual shift is not unique to Wikipedia. Many other publishers and content platforms are reporting similar shifts as users spend more time on search engines, AI chatbots, and social media to find information. They are also experiencing the strain that these companies are putting on their infrastructure.”
Miller said that the Foundation is “enforcing policies, developing a framework for attribution, and developing new technical capabilities” in order to ensure third parties responsibly access and reuse Wikipedia content, and that it continues to “strengthen” its partnerships with search engines and other large “re-users.” The Foundation, he said, is also working on bringing Wikipedia content to younger audiences via YouTube, TikTok, Roblox, and Instagram.
However, Miller also called on users to “choose online behaviors that support content integrity and content creation.”
“When you search for information online, look for citations and click through to the original source material,” he said. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”
Greg Toppo is a Senior Writer at The 74 and a journalist with more than 25 years of experience, most of it covering education. He spent 15 years as the national education reporter for USA Today and was most recently a senior editor for Inside Higher Ed. This article appeared in The 74 on September 22, 2025.
William Liang was sitting in chemistry class one day last spring, listening to a teacher deliver a lecture on “responsible AI use,” when he suddenly realized what his teachers are up against.
The talk was about a big take-home essay, and Liang, then a sophomore at a Bay Area high school, recalled that it covered the basics: the grading rubric, plus suggestions for using generative AI in ways that keep students honest, such as treating it as a “thinking partner” and brainstorming tool.
As he listened, Liang glanced around the classroom and saw that several classmates, laptops open, had already leaped ahead several steps, generating entire drafts of their essays.
Liang said his generation doesn’t engage in moral hand-wringing about AI. “For us, it’s simply a tool that enables us not to have to think for ourselves.”
But with AI’s awesome power comes a side effect that many would rather not consider: It’s killing the trust between teachers and students.
When students can cheaply and easily outsource their work, he said, why value a teacher’s feedback? And when teachers, relying on sometimes unreliable AI-detection software, believe their students are taking such major shortcuts, the relationship erodes further.
It’s an issue that researchers are just beginning to study, with results that suggest an imminent shakeup in student-teacher relationships: AI, they say, is forcing teachers to rethink how they think about students, assessments and, to a larger extent, learning itself.
If you ask Liang, now a junior and an experienced op-ed writer — he has penned pieces for The Hill, The San Diego Union-Tribune, and the conservative Daily Wire — AI has already made school more transactional, stripping many students of their desire to learn in favor of simply completing assignments.
“The incentive system for students is to just get points,” he said in an interview.
While much of the attention of the past few years has focused on how teachers can detect AI-generated work and put a stop to it, a few researchers are beginning to look at how AI affects student-teacher relationships.
Researcher Jiahui Luo of the Education University of Hong Kong in 2024 found that college students in many cases resent the lack of “two-way transparency” around AI. While they’re required to declare their AI use and even submit chat records in a few cases, Luo wrote, the same level of transparency “is often not observed from the teachers.” That produces a “low-trust environment,” where students feel unsafe to freely explore AI.
In 2024, after being asked by colleagues at Drexel University to help resolve an AI cheating case, researcher Tim Gorichanaz, who teaches in the university’s College of Computing and Informatics, analyzed college students’ Reddit threads, spanning December 2022 to June 2023, shortly after OpenAI unleashed ChatGPT onto the world. He found that many students were beginning to feel the technology was testing the trust they felt from instructors, in many cases eroding it — even if they didn’t rely on AI.
While many students said instructors trusted them and would offer them the benefit of the doubt in suspected cases of AI cheating, others were surprised when they were accused nonetheless. That damaged the trust relationship.
For many, it meant they’d have to work on future assignments “defensively,” Gorichanaz wrote, anticipating cheating accusations. One student even suggested, “Screen recording is a good idea, since the teacher probably won’t have as much trust from now on.” Another complained that their instructor now implicitly trusted AI plagiarism detectors “more than she trusts us.”
In an interview, Gorichanaz said instructors’ trust in AI detectors is a big problem. “That’s the tool that we’re being told is effective, and yet it’s creating this situation of mutual distrust and suspicion, and it makes nobody like each other. It’s like, ‘This is not a good environment.’”
For Gorichanaz, the biggest problem is that AI detectors simply aren’t that reliable — for one thing, they are more likely to flag the papers of English language learners as being written by AI, he said. In one Stanford University study from 2023, they “consistently” misclassified non-native English writing samples as AI-generated, while accurately identifying the provenance of writing samples by native English speakers.
“We know that there are these kinds of biases in the AI detectors,” Gorichanaz said. That potentially puts “a seed of doubt” in the instructor’s mind, when they should simply be using other ways to guide students’ writing. “So I think it’s worse than just not using them at all.”
‘It is an enormous wedge in the relationship’
Liz Shulman, an English teacher at Evanston Township High School near Chicago, recently had an experience similar to Liang’s: One of her students covertly relied on AI to help write an essay on Romeo and Juliet, but forgot to delete part of the prompt he’d used. Next to the essay’s title were the words, “Make it sound like an average ninth-grader.”
Asked about it, the student simply shrugged, Shulman recalled in a recent op-ed she co-authored with Liang.
In an interview, Shulman said that just three weeks into the new school year, in late August, she had already had to sit down with another student who used AI for an assignment. “I pretty much have to assume that students are going to use it,” she said. “It is an enormous wedge in the relationship, which is so important to build, especially this time of the year.”
Her take: School has transformed since the extended COVID lockdowns of 2020, with students recalibrating their expectations. It’s less relational, she said, and “much more transactional.”
During lockdowns, she said, Google “infiltrated every classroom in America — it was how we pushed out documents to students.” Five years later, if students miss a class because of illness, their “instinct” now is simply to check Google Classroom, the widely used management tool, “rather than coming to me and say, ‘Hey, I was sick. What did we do?’”
That’s a bitter pill for an English teacher who aspires to shift students’ worldviews and beliefs — and who relies heavily on in-class discussions.
“That’s not something you can push out on a Google doc,” Shulman said. “That takes place in the classroom.”
In a sense, she said, AI is shrinking the space where learning can reliably take place: If students can simply turn off their thinking at home and rely on AI tools to complete assignments, the classroom becomes the sole place where learning occurs.
“Because of AI, are we only going to ‘do school’ while we’re in school?” she asked.
‘We forget all the stuff we learned before’
Accounts of teachers resigned to students cheating with AI are “concerning” and stand in contrast to what a solid body of research says about the importance of teacher agency, said Brooke Stafford-Brizard, senior vice president for Innovation and Impact at the Carnegie Foundation.
Teachers, she said, “are not just in a classroom delivering instruction — they’re part of a community. Really wonderful school and system leaders recognize that, and they involve them. They’re engaged in decision making. They have that agency.”
One of the main principles of Carnegie’s R&D Agenda for High School Transformation, a blueprint for improving secondary education, is a “culture of trust,” suggesting that schools nurture supportive learning and “positive relationships” for students and educators.
“Education is a deeply social process,” Stafford-Brizard said. “Teaching and learning are social, and schools are social, and so everyone contributing to those can rely on that science of relational trust, the science of relationships. We can pull from that as intentionally as we pull from the science of reading.”
Gorichanaz, the Drexel scholar, said that for all of its newness, generative AI presents educators with what’s really an old challenge: How to understand and prevent cheating.
“We have this tendency to think AI changed the entire world, and everything’s different and revolutionized and so on,” he said. “But it’s just another step. We forget all the stuff we learned before.”
Specifically, research going back more than a decade identifies four key reasons why students cheat: They don’t see an assignment’s relevance to their lives, they’re under time pressure, they’re intimidated by its high stakes, or they don’t feel equipped to succeed.
Even in the age of AI, said Gorichanaz, teachers can lessen the allure of taking shortcuts by solving for these conditions — figuring out, for instance, how to intrinsically motivate students to study by helping them connect with the material for its own sake. They can also help students see how an assignment will help them succeed in a future career. And they can design courses that prioritize deeper learning and competence.
To alleviate testing pressure, teachers can make assignments more low-stakes and break them up into smaller pieces. They can also give students more opportunities in the classroom to practice the skills and review the knowledge being tested.
And teachers should talk openly about academic honesty and the ethics of cheating.
“I’ve found in my own teaching that if you approach your assignments in that way, then you don’t always have to be the police,” he said. Students are “more incentivized, just by the system, to not cheat.”
With writing, teachers can ask students to submit smaller “checkpoint” assignments, such as outlines and handwritten notes and drafts that classmates can review and comment on. They can also rely more on oral exams and handwritten blue book assignments.
Shulman, the Chicago-area English teacher, said she and her colleagues are not only moving back to blue books, but to doing “a lot more on paper than we ever used to.” They’re asking students to close their laptops in class and assigning less work to be completed outside of class.
As for Liang, the high school junior, he said his new English teacher expects all assignments to come in hand-written. But he also noted that a few teachers have fallen under the spell of ChatGPT themselves, using it for class presentations. As one teacher last spring clicked through a slide show, he said, “It was glaringly obvious, because all kids are AI experts, and they can just instantly sniff it out.”
He added, “There was a palpable feeling of distrust in the room.”
There’s a statement that one can make that would have been completely non-controversial at the end of the 19th century, but many people both in and out of science would argue against it today. Consider for yourself how you feel about it:
“The fundamental laws that govern the smallest constituents of matter and energy, when applied to the Universe over long enough cosmic timescales, can explain everything that will ever emerge.”
This means that the formation of literally everything in our Universe, from atomic nuclei to atoms to simple molecules to complex molecules to life to intelligence to consciousness and beyond, can all be understood as something that emerges directly from the fundamental laws underpinning reality, with no additional laws, forces, or interactions required.
This simple idea — that all phenomena in the Universe are fundamentally physical phenomena — is known as reductionism. In many places, including right here on Big Think, reductionism is treated as though it’s not the taken-for-granted default position about how the Universe works. The alternative proposition is emergence, which holds that qualitatively novel properties arise in more complex systems and can never, even in principle, be derived or computed from fundamental laws, principles, and entities. While it’s true that many phenomena do not obviously follow from the behavior of their constituent parts, reductionism should be the default position (or null hypothesis) for any interpretation of reality. Anything else should be treated as the equivalent of a God-of-the-gaps argument, and what follows is an explanation as to why.
On the right, the gauge bosons, which mediate the three fundamental quantum forces of our Universe, are illustrated. There is only one photon to mediate the electromagnetic force, there are three bosons mediating the weak force, and eight mediating the strong force. This suggests that the Standard Model is a combination of three groups: U(1), SU(2), and SU(3), whose interactions and particles combine to make up everything known in existence. Despite the success of this picture, many puzzles still remain.
When we think about the question of “what is fundamental” in this Universe, we typically turn to the most indivisible, elementary entities of all and the laws that govern them. For our physical reality, that means starting with the particles of the Standard Model and the interactions that govern them — as well as whatever dark matter and dark energy turn out to be, since their nature is still unknown — and seeing whether that gives us the necessary and sufficient ingredients to build every known phenomenon and complex entity out of those building blocks alone.
As long as there’s a combination of forces that are relatively attractive at one scale but relatively repulsive at a different scale, we’re pretty much guaranteed to form bound structures out of these fundamental entities. Given that we have four fundamental forces in the Universe, including:
short-range nuclear forces that come in two types, a strong version and a weak version,
a long-range electromagnetic force, where “like” charged particles repel and “unlike” charged particles attract,
and a long-range gravitational force, which is always attractive,
we should fully expect that structures will emerge on a variety of distance scales: at small, intermediate, and large scales alike.
The traditional model of an atom, now more than 100 years old, is of a positively charged nucleus orbited by negatively charged electrons. Although the outdated Bohr model is where this picture comes from, the size of the atom itself is determined by the charge-to-mass ratio of the electron. If the electron were heavier or lighter, atoms would be smaller or larger, as well as more difficult or more easy to ionize, respectively.
Indeed: this is precisely what we find when we examine the Universe we actually inhabit. On the smallest scales, the strong nuclear force binds quarks into bound structures, three at a time, known as baryons. The lightest two baryons are the most stable: the proton, which is 100% stable, and the neutron, which survives with a mean lifetime of about 15 minutes even when it isn’t bound to anything else.
Those protons and neutrons can then serve as composite building blocks for still larger bound structures. Here, too, the strong nuclear force is the culprit: its residual effects bind protons and neutrons together into atomic nuclei, overcoming the repulsive electromagnetic force between the like (positive) charges of the multiple protons found in most complex nuclei. Some nuclei will be stable against decay; others will undergo one or more (radioactive) decays before producing a stable end product.
And then, the electromagnetic force leverages two facts about the Universe.
That, overall, it’s electrically neutral, with the same number of negative charges (electrons) as there are positive charges (protons) in existence.
And that each electron is tiny in mass compared to each proton, neutron, and atomic nucleus.
This allows electrons and nuclei to bind together in order to form neutral atoms, where every unique species of atom, depending on the number of protons in its nucleus, has its own unique electron structure, in accordance with the fundamental laws of quantum physics that govern our Universe.
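As one concrete example of how an atomic-scale property follows directly from those fundamental laws and constants (the standard hydrogen-like result from introductory quantum mechanics, included here purely as an illustration), the characteristic size of an atom is set by the Bohr radius:

$$
a_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} \approx 5.29 \times 10^{-11}\ \mathrm{m},
$$

which depends only on the electron’s mass, its charge, and fundamental constants. A heavier electron would mean smaller atoms, and because the hydrogen binding energy scales in proportion to $m_e$, those atoms would also be harder to ionize, exactly as the figure caption above notes.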
The energy levels and electron wavefunctions that correspond to different states within a hydrogen atom, although the configurations are extremely similar for all atoms. The way atoms bind together to form molecules and other, more complex structures is a challenging task when one begins from fundamental particles and interactions, but understanding the basics is how we build up to explaining more complex systems.
It’s very important, when we discuss the idea of reductionism, that we don’t “strawman” the reductionist’s position. The reductionist doesn’t contend — nor does the reductionist need to contend — that they have a complete and full explanation for each and every complex phenomenon that arises within every imaginable complex structure. Some composite structures and some properties of complex structures will be easily explicable from the underlying rules, sure, but the more complex your system becomes, the more difficult you can expect it will be to explain all of the various phenomena and properties that emerge.
That latter piece cannot be considered “evidence against reductionism” in any way, shape, or form. The fact that “there exists this phenomenon that lies beyond my ability to make robust, quantitative predictions about it” should never be construed as evidence in favor of “this phenomenon requires additional laws, rules, substances, or interactions beyond what’s presently known.”
You either understand your system well enough to know what should and shouldn’t emerge from it, in which case you can put reductionism to the test, or you don’t, in which case you have to fall back on the null hypothesis: that until you can make such predictions from a reductionist approach, you can’t treat any evidence you find as evidence of the need for something beyond the reductionist viewpoint.
A wine glass, when vibrated at the right frequency, will shatter. This is a process that dramatically increases the entropy of the system and is thermodynamically favorable. The reverse process, of shards of glass reassembling themselves into a whole, uncracked glass, is so unlikely that it never occurs spontaneously in practice. However, if the motion of the individual shards, as they fly apart, were exactly reversed, they would indeed fly back together and, at least for an instant, successfully reassemble the wine glass. Time reversal symmetry is exact in Newtonian physics, but it is not obeyed in thermodynamics.
And, to be clear, that’s what the “null hypothesis” is: that the Universe is 100% reductionist. That means a suite of things.
That all structures that are built out of atoms and their constituents — including molecules, ions, and enzymes — can be described based on the fundamental laws of nature and the component structures that they’re made out of.
That all larger structures and processes that occur between those structures, including all chemical reactions, don’t require anything more than those fundamental laws and constituents.
That all biological processes, from biochemistry to molecular biology and beyond, as complex as they might appear to be, are truly just the sum of their parts, even if each “part” of a biological system is remarkably complex.
And that everything that we regard as “higher functioning,” including the workings of our various cells, organs, and even our brains, doesn’t require anything beyond the known physical constituents and laws of nature to explain.
To date, although it shouldn’t be controversial to make such a statement, there is no evidence for the existence of any phenomenon that falls outside of what reductionism is capable of explaining.
The Al Naslaa rock formation, located in Saudi Arabia, is made of high-density sedimentary rock and shows significant evidence of weathering and erosion. However, the pedestal beneath it has eroded more quickly, the petroglyphs upon it are thousands of years old, and the extremely smooth fissure down its center is not yet fully explained.
How “apparent emergence” is readily explained by reductionism
For some properties inherent to complex systems, it’s pretty easy to explain why they exist as they do. The mass (or weight, if you prefer to use scales) of a macroscopic object is, quite simply, the sum of the masses of the components that make it up, minus the mass lost to the energy binding those components together, via Einstein’s E = mc².
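To make that bookkeeping concrete, here is a standard textbook example (not one drawn from this article): compare the mass of a helium-4 nucleus to the summed masses of its parts.

$$
m(^{4}\mathrm{He}) = 2m_p + 2m_n - \frac{E_B}{c^2}
$$

$$
2(938.27) + 2(939.57) \approx 3755.7\ \mathrm{MeV}/c^2, \qquad m(^{4}\mathrm{He}) \approx 3727.4\ \mathrm{MeV}/c^2,
$$

so the binding energy is $E_B \approx 28.3\ \mathrm{MeV}$: the bound nucleus weighs roughly 0.75% less than its constituent protons and neutrons would in isolation.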
For other properties, it’s not necessarily such an easy task, but it has been accomplished. We can explain how thermodynamic quantities like heat, temperature, entropy, and enthalpy emerge from a complex, large-scale ensemble of particles. We can explain the properties of many molecules through the science of quantum chemistry, which again can be derived directly from the underlying fundamental laws. We can use those same fundamental laws to understand — although the computing power required is immense — how various molecules, such as peptides and proteins, fold into their equilibrium configurations, and how they can also wind up in metastable states.
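One of the simplest such results, included here purely as an illustration of the general pattern: in kinetic theory, the temperature of an ideal gas is nothing more than a measure of the average kinetic energy of its constituent particles,

$$
\left\langle \tfrac{1}{2} m v^2 \right\rangle = \tfrac{3}{2} k_B T,
$$

and the pressure it exerts, via $PV = N k_B T$, emerges entirely from those same particles colliding with the container walls. No new law is needed beyond mechanics applied to an enormous number of particles.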
And then there are properties that we cannot fully explain, but for which we are also incapable of making robust predictions about what we expect to see under those conditions. These “hard problems” often involve systems that are far too complex to model with current technology, such as human consciousness.
Then-graduate student Chao He in front of the gas chamber in the Horst planetary lab at Johns Hopkins, which recreates conditions suspected to exist in the hazes of exoplanet atmospheres. By subjecting it to conditions designed to mimic those induced by ultraviolet emissions and plasma discharges, researchers work toward the emergence of organics, and life, from non-life.
Credit: Chanapa Tantibanchachai/Johns Hopkins University
In other words, what appears to be emergent to us today, given the present limits of what we can compute, may someday be describable in purely reductionist terms. Many systems that were once impossible to describe via reductionism have, with superior models (in terms of what we choose to pay attention to) and the advent of improved computing power, now been successfully described in precisely a reductionist fashion. Many seemingly chaotic systems can, in fact, be predicted to whatever accuracy we choose, so long as enough computational resources are available.
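As a toy illustration of that last claim (my own example, not the article’s): the logistic map below is a textbook chaotic system, and simply allocating more arithmetic precision, i.e., more computational resources, lets you follow its true trajectory for correspondingly more steps before round-off error takes over.

```python
# Toy illustration (not from the article): the chaotic logistic map
# x_{n+1} = r * x_n * (1 - x_n). Chaos amplifies tiny errors, but with
# enough arithmetic precision (i.e., enough computational resources) the
# true trajectory can still be followed for as many steps as we like.
from decimal import Decimal, getcontext

def logistic_trajectory(x0: str, r: str, steps: int, digits: int) -> Decimal:
    """Iterate the logistic map `steps` times at `digits` decimal digits of precision."""
    getcontext().prec = digits
    x, growth = Decimal(x0), Decimal(r)
    for _ in range(steps):
        x = growth * x * (Decimal(1) - x)
    return x

# At r = 3.99 the map loses roughly 0.3 decimal digits of accuracy per step,
# so after 100 steps about 30 leading digits have been consumed: the 15-digit
# run is essentially noise, while the 50- and 200-digit runs still agree in
# their leading digits with the true trajectory.
for digits in (15, 50, 200):
    print(digits, logistic_trajectory("0.2", "3.99", steps=100, digits=digits))
```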
Yes, we can’t rule out non-reductionism, but wherever we’ve been able to make robust predictions for what the fundamental laws of nature do imply for large-scale, complex structures, they’ve been in agreement with what we’ve been able to observe and measure. The combination of the known particles that make up the Universe and the four fundamental forces through which they interact has been sufficient to explain, from atomic to stellar scales and beyond, everything we’ve ever encountered in this Universe. The existence of systems that are too complex to predict with current technology is not an argument against reductionism.
Many have argued, unsuccessfully, that the evolution of a complex organ like the human eye could not have occurred through natural processes alone. And yet the eye has evolved naturally, and independently, in many different organisms. Asserting the need for something supernatural at an intermediate scale in the Universe is fundamentally antithetical to the process of science, and is likely to be proven unnecessary and extraneous as science continues to advance.
Credit: Venti Views / Unsplash
The God-of-the-gaps nature of non-reductionism
But it is true that resorting to non-reductionism — or the notion that completely novel properties will emerge within a complex system that cannot be derived from the interactions of its constituent parts — is tantamount, at this point in time, to a God-of-the-gaps argument. It basically says, “Well, we know how things behave on a certain scale or at a certain time, and we know how they behaved on a smaller scale or at an earlier time, but we can’t fill in all the steps to get from that small scale/early time to understand how the large scale/later time behavior comes about, and therefore, I’m going to insert the possibility that something magical, divine, or otherwise non-physical comes into play.”
Although this is an assertion that is difficult or even impossible to disprove, it’s one that has not only zero, but negative scientific value. The whole process of science involves investigating the Universe with the tools we have at our disposal for investigating reality, and determining the best physical model, description, and set of conditions that describes that reality. What a fool’s errand it is to assert “maybe we need more than our current best model to describe reality” when:
we don’t even have the computational or modeling power necessary to put our current model to the test,
and where, if you insert something magical, divine, or non-physical, these are precisely the regimes in which science is most likely, in the very near future, to show that such an intervention is wholly unnecessary.
If life began with a random peptide that could metabolize nutrients/energy from its environment, replication could then ensue from peptide-nucleic acid coevolution. Here, DNA-peptide coevolution is illustrated, but it could work with RNA or even PNA as the nucleic acid instead. Asserting that a “divine spark” is needed for life to arise is a classic “God-of-the-gaps” argument, but asserting that we know exactly how life arose from non-life is also a fallacy. These conditions, including rocky planets with these molecules present on their surfaces, likely existed within the first 1-2 billion years after the Big Bang.
If you either believe or simply want to believe that there’s more to the Universe than the sum of its physical parts, that’s a claim about which science has nothing meaningful to say; science is completely agnostic about that possibility. However, if you want to believe that a description of the physical phenomena that exist in this Universe requires either:
something more than the physical laws that govern the Universe,
or something other than the physical objects that exist within the Universe,
then perhaps the least successful decision you can make is to insert whatever “metaphysical” entities you believe in at precisely the place where science, once it advances just a little bit further, can disprove the need for them entirely.
I have never understood why one would be so willing to assert the existence of the divine or supernatural in such a small place: a place where it would be so easy to falsify the need for it. Why would you believe, while inhabiting a Universe that’s so vast, that something beyond the capability of our physical laws to describe would primarily appear in such an extraneous, unnecessary place? If the Universe, as we observe and measure it, isn’t able to be described by what’s physically present within it under the known laws of reality, shouldn’t we determine that to actually be the case before resorting to a non-scientific, supernatural explanation?
A fruit fly brain as viewed through a confocal microscope. The workings of the brain of any animal are not fully understood, but it’s eminently plausible that electrical activity in the brain and throughout the body is responsible for what we know as “consciousness,” and furthermore, that human beings are not so unique among animals or even other living creatures in possessing it.
The fundamental components of our physical Universe, along with the fundamental laws that govern all of existence, represent the most successful scientific picture of the Universe in all of history. Never before, from the tiniest subatomic particles to macroscopic phenomena to cosmic scales, have we had such a successful way of describing our physical reality as we do today. The idea of reductionism is simple: that physical phenomena can be explained by the complex combination of the objects that exist within the Universe, governed by the same physical laws that govern all physical systems within the Universe.
That’s our default starting point: the “null hypothesis” for what reality is.
If that’s not your starting point, it’s my duty to inform you that the burden of proof for your worldview — one that includes a new set of fundamental forces, new entities, new interactions, or the intervention of the supernatural — lies with you. You must show that the null hypothesis is insufficient to describe a phenomenon where its predictions are clear and in conflict with what can be observed and/or measured. This is a very high bar to clear, and an endeavor that no opponent of reductionism has ever succeeded at. We may not understand everything there is to know about all complex phenomena: that much is true. The more complex a phenomenon is, the harder it is to derive all of its properties from fundamentals, but that’s not the same as having evidence that something more is required.
In science, however, we’re never satisfied with a statement that simply says “this problem is hard, so maybe the answer lies beyond science.” That’s not how progress is made. The only way we ever move forward is by conducting more and better science, relentlessly, until we figure out how it all works.
This article first appeared in August of 2022. It was updated in October of 2025.