
Another AI Side Effect: Erosion of Student-Teacher Trust (Greg Toppo)


“Greg Toppo is a Senior Writer at The 74 and a journalist with more than 25 years of experience, most of it covering education. He spent 15 years as the national education reporter for USA Today and was most recently a senior editor for Inside Higher Ed.” This appeared in The 74 on September 22, 2025.

William Liang was sitting in chemistry class one day last spring, listening to a teacher deliver a lecture on “responsible AI use,” when he suddenly realized what his teachers were up against.

The talk was about a big, take-home essay, and Liang, then a sophomore at a Bay Area high school, recalled that it covered the basics: the rubric for grading, as well as suggestions, meant to keep students honest, for how to use generative AI: as a “thinking partner” and brainstorming tool.

As he listened, Liang glanced around the classroom and saw that several classmates, laptops open, had already leaped ahead several steps, generating entire drafts of their essays.

Liang said his generation doesn’t engage in moral hand-wringing about AI. “For us, it’s simply a tool that enables us not to have to think for ourselves.”

William Liang

But with AI’s awesome power comes a side effect that many would rather not consider: It’s killing the trust between teachers and students. 

When students can cheaply and easily outsource their work, he said, why value a teacher’s feedback? And when teachers, relying on sometimes unreliable AI-detection software, believe their students are taking such major shortcuts, the relationship erodes further.

It’s an issue that researchers are just beginning to study, with results that suggest an imminent shakeup in student-teacher relationships: AI, they say, is forcing teachers to rethink their assumptions about students, assessments and, to a larger extent, learning itself. 

If you ask Liang, now a junior and an experienced op-ed writer — he has penned pieces for The Hill, The San Diego Union-Tribune, and the conservative Daily Wire — AI has already made school more transactional, stripping many students of their desire to learn in favor of simply completing assignments. 

“The incentive system for students is to just get points,” he said in an interview. 

While much of the attention of the past few years has focused on how teachers can detect AI-generated work and put a stop to it, a few researchers are beginning to look at how AI affects student-teacher relationships.

In 2024, researcher Jiahui Luo of the Education University of Hong Kong found that college students in many cases resent the lack of “two-way transparency” around AI. While students are required to declare their AI use and, in a few cases, even submit chat records, Luo wrote, the same level of transparency “is often not observed from the teachers.” That produces a “low-trust environment” in which students don’t feel safe to freely explore AI.

In 2024, after being asked by colleagues at Drexel University to help resolve an AI cheating case, researcher Tim Gorichanaz, who teaches in the university’s College of Computing and Informatics, analyzed college students’ Reddit threads spanning December 2022 to June 2023, shortly after OpenAI unleashed ChatGPT onto the world. He found that many students felt the technology was testing the trust their instructors placed in them, and in some cases eroding it — even if they didn’t rely on AI.

Tim Gorichanaz, Drexel University

While many students said instructors trusted them and would offer them the benefit of the doubt in suspected cases of AI cheating, others were surprised when they were accused nonetheless. That damaged the trust relationship.

For many, it meant they’d have to work on future assignments “defensively,” Gorichanaz wrote, anticipating cheating accusations. One student even suggested, “Screen recording is a good idea, since the teacher probably won’t have as much trust from now on.” Another complained that their instructor now implicitly trusted AI plagiarism detectors “more than she trusts us.”

In an interview, Gorichanaz said instructors’ trust in AI detectors is a big problem. “That’s the tool that we’re being told is effective, and yet it’s creating this situation of mutual distrust and suspicion, and it makes nobody like each other. It’s like, ‘This is not a good environment.’”

For Gorichanaz, the biggest problem is that AI detectors simply aren’t that reliable — for one thing, they are more likely to flag the papers of English language learners as being written by AI, he said. In one Stanford University study from 2023, they “consistently” misclassified non-native English writing samples as AI-generated, while accurately identifying the provenance of writing samples by native English speakers.

“We know that there are these kinds of biases in the AI detectors,” Gorichanaz said. That potentially puts “a seed of doubt” in the instructor’s mind, when they should simply be using other ways to guide students’ writing. “So I think it’s worse than just not using them at all.” 
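To see why leaning on detector verdicts is risky even before accounting for bias, consider the base-rate arithmetic. The sketch below uses made-up numbers and a hypothetical helper function (nothing here comes from the Stanford study or any particular detector); it only illustrates how a modest false-positive rate turns into a large share of wrongful accusations when most students are honest.

```python
# Hypothetical illustration: why a "mostly accurate" AI detector can still be
# unreliable as grounds for accusation when most essays are honestly written.

def share_of_flags_that_are_honest(p_ai: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' rule: fraction of flagged essays that were actually human-written."""
    p_flag = sensitivity * p_ai + false_positive_rate * (1.0 - p_ai)
    return false_positive_rate * (1.0 - p_ai) / p_flag

# Assumed numbers, chosen only to show the arithmetic: 10% of essays are
# AI-written, the detector catches 90% of those, and it wrongly flags 5% of
# honest essays.
print(share_of_flags_that_are_honest(p_ai=0.10, sensitivity=0.90, false_positive_rate=0.05))
# ~0.33: roughly one in three flags would land on an honest student, and the
# burden falls hardest on groups (such as English language learners) whose
# writing the detector misclassifies more often.
```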

‘It is an enormous wedge in the relationship’

Liz Shulman, an English teacher at Evanston Township High School near Chicago, recently had an experience similar to Liang’s: One of her students covertly relied on AI to help write an essay on Romeo and Juliet, but forgot to delete part of the prompt he’d used. Next to the essay’s title were the words, “Make it sound like an average ninth-grader.”

Liz Shulman, Evanston Township High School

Asked about it, the student simply shrugged, Shulman recalled in a recent op-ed she co-authored with Liang.

In an interview, Shulman said that just three weeks into the new school year, in late August, she had already had to sit down with another student who used AI for an assignment. “I pretty much have to assume that students are going to use it,” she said. “It is an enormous wedge in the relationship, which is so important to build, especially this time of the year.”

Her take: School has transformed since the extended COVID lockdowns of 2020, with students recalibrating their expectations. It’s less relational, she said, and “much more transactional.” 

During lockdowns, she said, Google “infiltrated every classroom in America — it was how we pushed out documents to students.” Five years later, if students miss a class because of illness, their “instinct” now is simply to check Google Classroom, the widely used management tool, “rather than coming to me and say, ‘Hey, I was sick. What did we do?’”

That’s a bitter pill for an English teacher who aspires to shift students’ worldviews and beliefs — and who relies heavily on in-class discussions.

“That’s not something you can push out on a Google doc,” Shulman said. “That takes place in the classroom.”

In a sense, she said, AI is contracting where learning can reliably take place: If students can simply turn off their thinking at home and rely on AI tools to complete assignments, that leaves the classroom as the sole place where learning occurs. 

“Because of AI, are we only going to ‘do school’ while we’re in school?” she asked. 

‘We forget all the stuff we learned before’

Accounts of teachers resigned to students cheating with AI are “concerning” and stand in contrast to what a solid body of research says about the importance of teacher agency, said Brooke Stafford-Brizard, senior vice president for Innovation and Impact at the Carnegie Foundation.

Teachers, she said, “are not just in a classroom delivering instruction — they’re part of a community. Really wonderful school and system leaders recognize that, and they involve them. They’re engaged in decision making. They have that agency.”

One of the main principles of Carnegie’s R&D Agenda for High School Transformation, a blueprint for improving secondary education, is a “culture of trust,” suggesting that schools nurture supportive learning and “positive relationships” for students and educators.

“Education is a deeply social process,” Stafford-Brizard said. “Teaching and learning are social, and schools are social, and so everyone contributing to those can rely on that science of relational trust, the science of relationships. We can pull from that as intentionally as we pull from the science of reading.”

Gorichanaz, the Drexel scholar, said that for all of its newness, generative AI presents educators with what’s really an old challenge: How to understand and prevent cheating. 

“We have this tendency to think AI changed the entire world, and everything’s different and revolutionized and so on,” he said. “But it’s just another step. We forget all the stuff we learned before.”

Specifically, research going back more than a decade identifies four key reasons why students cheat: They don’t understand the relevance of an assignment to their life, they’re under time pressure, they’re intimidated by its high stakes, or they don’t feel equipped to succeed.

Even in the age of AI, said Gorichanaz, teachers can lessen the allure of taking shortcuts by solving for these conditions — figuring out, for instance, how to intrinsically motivate students to study by helping them connect with the material for its own sake. They can also help students see how an assignment will help them succeed in a future career. And they can design courses that prioritize deeper learning and competence. 

To alleviate testing pressure, teachers can make assignments more low-stakes and break them up into smaller pieces. They can also give students more opportunities in the classroom to practice the skills and review the knowledge being tested.

And teachers should talk openly about academic honesty and the ethics of cheating.

“I’ve found in my own teaching that if you approach your assignments in that way, then you don’t always have to be the police,” he said. Students are “more incentivized, just by the system, to not cheat.”

With writing, teachers can ask students to submit smaller “checkpoint” assignments, such as outlines and handwritten notes and drafts that classmates can review and comment on. They can also rely more on oral exams and handwritten blue book assignments. 

Shulman, the Chicago-area English teacher, said she and her colleagues are not only moving back to blue books, but to doing “a lot more on paper than we ever used to.” They’re asking students to close their laptops in class and assigning less work to be completed outside of class. 

As for Liang, the high school junior, he said his new English teacher expects all assignments to come in hand-written. But he also noted that a few teachers have fallen under the spell of ChatGPT themselves, using it for class presentations. As one teacher last spring clicked through a slide show, he said, “It was glaringly obvious, because all kids are AI experts, and they can just instantly sniff it out.” 

He added, “There was a palpable feeling of distrust in the room.”




GifCities – The Geocities Animated GIF Search from Internet Archive


Yes, reductionism can explain everything in the whole Universe


There’s a statement that one can make that would have been completely non-controversial at the end of the 19th century, but many people both in and out of science would argue against it today. Consider for yourself how you feel about it:

“The fundamental laws that govern the smallest constituents of matter and energy, when applied to the Universe over long enough cosmic timescales, can explain everything that will ever emerge.”

This means that the formation of literally everything in our Universe, from atomic nuclei to atoms to simple molecules to complex molecules to life to intelligence to consciousness and beyond, can all be understood as something that emerges directly from the fundamental laws underpinning reality, with no additional laws, forces, or interactions required.

This simple idea — that all phenomena in the Universe are fundamentally physical phenomena — is known as reductionism. In many places, including right here on Big Think, reductionism is treated as though it’s not the taken-for-granted default position about how the Universe works. The alternative proposition is emergence, which states that qualitatively novel properties are found in more complex systems that can never, even in principle, be derived or computed from fundamental laws, principles, and entities. While it’s true that many phenomena are not obviously emergent from the behavior of their constituent parts, reductionism should be the default position (or null hypothesis) for any interpretation of reality. Anything else should be treated as the equivalent of the God-of-the-gaps argument, and what follows is an explanation as to why.

On the right, the gauge bosons, which mediate the three fundamental quantum forces of our Universe, are illustrated. There is only one photon to mediate the electromagnetic force, there are three bosons mediating the weak force, and eight mediating the strong force. This suggests that the Standard Model is a combination of three groups: U(1), SU(2), and SU(3), whose interactions and particles combine to make up everything known in existence. Despite the success of this picture, many puzzles still remain.
Credit: Daniel Domingues/CERN

The fundamental

When we think about the question of “what is fundamental” in this Universe, we typically turn to the most indivisible, elementary entities of all and the laws that govern them. For our physical reality, that means we ought to start with the particles of the Standard Model and the interactions that govern them — as well as whatever dark matter and dark energy turn out to be, as their nature is still unknown — and see whether that gives us the necessary and sufficient ingredients to build every known phenomenon and complex entity out of those building blocks alone.

As long as there’s a combination of forces that are relatively attractive at one scale but relatively repulsive at a different scale, we’re pretty much guaranteed to form bound structures out of these fundamental entities. Given that we have four fundamental forces in the Universe, including:

  • short-range nuclear forces that come in two types, a strong version and a weak version,
  • a long-range electromagnetic force, where “like” charged particles repel and “unlike” charged particles attract,
  • and a long-range gravitational force, where the force between any two masses is always attractive,

we should fully expect that structures will emerge on a variety of distance scales: at small, intermediate, and large scales alike.

The traditional model of an atom, now more than 100 years old, is of a positively charged nucleus orbited by negatively charged electrons. Although the outdated Bohr model is where this picture comes from, the size of the atom itself is determined by the charge-to-mass ratio of the electron. If the electron were heavier or lighter, atoms would be smaller or larger, as well as harder or easier to ionize, respectively.
Credit: U.S. Department of Energy

Indeed: this is precisely what we find when we examine the Universe we actually inhabit. On the smallest scales, the strong nuclear force binds quarks into bound structures, three at a time, known as baryons. The lightest two baryons are the most stable: the proton, which is 100% stable, and the neutron, which survives with a mean lifetime of about 15 minutes even when it isn’t bound to anything else.

Those protons and neutrons can then serve as the building blocks of larger bound structures. Again, the strong nuclear force is responsible: it binds protons and neutrons together into atomic nuclei, overcoming the repulsive electromagnetic force between the like (positive) charges of the multiple protons found in most complex nuclei. Some nuclei will be stable against decays; others will undergo one or more (radioactive) decays before producing a stable end-product.

And then, the electromagnetic force leverages two facts about the Universe.

  1. That, overall, it’s electrically neutral, with the same number of negative charges (electrons) as there are positive charges (protons) in existence.
  2. And that each electron is tiny in mass compared to each proton, neutron, and atomic nucleus.

This allows electrons and nuclei to bind together in order to form neutral atoms, where every unique species of atom, depending on the number of protons in its nucleus, has its own unique electron structure, in accordance with the fundamental laws of quantum physics that govern our Universe.


The energy levels and electron wavefunctions that correspond to different states within a hydrogen atom, although the configurations are extremely similar for all atoms. Explaining how atoms bind together to form molecules and other, more complex structures is a challenging task when one begins from fundamental particles and interactions, but understanding the basics is how we build up to explaining more complex systems.
Credit: PoorLeno/Wikimedia Commons

How a reductionist sees the Universe

It’s very important, when we discuss the idea of reductionism, that we don’t “strawman” the reductionist’s position. The reductionist doesn’t contend — nor does the reductionist need to contend — that they have a complete and full explanation for each and every complex phenomenon that arises within every imaginable complex structure. Some composite structures and some properties of complex structures will be easily explicable from the underlying rules, sure, but the more complex your system becomes, the more difficult you can expect it will be to explain all of the various phenomena and properties that emerge.

That latter piece cannot be considered “evidence against reductionism” in any way, shape, or form. The fact that “there exists this phenomenon that lies beyond my ability to make robust, quantitative predictions about” should never be construed as evidence in favor of “this phenomenon requires additional laws, rules, substances, or interactions beyond what’s presently known.”

Either you understand your system well enough to know what should and shouldn’t emerge from it, in which case you can put reductionism to the test, or you don’t, in which case you have to fall back on the null hypothesis: until you can make such predictions from a reductionist approach, you can’t treat any evidence you find as evidence of the need for something beyond the reductionist viewpoint.


A wine glass, when vibrated at the right frequency, will shatter. This is a process that dramatically increases the entropy of the system and is thermodynamically favorable. The reverse process, of shards of glass reassembling themselves into a whole, uncracked glass, is so unlikely that it never occurs spontaneously in practice. However, if the motion of the individual shards, as they fly apart, were exactly reversed, they would indeed fly back together and, at least for an instant, successfully reassemble the wine glass. Time reversal symmetry is exact in Newtonian physics, but it is not obeyed in thermodynamics.
Credit: BBC Worldwide/GIPHY

And, to be clear, that’s what the “null hypothesis” is: that the Universe is 100% reductionist. That means a suite of things.

  • That all structures that are built out of atoms and their constituents — including molecules, ions, and enzymes — can be described based on the fundamental laws of nature and the component structures that they’re made out of.
  • That all larger structures and processes that occur between those structures, including all chemical reactions, don’t require anything more than those fundamental laws and constituents.
  • That all biological processes, from biochemistry to molecular biology and beyond, as complex as they might appear to be, are truly just the sum of their parts, even if each “part” of a biological system is remarkably complex.
  • And that everything that we regard as “higher functioning,” including the workings of our various cells, organs, and even our brains, doesn’t require anything beyond the known physical constituents and laws of nature to explain.

To date, although it shouldn’t be controversial to make such a statement, there is no evidence for the existence of any phenomenon that falls outside of what reductionism is capable of explaining.


The Al Naslaa rock formation, located in Saudi Arabia, is made of high-density sedimentary rock and shows significant evidence of weathering and erosion. However, the pedestal beneath it has eroded more quickly, the petroglyphs upon it are thousands of years old, and the extremely smooth fissure down its center is not yet fully explained.
Credit: OnPoint TV/YouTube

How “apparent emergence” is readily explained by reductionism

For some properties inherent to complex systems, it’s pretty easy to explain why they exist as they do. The mass (or weight, if you prefer to use scales) of a macroscopic object is, quite simply, the sum of the masses of the components that make it up, minus the mass lost to the energy binding those components together, via Einstein’s E = mc².
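As a concrete illustration of that bookkeeping (a worked example using standard, rounded particle masses, not figures taken from this article), consider helium-4:

```latex
% Mass defect of helium-4: the bound whole weighs less than the sum of its parts.
% All masses in MeV/c^2, rounded to standard tabulated values.
\begin{align*}
  2m_p + 2m_n &= 2(938.27) + 2(939.57) = 3755.68 \\
  m_{^{4}\mathrm{He}} &\approx 3727.38 \\
  \Delta m\,c^{2} &= 3755.68 - 3727.38 \approx 28.3~\mathrm{MeV}
\end{align*}
% About 0.75% of the constituents' mass is carried away as binding energy:
% exactly the E = mc^2 accounting described above.
```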

For other properties, it’s not necessarily such an easy task, but it has been accomplished. We can explain how thermodynamic quantities like heat, temperature, entropy, and enthalpy emerge from a complex, large-scale ensemble of particles. We can explain the properties of many molecules through the science of quantum chemistry, which again can be derived directly from the underlying fundamental laws. We can use those same fundamental laws to understand — although the computing power required is immense — how various molecules, such as peptides and proteins, fold into their equilibrium configurations, and how they can also wind up in metastable states.
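As a toy illustration of the temperature case (a minimal sketch with an idealized monatomic gas; the setup and numbers are chosen purely for illustration), nothing beyond particle masses and velocities is needed to recover the macroscopic temperature, via equipartition:

```python
import numpy as np

# Reductionism in miniature: "temperature" is not an extra ingredient added to
# a gas. Given only the microscopic state, it can be read off from the mean
# kinetic energy, since (3/2) k_B T = <(1/2) m v^2> for a monatomic ideal gas.

k_B = 1.380649e-23          # Boltzmann constant, J/K
m = 6.6335209e-26           # mass of an argon atom, kg
T_true = 300.0              # the temperature we secretly put in, K

rng = np.random.default_rng(0)
# Each velocity component of an ideal gas is Gaussian with variance k_B * T / m.
sigma = np.sqrt(k_B * T_true / m)
velocities = rng.normal(0.0, sigma, size=(1_000_000, 3))   # a million atoms

mean_kinetic_energy = 0.5 * m * np.mean(np.sum(velocities**2, axis=1))
T_recovered = 2.0 * mean_kinetic_energy / (3.0 * k_B)

print(f"recovered temperature: {T_recovered:.2f} K")   # ~300 K, from the particles alone
```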

And then there are properties that we cannot fully explain, but for which we are also incapable of making robust predictions about what we should expect to see under those conditions. These “hard problems” often include systems that are far too complex to model with current technology, such as human consciousness.

Then-graduate student Chao He in front of the gas chamber in the Horst planetary lab at Johns Hopkins, which recreates conditions suspected to exist in the hazes of exoplanet atmospheres. By subjecting it to conditions designed to mimic those induced by ultraviolet emissions and plasma discharges, researchers work toward the emergence of organics, and life, from non-life.
Credit: Chanapa Tantibanchachai/Johns Hopkins University

In other words, what appears to be emergent to us today, with our present limitations on what we can compute, may someday be describable in purely reductionist terms. Many systems that once defied a reductionist description have, with better models (better choices of what to pay attention to) and the advent of improved computing power, now been successfully described in precisely that fashion. Even many seemingly chaotic systems can, in fact, be predicted to whatever accuracy we choose, so long as enough computational resources are available.
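As one toy example of that "more computation buys a longer prediction horizon" trade-off (a sketch, not drawn from the article; the r = 4 logistic map stands in here for a chaotic system), arbitrary-precision arithmetic lets a deterministic chaotic orbit be tracked as far out as we are willing to pay for:

```python
from mpmath import mp, mpf

def logistic_orbit(x0: str, steps: int, digits: int):
    """Iterate x -> 4x(1-x) with `digits` decimal digits of working precision."""
    mp.dps = digits          # mpmath's global working precision
    x = mpf(x0)
    orbit = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        orbit.append(x)
    return orbit

# The same orbit computed at two precisions; the higher-precision run serves as
# a stand-in for "ground truth."
coarse = logistic_orbit("0.1", 300, digits=50)
fine   = logistic_orbit("0.1", 300, digits=400)

# Chaos at r = 4 costs roughly 0.3 decimal digits of predictability per step, so
# the 50-digit run tracks the true orbit for about 160 steps before drifting,
# while more digits (i.e., more computation) push that horizon out further.
for n in (50, 150, 250):
    print(n, mp.nstr(abs(coarse[n] - fine[n]), 3))
```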

Yes, we can’t rule out non-reductionism, but wherever we’ve been able to make robust predictions for what the fundamental laws of nature do imply for large-scale, complex structures, they’ve been in agreement with what we’ve been able to observe and measure. The combination of the known particles that make up the Universe and the four fundamental forces through which they interact has been sufficient to explain, from atomic to stellar scales and beyond, everything we’ve ever encountered in this Universe. The existence of systems that are too complex to predict with current technology is not an argument against reductionism.


Many have argued, unsuccessfully, that the evolution of a complex organ like the human eye could not have occurred through natural processes alone. And yet the eye has evolved naturally and independently in many different organisms, a large number of times. Asserting the need for something supernatural at an intermediate scale in the Universe is fundamentally antithetical to the process of science, and is likely to be proven unnecessary and extraneous as science continues to advance.
Credit: Venti Views / Unsplash

The God-of-the-gaps nature of non-reductionism

But it is true that resorting to non-reductionism — or the notion that completely novel properties will emerge within a complex system that cannot be derived from the interactions of its constituent parts — is tantamount, at this point in time, to a God-of-the-gaps argument. It basically says, “Well, we know how things behave on a certain scale or at a certain time, and we know how they behaved on a smaller scale or at an earlier time, but we can’t fill in all the steps to get from that small scale/early time to understand how the large scale/later time behavior comes about, and therefore, I’m going to insert the possibility that something magical, divine, or otherwise non-physical comes into play.”

Although this is an assertion that is difficult or even impossible to disprove, it’s one that has not only zero, but negative scientific value. The whole process of science involves investigating the Universe with the tools we have at our disposal for investigating reality, and determining the best physical model, description, and set of conditions that describes that reality. What a fool’s errand it is to assert “maybe we need more than our current best model to describe reality” when:

  • we don’t even have the computational or modeling power necessary to put our current model to the test,
  • and where, if you do insert something magical, divine, or non-physical, these are precisely the regimes in which science is likely, in the very near future, to show that such an intervention is wholly unnecessary.

If life began with a random peptide that could metabolize nutrients/energy from its environment, replication could then ensue from peptide-nucleic acid coevolution. Here, DNA-peptide coevolution is illustrated, but it could work with RNA or even PNA as the nucleic acid instead. Asserting that a “divine spark” is needed for life to arise is a classic “God-of-the-gaps” argument, but asserting that we know exactly how life arose from non-life is also a fallacy. These conditions, including rocky planets with these molecules present on their surfaces, likely existed within the first 1-2 billion years after the Big Bang.
Credit: A. Chotera et al., Chemistry Europe, 2018

If you either believe or simply want to believe that there’s more to the Universe than the sum of its physical parts, science has nothing meaningful to say on the matter; it is completely agnostic about that possibility. However, if you want to believe that a description of the physical phenomena that exist in this Universe requires either:

  • something more than the physical laws that govern the Universe,
  • and/or something other than the physical objects that exist within the Universe,

perhaps the least successful decision you can make is to insert whatever “metaphysical” entities you believe in in a place where science, once it advances just a little bit further, can disprove the need for them entirely.

I have never understood why one would be so willing to assert the existence of the divine or supernatural in such a small place: a place where it would be so easy to falsify the need for it. Why would you believe, while inhabiting a Universe that’s so vast, that something beyond the capability of our physical laws to describe would primarily appear in such an extraneous, unnecessary place? If the Universe, as we observe and measure it, isn’t able to be described by what’s physically present within it under the known laws of reality, shouldn’t we determine that to actually be the case before resorting to a non-scientific, supernatural explanation?

A fruit fly brain as viewed through a confocal microscope. The workings of the brain of any animal are not fully understood, but it’s eminently plausible that electrical activity in the brain and throughout the body is responsible for what we know as “consciousness,” and furthermore, that human beings are not so unique among animals or even other living creatures in possessing it.
Credit: Garaulet et al., Developmental Cell, 2020

Final thoughts

The fundamental components of our physical Universe, along with the fundamental laws that govern all of existence, represent the most successful scientific picture of the Universe in all of history. Never before, from the tiniest subatomic particles to macroscopic phenomena to cosmic scales, have we ever had such a successful way of describing our physical reality as we do today. The idea of reductionism is simple: that physical phenomena can be explained by the complex combination of the objects that exist within the Universe, governed by the same physical laws that govern all physical systems within the Universe.

That’s our default starting point: the “null hypothesis” for what reality is.

If that’s not your starting point, it’s my duty to inform you that the burden of proof for your worldview — one that includes a new set of fundamental forces, new entities, new interactions, or the intervention of the supernatural — lies with you. You must show that the null hypothesis is insufficient to describe a phenomenon where its predictions are clear and in conflict with what can be observed and/or measured. This is a very high bar to clear, and an endeavor at which no opponent of reductionism has ever succeeded. We may not understand everything there is to know about all complex phenomena: that much is true. The more complex a phenomenon is, the harder it is to derive all of its properties from the fundamentals, but that’s not the same as having evidence that something more is required.

In science, however, we’re never satisfied with a statement that simply says “this problem is hard, so maybe the answer lies beyond science.” That’s not how progress is made. The only way we ever move forward is by conducting more and better science, relentlessly, until we figure out how it all works.

This article first appeared in August of 2022. It was updated in October of 2025.

This article, “Yes, reductionism can explain everything in the whole Universe,” is featured on Big Think.


Watch My Skeleton Dance for Science


Stripped down to compression shorts and bejeweled with reflective motion capture dots scattered across my body, I recently danced before an array of cameras as a team of researchers looked on. Ostensibly, I was there to help them understand lower back pain, an affliction with which I have lately become all too familiar. As part of their analysis, they capture everyday body movements performed by me and other study participants, digitizing our skeletons and seeking patterns in the pained bends and stretches of our aching bones.


I was not asked to dance. I was told simply to bend and stretch. But for me, being transformed into a cartoon skeleton was too great an opportunity to constrain to dispassionate data collection. And being in the midst of the spooky season, I thought it appropriate to mimic a classic routine, first performed by icons of early animation. Luckily my scientific handlers indulged my whimsy. Here, I present my interpretation of 1929’s The Skeleton Dance, Walt Disney’s first Silly Symphony animated short.

First the original:


And now my take:

This all took place in the lab of Linda Van Dillen, a physical therapy researcher at the nearby Washington University in St. Louis, whose study I enrolled in after tweaking my back a few months ago while doing household chores.

Van Dillen and her colleagues seek to alleviate the suffering of people who have what they term low back pain. Worldwide, it is the leading cause of disability, afflicting an estimated 60 to 80 percent of adults at some point in their lives. By studying people with varying degrees of aches like mine for several months after that first twinge of pain, her team seeks to characterize the transition from acute to chronic low back pain. If they can identify similarities in the ways that people with acute and chronic back pain move, doctors and therapists might be able to target specific areas of the body before that initial pang becomes a longstanding ache, Van Dillen recently told me in an email.

STOOPING TO SCHLEP: Researchers captured this digital recreation of my body bending over to pick up a box, one of the motions they analyze in the search for characteristic patterns in people with low back pain. NB: They assured me that my actual pelvis is not that robust.

So, every month or so, my fellow study subjects and I travel to WashU to help Van Dillen’s team capture the characteristic movements of people in the grips of such aches and pains. When we get there, researchers cover our legs, pelvises, chests, and a single arm in reflective dots and have us perform simple movements such as bending down to pick up a box. They record everything with eight high-tech motion-capture cameras of the sort used in Hollywood blockbusters that bring impossible creatures to life on the big screen. Between sessions, we answer surveys that track not only our pain symptoms but also other biological and psychological factors related to our lingering discomfort.

The scientists will later analyze the data they are collecting, with the goal of improving and standardizing the evaluation process physicians use to diagnose and care for those who suffer from low back pain. By following our progress over the course of a year, Van Dillen and her team hope to emerge with a better method for early detection and treatment of such pain. These improvements could stop acute low back pain before it becomes a chronic problem, reducing “healthcare spending for this often costly, long-term condition,” she said.

A worthy scientific goal, for sure. But also a perfect opportunity for a bit of fun. Enjoy my dancing skeleton and please let it remind you—as it does me—that un-seriousness is sometimes the best medicine.



Lead image: Still from 1929’s The Skeleton Dance, the first Silly Symphony from Walt Disney


The explosion of choice


Photo of a vibrant snack bar with colourful signs advertising beer, pizza, sausages and ice cream.

It’s only in recent history that freedom has come to mean having a huge array of choices in life. Did we take a wrong turn?

- by Sophia Rosenfeld

Read on Aeon


Meet the man building a starter kit for civilization


You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025. 

This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS). It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. 

Jakubowski immigrated to the US from Slupca, Poland, as a child. His first encounter with what he describes as the “prosperity of technology” was the vastness of the American grocery store. Seeing the sheer quantity and variety of perfectly ripe produce cemented his belief that abundant, sustainable living was within reach in the United States. 

With a bachelor’s degree from Princeton and a doctorate in physics from the University of Wisconsin, Jakubowski had spent most of his life in school. While his peers kick-started their shiny new corporate careers, he followed a different path after he finished his degree in 2003: He bought a tractor to start a farm in Maysville, Missouri, eager to prove his ideas about abundance. “It was a clear decision to give up the office cubicle or high-level research job, which is so focused on tiny issues that one never gets to work on the big picture,” he says. But in just a short few months, his tractor broke down—and he soon went broke. 

Every time his tractor malfunctioned, he had no choice but to pay John Deere for repairs—even if he knew how to fix the problem on his own. John Deere, the world’s largest manufacturer of agricultural equipment, continues to prohibit farmers from repairing their own tractors (except in Colorado, where farmers were granted a right to repair by state law in 2023). Fixing your own tractor voids any insurance or warranty, much like jailbreaking your iPhone. 

Today, large agricultural manufacturers have centralized control over the market, and most commercial tractors are built with proprietary parts. Every year, farmers pay $1.2 billion in repair costs and lose an estimated $3 billion whenever their tractors break down, entirely because large agricultural manufacturers have lobbied against the right to repair since the ’90s. Currently there are class action lawsuits involving hundreds of farmers fighting for their right to do so.

“The machines own farmers. The farmers don’t own [the machines],” Jakubowski says. He grew certain that self-sufficiency relied on agricultural autonomy, which could be achieved only through free access to technology. So he set out to apply the principles of open-source software to hardware. He figured that if farmers could have access to the instructions and materials required to build their own tractors, not only would they be able to repair them, but they’d also be able to customize the vehicles for their needs. Life-changing technology should be available to all, he thought, not controlled by a select few. So, with an understanding of mechanical engineering, Jakubowski built his own tractor and put all his schematics online on his platform Open Source Ecology.  

That tractor Jakubowski built is designed to be taken apart. It’s a critical part of the GVCS, a collection of plug-and-play machines that can “build a thriving economy anywhere in the world … from scratch.” The GVCS includes a 3D printer, a self-contained hydraulic power unit called the Power Cube, and more, each designed to be reconfigured for multiple purposes. There’s even a GVCS micro-home. You can use the Power Cube to power a brick press, a sawmill, a car, a CNC mill, or a bioplastic extruder, and you can build wind turbines with the frames that are used in the home. 

Jakubowski compares the GVCS to Lego blocks and cites the Linux ecosystem as his inspiration. In the same way that Linux’s source code is free to inspect, modify, and redistribute, all the instructions you need to build and repurpose a GVCS machine are freely accessible online. Jakubowski envisions a future in which the GVCS parallels the Linux infrastructure, with custom tools built to optimize agriculture, construction, and material fabrication in localized contexts. “The [final form of the GVCS] must be proven to allow efficient production of food, shelter, consumer goods, cars, fuel, and other goods—except for exotic imports (coffee, bananas, advanced semiconductors),” he wrote on his Open Source Ecology wiki. 

The ethos of GVCS is reminiscent of the Whole Earth Catalog, a countercultural publication that offered a combination of reviews, DIY manuals, and survival guides between 1968 and 1972. Founded by Stewart Brand, the publication had the slogan “Access to tools” and was famous for promoting self-sufficiency. It heavily featured the work of R. Buckminster Fuller, an American architect known for his geodesic domes (lightweight structures that can be built using recycled materials) and for coining the term “ephemeralization,” which refers to the ability of technology to let us do more with less material, energy, and effort. 

The schematics for Marcin Jakubowski’s designs are all available online.
COURTESY OF OPEN SOURCE ECOLOGY

Jakubowski owns the publication’s entire printed output, but he offers a sharp critique of its legacy in our current culture of tech utopianism. “The first structures we built were domes. Good ideas. But the open-source part of that was not really there yet—Fuller patented his stuff,” he says. Fuller and the Whole Earth Catalog may have popularized an important philosophy of self-reliance, but to Jakubowski, their failure to advocate for open collaboration stopped the ultimate vision of sustainability from coming to fruition. “The failure of the techno-utopians to organize into a larger movement of collaborative, open, distributed production resulted in a miscarriage of techno-utopia,” he says. 

With a background in physics and an understanding of mechanical engineering, Marcin Jakubowski built his own tractor.
COURTESY OF OPEN SOURCE ECOLOGY

Unlike software, hardware can’t be infinitely reproduced or instantly tested. It requires manufacturing infrastructure and specific materials, not to mention exhaustive documentation. There are physical constraints—different port standards, fluctuations in availability of materials, and more. And now that production chains are so globalized that manufacturing a hot tub can require parts from seven different countries and 14 states, how can we expect anything to be replicable in our backyard? The solution, according to Jakubowski, is to make technology “appropriate.” 

Appropriate technology is technology that’s designed to be affordable and sustainable for a specific local context. The idea comes from Gandhi’s philosophy of swadeshi (self-reliance) and sarvodaya (upliftment of all) and was popularized by the economist Ernst Friedrich “Fritz” Schumacher’s book Small Is Beautiful, which discussed the concept of “intermediate technology”: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.” Because different environments operate at different scales and with different resources, it only makes sense to tailor technology for those conditions. Solar lamps, bikes, hand-­powered water pumps—anything that can be built using local materials and maintained by the local community—are among the most widely cited examples of appropriate technology. 

This concept has historically been discussed in the context of facilitating economic growth in developing nations and adapting capital-intensive technology to their needs. But Jakubowski hopes to make it universal. He believes technology needs to be appropriate even in suburban and urban places with access to supermarkets, hardware stores, Amazon deliveries, and other forms of infrastructure. If technology is designed specifically for these contexts, he says, end-to-end reproduction will be possible, making more space for collaboration and innovation. 

What makes Jakubowski’s technology “appropriate” is his use of reclaimed materials and off-the-shelf parts to build his machines. By using local materials and widely available components, he’s able to bypass the complex global supply chains that proprietary technology often requires. He also structures his schematics around concepts already familiar to most people who are interested in hardware, making his building instructions easier to follow.

Everything you need to build Jakubowski’s machines should be available around you, just as everything you need to know about how to repair or operate the machine is online—from blueprints to lists of materials to assembly instructions and testing protocols. “If you’ve got a wrench, you’ve got a tractor,” his manual reads.  

This spirit dates back to the ’70s, when the idea of building things “moved out of the retired person’s garage and into the young person’s relationship with the Volkswagen,” says Brand. He references John Muir’s 1969 book How to Keep Your Volkswagen Alive: A Manual of Step-by-Step Procedures for the Compleat Idiot and fondly recalls how the Beetle’s simple design and easily swapped parts made it common for owners to rebody their cars, combining the chassis of one with the body of another. He also mentions the impact of the Ford Model T cars that, with a few extra parts, were made into tractors during the Great Depression. 

For Brand, the focus on repairability is critical in the modern context. There was a time when John Deere tractors were “appropriate” in Jakubowski’s terms, Brand says: “A century earlier, John Deere took great care to make sure that his plowshares could be taken apart and bolted together, that you can undo and redo them, replace parts, and so on.” The company “attracted insanely loyal customers because they looked out for the farmers so much,” Brand says, but “they’ve really reversed the orientation.” Echoing Jakubowski’s initial motivation for starting OSE, Brand insists that technology is appropriate to the extent that it is repairable. 

Even if you can find all the parts you need from Lowe’s, building your own tractor is still intimidating. But for some, the staggering price advantage is reason enough to take on the challenge: A GVCS tractor costs $12,000 to build, whereas a commercial tractor averages around $120,000 to buy, not including the individual repairs that might be necessary over its lifetime at a cost of $500 to $20,000 each. And gargantuan though it may seem, the task of building a GVCS tractor or other machine is doable: Just a few years after the project launched in 2008, more than 110 machines had been built by enthusiasts from Chile, Nicaragua, Guatemala, China, India, Italy, and Turkey, just to name a few places. 

Of the many machines developed, what’s drawn the most interest from GVCS enthusiasts is the one nicknamed “The Liberator,” which presses local soil into compressed earth blocks, or CEBs—a type of cost- and energy-­efficient brick that can withstand extreme weather conditions. It’s been especially popular among those looking to build their own homes: A man named Aurélien Bielsa replicated the brick press in a small village in the south of France to build a house for his family in 2018, and in 2020 a group of volunteers helped a member of the Open Source Ecology community build a tiny home using blocks from one of these presses in a fishing village near northern Belize. 

""
The CEB press, nicknamed “The Liberator,” turns local soil into energy-efficient compressed earth blocks.
COURTESY OF OPEN SOURCE ECOLOGY

Jakubowski recalls receiving an email about one of the first complete reproductions of the CEB press, built by a Texan named James Slate, who ended up starting a business selling the bricks: “When [James] sent me a picture [of our brick press], I thought it was a Photoshopped copy of our machine, but it was his. He just downloaded the plans off the internet. I knew nothing about it.” Slate described having a very limited background in engineering before building the brick press. “I had taken some mechanics classes back in high school. I mostly come from an IT computer world,” he said in an interview with Open Source Ecology. “Pretty much anyone can build one, if they put in the effort.” 

Andrew Spina, an early GVCS enthusiast, agrees. Spina spent five years building versions of the GVCS tractor and Power Cube, eager to create means of self-­sufficiency at an individual scale. “I’m building my own tractor because I want to understand it and be able to maintain it,” he wrote in his blog, Machining Independence. Spina’s curiosity gestures toward the broader issue of technological literacy: The more we outsource to proprietary tech, the less we understand how things work—further entrenching our need for that proprietary tech. Transparency is critical to the open-source philosophy precisely because it helps us become self-sufficient. 

Since starting Open Source Ecology, Jakubowski has been the main architect behind the dozens of machines available on his platform, testing and refining his designs on a plot of land he calls the Factor e Farm in Maysville. Most GVCS enthusiasts reproduce Jakubowski’s machines for personal use; only a few have contributed to the set themselves. Of those select few, many made dedicated visits to the farm for weeks at a time to learn how to build Jakubowski’s GVCS collection. James Wise, one of the earliest and longest-term GVCS contributors, recalls setting up tents and camping out in his car to attend sessions at Jakubowski’s workshop, where visiting enthusiasts would gather to iterate on designs: “We’d have a screen on the wall of our current best idea. Then we’d talk about it.” Wise doesn’t consider himself particularly experienced on the engineering front, but after working with other visiting participants, he felt more emboldened to contribute. “Most of [my] knowledge came from [my] peers,” he says. 

Jakubowski’s goal of bolstering collaboration hinges on a degree of collective proficiency. Without a community skilled with hardware, the organic innovation that the open-source approach promises will struggle to bear fruit, even if Jakubowski’s designs are perfectly appropriate and thoroughly documented.

“That’s why we’re starting a school!” said Jakubowski, when asked about his plan to build hardware literacy. Earlier this year, he announced the Future Builders Academy, an apprenticeship program where participants will be taught all the necessary skills to develop and build the affordable, self-sustaining homes that are his newest venture. Seed Eco Homes, as Jakubowski calls them, are “human-sized, panelized” modular houses complete with a biodigester, a thermal battery, a geothermal cooling system, and solar electricity. Each house is entirely energy independent and can be built in five days, at a cost of around $40,000. More than eight of these houses have been built across the country, and Jakubowski himself lives in the earliest version of the design. Seed Eco Homes are the culmination of his work on the GVCS: The structure of each house combines parts from the collection and embodies its modular philosophy. The venture represents Jakubowski’s larger goal of making everyday technology accessible. “Housing [is the] single largest cost in one’s life — and a key to so much more,” he says.

The final goal of Open Source Ecology is a “zero marginal cost” society, where producing an additional unit of a good or service costs little to nothing. Jakubowski’s interpretation of the concept (popularized by the American economist and social theorist Jeremy Rifkin) assumes that by eradicating licensing fees, decentralizing manufacturing, and fostering collaboration through education, we can develop truly equitable technology that allows us to be self-sufficient. Open-source hardware isn’t just about helping farmers build their own tractors; in Jakubowski’s view, it’s a complete reorientation of our relationship to technology. 

In the first issue of the Whole Earth Catalog, a key piece of inspiration for Jakubowski’s project, Brand wrote: “We are as gods and we might as well get good at it.” In 2007, in a book Brand wrote about the publication, he corrected himself: “We are as gods and have to get good at it.” Today, Jakubowski elaborates: “We’re becoming gods with technology. Yet technology has badly failed us. We’ve seen great progress with civilization. But how free are people today compared to other times?” Cautioning against our reliance on the proprietary technology we use daily, he offers a new approach: Progress should mean not just achieving technological breakthroughs but also making everyday technology equitable. 

“We don’t need more technology,” he says. “We just need to collaborate with what we have now.”

Tiffany Ng is a freelance writer exploring the relationship between art, tech, and culture. She writes Cyber Celibate, a neo-Luddite newsletter on Substack. 
