Lesson plans that teachers devise with commonly used artificial intelligence chatbots are no more engaging, immersive, or effective than those created with existing techniques, we found in our recent study. The AI-generated civics lesson plans we analyzed also left out opportunities for students to explore the stories and experiences of traditionally marginalized people.
The allure of generative AI as a teaching aid has caught the attention of educators. A Gallup survey from September 2025 found that 60 percent of K-12 teachers are already using AI in their work, with the most common reported use being teaching preparation and lesson planning.
Without the assistance of AI, teachers might spend hours every week crafting lessons for their students. With AI, time-stretched teachers can generate detailed lesson plans featuring learning objectives, materials, activities, assessments, extension activities, and homework tasks in a matter of seconds.
I regret to inform you that certain political pundits and education reformers are calling for the return of "high-stakes testing." Or at least, in a recent podcast published by The Atlantic, David Frum talked to former Secretary of Education Margaret Spellings, and this dismal idea was (once again, as always) posited as the fix for "the decade-long decline in U.S. student achievement." You know that the elite are fresh out of ideas when they have to wheel out a member of George W. Bush's cabinet to talk about policy proposals they promise (this time, really) will revitalize the country.
That the incessant demands of standardized testing – demands on teachers and students – have contributed to people's growing dissatisfaction with public education is largely waved away in this discussion. Testing – indeed, all of schooling – is clearly something we do to people rather than a space and a practice constructed with, by, and for people.
This is some dangerous bullshit
I'd wager that a return to high-stakes testing would only serve to further undermine our confidence in schools. But perhaps that is the plan.
As I argued in Teaching Machines, it's impossible to separate the history of standardized testing from the history of education technology. Although education psychologists in the early twentieth century designed machines they said could teach, they also built machines that could test – sometimes, as with Sidney Pressey's device, these were one and the same. Standardized testing, of course, had grown in popularity in this period – it was seen as one of the key ways to measure and rank student aptitude and intelligence – particularly after it was used in World War I to evaluate military recruits. But while the Army Alpha and other early standardized tests administered in schools were scored by hand, new machinery promised calculation, computation at scale. It's more objective; it's more scientific; it's more efficient – you can see these claims still echoed in Frum and Spellings's conversation.
It should be no surprise, if we consider the long, intertwined history of testing and technology, that the big surge in school expenditure for one-to-one computing in the 2010s came from legislation and funding that tied tech purchases to testing objectives. Oh sure, folks probably prefer narratives about "networked learning" and "future readiness" and whatever other clichés the PR taught them to parrot; but the impetus for computers in schools has mostly revolved around assessment. That computer usage also means a constant stream of data that can be used to build algorithms that can be sold back to schools as "personalized learning" (then) or "AI" (now) is another boon, but again, one that education psychologists were already fantasizing about in the 1920s.
Back in 2012, there was a flurry of excitement about the use of automated grading algorithms in standardized testing. Perhaps you recall the study, published by University of Akron's Dean of the College of Education Mark Shermis along with Ben Hamner, a data scientist from a company called Kaggle, that claimed that robots and humans graded pretty much the same: "overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre."
Or perhaps you don't recall, because a lot of folks now write as though we're facing automated essay grading for the very first time. Honestly, it makes me feel a little crazy when you do that, guys. (Or, at the very least, a little old.)
I realize that much is invested in the story that this moment is the "AI" moment, but I'd urge you to think about the ways in which – in education at least – standardized testing and education technology have worked hand-in-hand to shape thinking, reading, and writing for decades now (derogatory), to reduce students to data points, to see teaching as transmission, and to see grading as a feedback mechanism – everything, everyone reduced to a machine-metaphor.
Who looks at the world and thinks that what schools need – what democracy needs – is more testing and more technology (more compliance, more punishment, more discipline, more surveillance, etc.)?!
Oh.
When a beer brewing competition introduced an "AI"-based judging tool in the middle of a competition, folks were pissed. "There’s so much subjectivity to it, and to strip out all of the humanity from it is a disservice," one beer judge told 404 Media.
I guess the grownups in the craft beer world just don't like accountability, am I right Margaret?
Or maybe those grownups were suspicious that this was a plan to extract data, extract value from the craft brew community so that someone else could profit, so that their labor could be replaced.
But their blatant stupidity is why they’re so popular. It’s the uncomfortable truth underpinning pretty much everything that’s happened in pop culture — including politics — since the 2010s social media revolution. The online platforms that created our new world, run on likes and shares and comments and views, reshaped the marketplace of ideas into an attention economy. One that, like a real economy, is full of very popular garbage. And, also like a real economy, is now so vast and important that it’s virtually impossible to change it. If you want access to it, you better get comfortable making lowest-common-denominator bullshit in front of a camera. And, of course, it’s a lot easier to feel good about doing that if you’re an idiot.
A brief aside here, but this is the big bet that AI companies are making right now. That our tastes have grown so rotten and atrophied that we won’t even care when our feeds start filling up with slop.
Although there aren't any official numbers to back this up, a reporter from Business Insider claims that Sora 2, OpenAI's new video-slop feed, is "overrun with teenage boys" and that a lot of what's generated and shared is "edgelord humor." This would follow the trajectory of much of computing, much of the Internet, of course – designed for, catering to young men; encouraging violent, racist, misogynist content while insisting it's all harmless good fun. (A recent paper in Gender and Education examines the ways in which the "manosphere" infiltrates classroom technologies – the "affective infrastructure" – as well.)
Some of this makes it hard not to frown at the mindlessness of the latest "6, 7" meme, I suppose. It emerged on TikTok; and it sure seems like an example of "brain rot." Certainly The Wall Street Journal had a good hand-wring over it this week, trying so very hard to address the concern – it's "making life hell for teachers"! – and find some meaning somewhere, anywhere in the numbers.
I would say I'm passionately ambivalent about this sort of thing. I mean, yes, it's dumb. But saying and doing dumb things – and finding this to be peak humor, particularly when adults don't laugh – is one of the great joys of childhood. The folklore of children and teens has always involved learning and testing and subverting boundaries, figuring out and exploring their power, their bodies, their emotions, their sociality.
Technology (and, of course, COVID) has altered some of this, no doubt. It has changed how children play (increasingly "alone together," as Sherry Turkle put it). It has altered the speed with which rhymes and sayings are spread. I mean, there's already a South Park episode.
In some ways, I worry a lot less about the goings-on in the subculture of children and more about the goings-on in the dominant culture of adults.
And there, there is a larger cultural shift at play – not simply towards "brain rot," but towards "trolling" – the kind of provocations that "6, 7" might gesture towards, but are more fully (more destructively) enacted as in-group/out-group provocations. Trolling is a cultural embrace of harassment. It's an embrace of manipulation. And then, when followed by the dismissive shrug from the troll, you're the asshole for getting upset.
The "AI" bubble – it's giving Enron vibes, Dave Karpf rightly observes. It reeks of a kind of radical disregard for truth, for feelings, for other people. And maybe this makes those childhood pranks and jokes feel a lot less benign. But kids aren't the problem here.
More bad news:
"Campus, an online two-year college backed by Sam Altman and Shaquille O’Neal, has acquired AI learning platform Sizzle AI for an undisclosed amount," Inside Higher Ed reports.
"A hack impacting Discord’s age verification process shows in stark terms the risk of tech companies collecting users’ ID documents. Now the hackers are posting peoples’ IDs and other sensitive information online," says 404 Media.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as this is my full-time job – although fair warning: I am taking November and December off from sending emails on a regular basis. (I'm sure I'll still write and send something.) I've got a bunch of talks I'm giving and a book proposal to write. (Hi Susan.)
This week's bird is the plate-billed mountain toucan, which lives in the high-mountain forests of the Andes. Toucans are arguably one of the world's most recognizable birds, due to their large and colorful bills. As such, they've been frequently used in advertising – notably (and related to today's stories) in iconic ads for some fairly middling beer and children's cereal. The plate-billed mountain toucan is considered "near threatened" as its habitat is being lost to deforestation. Good thing "AI" is going to fix climate change for us.
An adult tunicate chilling on a rock after consuming much of its own brain.
Thank you for reading The Garden of Forking Paths. This edition is free, but if you’d like to support my work and keep my research and writing sustainable, please consider upgrading. It’s only $50 for a year ($4/month) and it also gives you full access to 215+ essays in the archive. Every bit helps.
One of humanity’s distant ancestors—though one that’s unlikely to show up at your next family reunion—is the tunicate, also known as the sea squirt. In its early stage of life, it is an active, little tadpole-like explorer, propelling itself through its marine world.
As the creature matures, however, that free-spirited life gives way to a sedentary one, in which the sea squirt attaches itself to a suitable rock. It will never move again. Its body becomes little more than a passive sack, no longer searching or exploring, just waiting to see what drifts by, day in, and day out, for the rest of its life.
This is, as you might imagine, not a particularly intellectually demanding lifestyle. The sea squirt’s brain therefore becomes superfluous, an unnecessary waste of energy.
So, it does the logical thing: it eats its own brain, and gets on with surviving on whatever floats past.1 Even without a full brain, evolution has given the sea squirt the wisdom to realize when something no longer serves a purpose.
Many humans, by contrast, now mimic the sea squirt’s lifestyle on social media, but have not yet recognized an unfortunate truth: that their brains have become superfluous.
II: Origins
When I was 18 years old, I joined TheFacebook.com, one of the earliest attempts by a humanoid creature known as Mark Zuckerberg to socially fit in.
TheFacebook emerged after Zuck’s first creation, FaceMash, in which he hacked into Harvard’s databases—“child’s play,” he boasted at the time—and then subjected unknowing young women to being digitally judged by their peers, adjudicating who was hotter.
In written comments at the time, Zuck, clearly a chiseled supermodel himself, noted how ugly some of them were, saying “I almost want to put some of these faces next to pictures of farm animals and have people vote on which is more attractive.” (Perhaps we should rethink a society in which this creature is one of the richest and most powerful lifeforms on the planet.)
After backlash to FaceMash grew and he took the site offline, Zuckerberg, in his first—but certainly not last—inauthentic apology wrote: “I’m not willing to risk insulting anyone.”
When I joined TheFacebook in 2004, there were around 50,000 users globally. It was only open to students at a smattering of elite colleges and universities. It was a gigantic, unknown social experiment, the first frontier of a new way of connecting to people.
It was exciting.
But nobody had a clue what to do with it. Beyond “friending” someone—a bold step, to be sure—one could also create groups. This lent itself to playful experimentation.
At my college, there was a relatively ordinary student named Luke Nobel-Plimpton (names have been slightly modified to protect the victim). His friends, recognizing an excellent prank opportunity, created a closed Facebook group called “People Who Are Not Luke Nobel-Plimpton.” The group’s membership criterion was singular—and ruthlessly enforced.
Soon, the group spread like wildfire. On a campus of 2,000 students, the group’s ranks swelled to nearly 1,700. Luke Nobel-Plimpton became a campus celebrity; spottings in the wild were celebrated at parties in drunken whispers.
Then, there was the “poke” feature. (Humanoid Zuckerberg acknowledged that he came up with the idea while his carbon-based body was intoxicated.) Nobody knew what to do with it, but it could be tremendously exhilarating when that little notification arrived from someone who was not a close friend. On the flip side, after an impulsive moment, many a student also experienced a novel feature of the human condition unknown to Dickens or Dostoevsky: poke regret.
But TheFacebook in 2004 was also radically different from social media today. In short, it was better, safer, healthier, more interesting, and more human. Why?
First, it was a closed network: you were only exposed to posts from people you were friends with, nobody else. If you got exposed to insane opinions, that was only because your real-world friends were insane. And because everyone knew they were posting for their actual friends, people with secretly insane opinions tended to keep them to themselves, lest they be socially shunned.
Second, there was no algorithm: stuff just showed up as it was posted, without any rewards for posts or photos that were clickbait. There was therefore no premium on attracting eyeballs. Blissfully, there were no influencers (aside from obscure legends like Luke Nobel-Plimpton).
Third, early social media fostered rather than replaced real-world connections. The main reason to join TheFacebook in 2004 was to figure out what parties were happening, when people were meeting up, and what activities were happening on campus. It was therefore a catalyst for social activity; today, it’s a catalyst for loneliness, built to reward the dystopian act of scrolling alone.
Fourth, if you were an insufferable, bad-faith jerk, you tended to lose friends, not gain followers. (I’ve written about the perils of audience capture here, but the current model is particularly toxic for rewarding ratcheting extremism).
In 2025, we’ve gone so far through the looking glass that, as recently highlighted, Meta filed a brief in an antitrust case with the Federal Trade Commission officially arguing that it could not possibly be considered a social media monopoly because it is not even in the business of social media. Here’s what they wrote in their brief:
Today, only a fraction of time spent on Meta’s services—7% on Instagram, 17% on Facebook—involves consuming content from online “friends” (“friend sharing”). A majority of time spent on both apps is watching videos, increasingly short-form videos that are “unconnected”—i.e., not from a friend or followed account—and recommended by AI-powered algorithms Meta developed as a direct competitive response to TikTok’s rise, which stalled Meta’s growth.
In other words, “social” media apps are no longer remotely driven by real-world interactions; they’re just people watching short-form videos and, at best, occasionally interacting with other humans (and, increasingly, with AI bots) in the comments.
We have totally lost the plot.
III: Why do people spend so much time on something that’s so mind-numbingly boring?
The problems of modern social media are well-documented. At its worst, it polarizes people, destroys social trust, amplifies stereotypes, encourages real-world violence, mainstreams extremism, gives hateful politicians direct-to-voter megaphones, makes the world’s most insufferable people rich and famous, splinters a shared sense of reality and, ah yes, helps destroy democracy. (On the other hand, it also has amusing dance videos.)2
For a while, I was a heavy user—particularly on pre-Muskian Twitter. It was too much. I regret how much time I wasted on the site. I now don’t have any social media apps on my phone and I check social media far less often.
But the more I’ve thought about why I’ve made that switch, I’ve realized that my personal aversion to social media has emerged from a more fundamental reaction to it: utter boredom.
As a little experiment, I visited X yesterday, for the first time in a very long time. In order, the feed offered me:
An Elon Musk post comparing any transgender health care to Josef Mengele, the Nazi doctor who happily performed selections for the gas chambers at Auschwitz-Birkenau;
A video of a woman inexplicably smashing a man’s car windshield with a hammer;
A video of a man suffering terribly as an amusement ride malfunctioned, much to the amusement of gleeful monsters in the comments;
Sam Altman posting about how ChatGPT will now be much better because the new version will “respond in a very human-like way, or use a ton of emoji, or act like a friend” alongside an announcement that ChatGPT will soon produce erotica on demand.
Please, just unplug it.
It’s vile, obviously, but it’s also just so predictably boring. The same stuff every day, churned out by the same eyeball chasers, amplified by many of the most loathsome people on the planet, day after day after day.
John Burn-Murdoch, data guru for The Financial Times, has produced an extraordinary chart, showcasing how much of social media today is just tedious, pointless scrolling, watching “content” (from people I would swiftly run away from if I encountered them at a real-life party). The biggest growth areas are unabashedly mind-numbing: “to follow celebrities” and “to fill spare time.” At that point, why not just take a page from the sea squirt playbook and eat your own brain?
Nearly fourteen billion years ago, an improbable universe inexplicably blinked into existence. Life on Earth arose 3.8 billion years ago. And over the ensuing aeons, countless single-celled organisms lived and died, the slow march of evolution later conjuring trilobites and dinosaurs, down the line to chimpanzees and other non-human primates, eventually leading, through infinite accidents and flukes of history, to one species with advanced cognition, extraordinary self-aware consciousness, and an unbridled capacity to experience joy, to gain knowledge and wisdom locked off to all other species, to invent, explore, connect, learn, and to love.
Within that species, there are an infinite number of possible people who could have existed but do not, the unborn ghosts, the sparks of life that never were, and from that infinite realm of possibility, we are the few lucky ones.
And lucky we are, though for most of us, our luck runs out after around 30,000 days on the planet. Precisely how many of those impossibly lucky days of our improbable existence are well-spent filling spare time by following the vapid lives of celebrities?
IV: It’s better to not eat your own brain
Thankfully, some people are wising up to the perils of sea squirt lifestyles. For the first time, social media use is declining globally. But there’s one exception: in North America, it’s still going up, as John Burn-Murdoch again shows:
The global numbers reflect a growing sense I feel when I talk to others: that many now see social media in increasingly negative terms, not just as a corrosive social phenomenon but as a personal blight on their life. To some, it feels like a chore; to others, an addiction they can’t break even though they desperately want to leave it behind.
My suggestion is this: next time you log onto a social media platform, use two tests to evaluate whether it was worth it:
The Sea Squirt Test: Did you need to use your brain?
The Social Test: Did you feel genuine human connection?
If neither test is passed, obliterate that platform from your life. You only live around 30,000 days; today is one of them, and the world is far too fascinating a place to waste any more of them on something so destructively boring.
Thank you for reading The Garden of Forking Paths. If you value my writing and research, please consider upgrading to a paid subscription for $50/year or $4/month. It makes my work sustainable—and you can be a modern Medici, fashioning yourself as a patron of the human creative arts in an era of increasing glittering sludge spewed out by artificial intelligence.
This story is somewhat simplified, as sea squirts still have brains in adulthood, and their behavior and physiology are well adapted to their strategy for survival. In the larval stage, that means swimming around with a tail; in the adult stage, it means becoming a passive feeder. Astonishingly, their hearts do something profoundly weird: every three or four minutes, they reverse the direction of blood flow. They are the only animal known to do this. Here’s a review article on their physiology.
The Washington Post did an interesting experiment, tracking 1,100 people in their TikTok use over a year. The results are really depressing—and they showcase how some of the phenomenon functions as a genuine addiction. Their report is worth reading.
We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.
As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”
Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”
The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says that it’s seeing a significant decline in human traffic to the online encyclopedia, because more people are getting the information that’s on Wikipedia via generative AI chatbots trained on its articles and search engines that summarize them, without actually clicking through to the site.
The Wikimedia Foundation said that this poses a risk to the long-term sustainability of Wikipedia.
“We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” the Foundation’s Senior Director of Product Marshall Miller said in a blog post. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Ironically, while generative AI and search engines are causing a decline in direct traffic to Wikipedia, its data is more valuable to them than ever. Wikipedia articles are some of the most common training data for AI models, and Google and other platforms have for years mined Wikipedia articles to power their snippets and Knowledge Panels, which siphon traffic away from Wikipedia itself.
“Almost all large language models train on Wikipedia datasets, and search engines and social media platforms prioritize its information to respond to questions from their users,” Miller said. “That means that people are reading the knowledge created by Wikimedia volunteers all over the internet, even if they don’t visit wikipedia.org – this human-created knowledge has become even more important to the spread of reliable information online.”
Miller said that in May 2025 Wikipedia noticed unusually high amounts of apparently human traffic originating mostly from Brazil. He didn’t go into details, but explained that this caused the Foundation to update its bot detection systems.
“After making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024,” he said. “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content.”
Miller told me in an email that Wikipedia has policies for third-party bots that crawl its content, such as identifying themselves, following its robots.txt file, and respecting limits on request rates and concurrent requests.
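Part of those crawler policies is machine-checkable: robots.txt is a published standard that a polite bot can consult before every request. Here's a minimal sketch using Python's standard-library `urllib.robotparser` – the robots.txt rules and the bot name are hypothetical examples, not Wikipedia's actual policy:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, standing in for whatever a site
# like Wikipedia actually publishes. Note that robots.txt alone
# can't express everything Miller describes (e.g. concurrency caps).
ROBOTS_TXT = """\
User-agent: *
Allow: /w/load.php
Disallow: /w/
Crawl-delay: 1
"""

rp = RobotFileParser()
# parse() takes the file's lines, so no network fetch is needed here;
# a real crawler would call set_url(...) and read() instead.
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks permission before each fetch
# and honors the declared crawl delay between requests.
print(rp.can_fetch("MyBot/1.0", "https://example.org/wiki/Tunicate"))  # True: not disallowed
print(rp.can_fetch("MyBot/1.0", "https://example.org/w/index.php"))    # False: under Disallow: /w/
print(rp.crawl_delay("MyBot/1.0"))  # 1 second between requests
```

The interesting wrinkle, per Miller, is that this is purely an honor system: the bots Wikipedia detected were deliberately "designed to appear human," which is exactly the behavior robots.txt cannot police.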
“For obvious reasons, we can’t share details publicly about how exactly we block and detect bots,” he said. “In the case of the adjustment we made to data over the past few months, we observed a substantial increase over the level of traffic we expected, centering on a particular region, and there wasn’t a clear reason for it. When our engineers and analysts investigated the data, they discovered a new pattern of bot behavior, designed to appear human. We then adjusted our detection systems and re-applied them to the past several months of data. Because our bot detection has evolved over time, we can’t make exact comparisons – but this adjustment is showing the decline in human pageviews.”
Human pageviews to all language versions of Wikipedia since September 2021, with revised pageviews since April 2025. Image: Wikimedia Foundation.
“These declines are not unexpected. Search engines are increasingly using generative AI to provide answers directly to searchers rather than linking to sites like ours,” Miller said. “And younger generations are seeking information on social video platforms rather than the open web. This gradual shift is not unique to Wikipedia. Many other publishers and content platforms are reporting similar shifts as users spend more time on search engines, AI chatbots, and social media to find information. They are also experiencing the strain that these companies are putting on their infrastructure.”
Miller said that the Foundation is “enforcing policies, developing a framework for attribution, and developing new technical capabilities” in order to ensure third parties responsibly access and reuse Wikipedia content, and continues to “strengthen” its partnerships with search engines and other large “re-users.” The Foundation, he said, is also working on bringing Wikipedia content to younger audiences via YouTube, TikTok, Roblox, and Instagram.
However, Miller also called on users to “choose online behaviors that support content integrity and content creation.”
“When you search for information online, look for citations and click through to the original source material,” he said. “Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”