
The Hidden Human Cost of AI Moderation


Training AI often means staring at humanity’s worst atrocities for hours at a time. Workers tasked with this labor endure psychological injury without support — and face legal threats if they speak about it.


Behind every AI model promising efficiency, safety, or innovation are thousands of data labelers and content moderators who train these systems by performing repetitive, often psychologically damaging tasks. (d3sign / Getty Images)

I signed the NDA like everyone else — didn’t think twice at the time. But now it feels like a trap. I’m living with nightmares from the content I saw, but I can’t even talk about it in therapy without fearing I’m violating the NDA.

Content moderator, Colombia

The artificial intelligence boom runs on more than just code and compute power — it depends on a hidden, silenced workforce. Behind every AI model promising efficiency, safety, or innovation are thousands of data labelers and content moderators who train these systems by performing repetitive, often psychologically damaging tasks. Many of these workers are based in the Global South, working eight to twelve hours a day reviewing hundreds — sometimes thousands — of images, videos, or data points, including graphic material involving rape, murder, child abuse, and suicide. They do this without adequate breaks, paid leave, or mental health support — and in some cases, for as little as $2 an hour. Bound by sweeping nondisclosure agreements (NDAs), they are prohibited from sharing their experiences.

The psychological toll is not incidental. It is the predictable outcome of an industry structured around outsourcing, speed, surveillance, and the extraction of invisible labor under extreme conditions — all to fuel the profits of a tiny corporate elite concentrated in the Global North.

As researchers involved in developing Scroll. Click. Suffer., a report by the human rights organization Equidem, we interviewed 113 data labelers and content moderators across Kenya, Ghana, Colombia, and the Philippines. We documented over sixty cases of serious mental health harm — including PTSD, depression, insomnia, anxiety, and suicidal ideation. Some workers reported panic attacks, chronic migraines, and symptoms of sexual trauma directly linked to the graphic content they were required to review — often without access to mental health support and under constant pressure to meet punishing productivity targets.

Yet most are legally barred from speaking out. In Colombia, 75 out of 105 workers we approached declined interviews. In Kenya, it was 68 out of 110. The overwhelming reason: fear of violating the sweeping NDAs they had signed.

NDAs don’t just safeguard proprietary data — they conceal the exploitative conditions that make the AI industry run. These contracts prevent workers from discussing their jobs, even with therapists, family, or union organizers, fostering a pervasive culture of fear and self-censorship. NDAs serve two essential functions in the AI labor regime: they hide abusive practices and shield tech companies from accountability, and they suppress collective resistance by isolating workers and criminalizing solidarity. This enforced silence is no accident — it is strategic and highly profitable. By atomizing a workforce that cannot speak out, tech companies externalize risk, evade scrutiny, and keep wages low.

Originally created to protect trade secrets, today NDAs have become tools of labor repression. They enable dominant tech firms to extract value from traumatized workers while rendering them invisible, disposable, and politically contained. Deployed through layered subcontracting chains, these agreements intensify psychological harm by forcing workers to carry trauma in silence.

To challenge this regime, NDAs must no longer be treated as neutral legal instruments. They are pillars of digital capitalism — technologies of control that must be dismantled if we are to build a just and democratic future of work.


The Hidden Workforce Behind Our Feeds

In today’s AI economy, Big Tech firms exercise what can be described as dual monopsony power. A monopsony is a market condition in which a single buyer (or a small group of dominant buyers) exerts outsize control over sellers. First, companies like Meta, OpenAI, and Google dominate the product market: they control the platforms, tools, and data infrastructures that shape our digital lives. Second, they act as powerful buyers in the global data labor supply chain — outsourcing the most grueling and undervalued work, such as content moderation and data annotation, to business process outsourcing (BPO) firms in countries like Kenya, Colombia, and the Philippines.

In these labor markets, where unemployment is high and labor protections are weak, corporations enjoy wide latitude to dictate terms of employment. Lead firms determine task volume and pay rates, effectively setting the margins for BPO firms. These margins in turn determine wages, working hours, and industrial discipline practices designed to hit productivity targets. In this setup, workers have little power to say no. Platforms impose strict performance metrics, algorithmic surveillance, and gag orders — yet maintain legal and reputational distance from the labor conditions they create.

The harm is real and growing. Take the case of Ladi Anzaki Olubunmi, a content moderator reviewing TikTok videos under contract with outsourcing giant Teleperformance. She died after collapsing from apparent exhaustion. Her family says she had complained repeatedly about excessive workloads and fatigue. Yet ByteDance, the parent company of TikTok, has faced no consequences — shielded by the structural buffer of intermediated employment.

This system facilitates what some scholars now describe as technofeudalism: a return to feudal-like relations, not through land ownership, but through control of the digital commons via opaque data infrastructures, proprietary algorithms, and a workforce made invisible through subcontracting and gagged by NDAs. For users, these algorithms determine what content is seen. For workers, they take the form of relentless performance dashboards — a modern-day overseer.

NDAs not only silence these workers but prevent them from raising alarms when algorithmic systems threaten the safety of the digital commons — or when the content they encounter poses real risks to the public. Kenyan data labelers, for instance, described reviewing videos containing subtle yet clear incitement to communal violence — but had no channels for reporting imminent threats.

The NDA has become a modern oath of loyalty — silence at all costs. Platforms may rebrand or rotate in and out of dominance — today it’s Meta and OpenAI, tomorrow it may be others — but the model of labor extraction remains the same: one built on distance, control, and disposability.


A Global Health Crisis by Design

What emerges from this business model — built on outsourcing, suppression, and the commodification of forced psychological endurance — is not a series of isolated workplace injuries. It is a public health crisis, structurally produced by the AI industry’s labor regime. Workers are not just exhausted or demoralized; they are being mentally broken.

In Scroll. Click. Suffer., we heard from content moderators who reported hallucinations, dissociation, numbness, and intrusive flashbacks. “Sometimes I blank out completely; I feel like I’m not in my body,” said a worker in Ghana. Others described losing their appetite, developing chronic migraines, or suffering persistent gastrointestinal issues — classic symptoms of long-term trauma. A Kenyan moderator said she could no longer go on dates, haunted by the sexual violence she was forced to view daily. Another described turning to alcohol just to be able to sleep.

This harm doesn’t remain confined to individuals — it ripples outward into families, relationships, and entire communities. In countries where mental health care infrastructure is severely underresourced, the burden is pushed onto overworked public systems and households. In most of these workplaces, even basic mental health support is absent. Some offer short “wellness breaks,” only to penalize workers later for falling short of productivity targets. As Ephantus Kanyugi, vice president of the Data Labelers Association of Kenya, put it:

Workers come to us visibly shaken — not just by the trauma of the content they’re forced to see, but by the fear etched into them by the NDAs they’ve signed. They’re terrified that even asking for help could cost them their jobs.

This is not incidental distress but an institutionalized form of extraction — emotional strain borne by workers. The AI industry extracts surplus value not only from labor time but also from psychic endurance — until that capacity collapses. Unlike traditional factory work, where injuries can be seen, named, and sometimes collectively resisted, the damage here is internal, isolating, and much harder to contest.

NDAs intensify the crisis. They don’t merely shield companies from legal liability; they sever the very conditions necessary for healing and resistance. By gagging workers, NDAs prevent the formation of collective identity. That political silencing compounds the health crisis: workers are unable to name what is happening to them, let alone organize around it. The result is a class of traumatized, disposable workers who suffer in silence while the system that harms them remains protected — and profitable.


Where Do We Go From Here?

The scale and severity of this crisis demand more than piecemeal reform or individualized coping strategies. It calls for a coordinated, global response grounded in worker power, legal accountability, and cross-movement solidarity. As labor organizers and activists, we must begin by naming what we are up against: not just bad actors or isolated violations but a deliberately engineered system — one that profits from rendering labor invisible, extracts value from trauma, and silences dissent through coercive contracts like NDAs.

The first step is dismantling the mechanisms of silence. NDAs that prevent workers from speaking about their conditions — whether to therapists, families, journalists, or unions — must be banned from labor contracts. Governments and international bodies should recognize these clauses not as standard business practice but as violations of fundamental rights: freedom of expression, access to care, and freedom of association. Where platforms claim these agreements are needed to protect trade secrets, we must ask: At what cost, and to whose benefit?

Second, we must build worker power across borders. Content moderators and data workers are often isolated by design — scattered across subcontractors and countries, bound by legal and technological barriers. But new formations are emerging. In Kenya, the Philippines, and Colombia, workers are sharing testimonies despite threats of retaliation and job loss. These local efforts must be connected through transnational labor alliances that can jointly name employers, demand protections, and fight for shared standards. Tech firms may hide behind outsourcing, but the harm is consistent — and so must be our response.

Third, we need enforceable global standards that treat psychological health as central to decent work. Wellness breaks and hotline numbers are not enough. Platform companies must be held directly accountable for labor conditions across their outsourced chains. That includes legally binding rules for working hours, mandatory trauma support, rest periods, and protections from retaliation. Governments, trade unions, and international labor bodies must insist that companies like Meta, TikTok, and OpenAI cannot be considered global AI leaders while denying fundamental rights to the workers who train their models.

Finally, we must reject the notion that AI regulation is simply about ethics or innovation. This is a labor rights issue — and it must be treated as such. Ethics without enforcement is hollow, and innovation that comes at the cost of human dignity is exploitation by another name. Organizers, researchers, and allies must push for a new narrative: one that measures the intelligence of any system not only by its performance but also by how it treats the people who make it possible.


The Future of AI Is a Mirror of Our Values

The International Labour Conference, the peak decision-making body of the International Labour Organization (ILO), has just completed the first round of standard-setting discussions on decent work on digital labor platforms. With a mandate to develop a binding convention and supporting recommendation, the ILO must ensure that regulatory frameworks apply not only to directly contracted platform workers but also to those hired through intermediaries. These standards must protect the fundamental rights to form or join trade unions and to bargain collectively — including through explicit prohibitions on NDAs that systematically silence workers and undermine collective action.

What does it say about the world we’re building when the most celebrated technology of our time runs on the silent suffering of some of its most precarious workers? While billions are poured into AI and headlines hail its breakthroughs, the very people who make it possible — by absorbing unimaginable violence to train machines — are left voiceless, broken, and discarded. This isn’t progress. It’s calculated blindness.

If we build AI on a foundation of trauma and repression, we are not creating tools for human advancement — we are constructing systems that forget how to care, how to listen, how to be just. And if we don’t fight to change this now, the cost won’t only be borne by the content moderators in Nairobi or the data labelers in Manila. It will be borne by all of us — in the silence we normalize, the harm we conceal, and the future we allow to be built on their pain.



What if computer history were a romantic comedy?


The computer first appeared on the Broadway stage in 1955 in a romantic comedy—William Marchant’s The Desk Set. The play centers on four women who conduct research on behalf of the fictional International Broadcasting Company. Early in the first act, a young engineer named Richard Sumner arrives in the offices of the research department without explaining who he is or why he is studying the behavior of the workers. Bunny Watson, the head of the department, discovers that the engineer plans to install an “electronic brain” called Emmarac, which Sumner affectionately refers to as “Emmy” and describes as “the machine that takes the pause quotient out of the work–man-hour relationship.”

What Sumner calls the “pause quotient” is jargon for the everyday activities and mundane interactions that make human beings less efficient than machines. Emmarac would eliminate inefficiencies, such as walking to a bookshelf or talking with a coworker about weekend plans. Bunny Watson comes to believe that the computing machine will eliminate not only inefficiencies in the workplace but also the need for human workers in her department. Sumner, the engineer, presents the computer as a technology of efficiency, but Watson, the department head, views it as a technology of displacement.

Bunny Watson’s view was not uncommon during the first decade of computing technology. Thomas Watson Sr., president of IBM, insisted that one of his firm’s first machines be called a “calculator” instead of a “computer” because “he was concerned that the latter term, which had always referred to a human being, would raise the specter of technological unemployment,” according to historians Martin Campbell-Kelly and William Aspray. In keeping with the worry of both Watsons, the computer takes the stage on Broadway as a threat to white-collar work. The women in Marchant’s play fight against the threat of unemployment as soon as they learn why Sumner has arrived. The play thus attests to the fact that the very benefits of speed, accuracy, and information processing that made the computer useful for business also caused it to be perceived as a threat to the professional-managerial class.

Comedy provides a template for managing the incongruity of an “electronic brain” arriving in a space oriented around human expertise and professional judgment.

This threat was somewhat offset by the fact that for most of the 1950s, the computing industry was not profitable in the United States. Manufacturers produced and sold or leased the machines at steep losses, primarily to preserve a speculative market position and to bolster their image as technologically innovative. For many such firms, neglecting to compete in the emerging market for computers would have risked the perception that they were falling behind. They hoped computing would eventually become profitable as the technology improved, but even by the middle of the decade, it was not obvious to industry insiders when this would be the case. Even if the computer seemed to promise a new world of “lightning speed” efficiency and information management, committing resources to this promise was almost prohibitively costly.

While firms weighed the financial costs of computing, the growing interest in this new technology was initially perceived by white-collar workers as a threat to the nature of managerial expertise. Large corporations dominated American enterprise after the Second World War, and what historian Alfred Chandler called the “visible hand” of managerial professionals exerted considerable influence over the economy. Many observers wondered if computing machines would lead to a “revolution” in professional-managerial tasks. Some even speculated that “electronic brains” would soon coordinate the economy, thus replacing the bureaucratic oversight of most forms of labor. 

Howard Gammon, an official with the US Bureau of the Budget, explained in a 1954 essay that “electronic information processing machines” could “make substantial savings and render better service” if managers were to accept the technology. Gammon advocated for the automation of office work in areas like “stock control, handling orders, processing mailing lists, or a hundred and one other activities requiring the accumulating and sorting of information.” He even anticipated the development of tools for “erect[ing] a consistent system of decisions in areas where ‘judgment’ can be reduced to sets of clear-cut rules such as (1) ‘purchase at the lowest price,’ or (2) ‘never let the supply of bolts fall below the estimated one-week requirement for any size or type.’”

Gammon’s essay illustrates how many administrative thinkers hoped that computers would allow upper-level managers to oversee industrial production through a series of unambiguous rules that would no longer require midlevel workers for their enactment. 

This fantasy was impossible in the 1950s for so many reasons, the most obvious being that only a limited number of executable processes in postwar managerial capitalism could be automated through extant technology, and even fewer areas of “judgment,” as Gammon called them, can be reduced to sets of clear-cut rules. Still, this fantasy was part of the cultural milieu when Marchant’s play premiered on Broadway, one year after Gammon’s report and just a few months after IBM had announced the advance in memory storage technology behind its new 705 Model II, the first successful commercial data-processing machine. IBM received 100 orders for the 705, a commercial viability that seemed to signal the beginning of a new age in American corporate life.

It soon became clear, however, that this new age was not the one that Gammon imagined. Rather than causing widespread unemployment or the total automation of the visible hand, the computer would transform the character of work itself. Marchant’s play certainly invokes the possibility of unemployment, but its posture toward the computer shifts toward a more accommodative view of what later scholars would call the “computerization of work.” For example, early in the play, Richard Sumner conjures the specter of the machine as a threat when he asks Bunny Watson if the new electronic brains “give you the feeling that maybe—just maybe—that people are a little bit outmoded.” Similarly, at the beginning of the second act, a researcher named Peg remarks, “I understand thousands of people are being thrown out of work because of these electronic brains.” The play seems to affirm Sumner’s sentiment and Peg’s implicit worry about her own unemployment once the computer, Emmarac, has been installed in the third act. After the installation, Sumner and Watson give the machine a research problem that previously took Peg several days to complete. Watson expects the task to stump Emmarac, but the machine takes only a few seconds to produce the same answer.

While such moments conjure the specter of “technological unemployment,” the play juxtaposes Emmarac’s feats with Watson’s wit and spontaneity. For instance, after Sumner suggests people may be “outmoded,” Watson responds, “Yes, I wouldn’t be a bit surprised if they stopped making them.” Sumner gets the joke but doesn’t find it funny: “Miss Watson, Emmarac is not a subject for levity.” The staging of the play contradicts Sumner’s assertion. Emmarac occasions all manner of levity in The Desk Set, ranging from Watson’s joke to Emmarac’s absurd firing of every member of the International Broadcasting Company, including its president, later in the play. 

This shifting portrayal of Emmarac follows a much older pattern in dramatic comedy. As literary critic Northrop Frye explains, many forms of comedy follow an “argument” in which a “new world” appears on the stage and transforms the society entrenched at the beginning of the play. The movement away from established society hinges on a “principle of conversion” that “include[s] as many people as possible in its final society: the blocking characters are more often reconciled or converted than simply repudiated.”

We see a similar dynamic in how Marchant’s play portrays the efficiency expert as brusque, rational, and incapable of empathy or romantic interests. After his arrival in the office, a researcher named Sadel says, “You notice he never takes his coat off? Do you think maybe he’s a robot?” Another researcher, Ruthie Saylor, later kisses Sumner on the cheek and invites him to a party. He says, “Sorry, I’ve got work to do,” to which Ruthie responds, “Sadel’s right—you are a robot!” 

Even as Sumner’s robotic behavior portrays him as antisocial, Emmarac further isolates him from the office by posing a threat to the workers. The play accentuates this blocking function by assigning Emmarac a personality and gender: Sumner calls the machine “Emmy,” and its operator, a woman named Miss Warriner, describes the machine as a “good girl.” By taking its place in the office, Emmarac effectively moves into the same space of labor and economic power as Bunny Watson, who had previously overseen the researchers and their activities. After being installed in the office, the large mainframe computer begins to coordinate this knowledge work. The gendering of the computer thus presents Emmarac as a newer model of the so-called New Woman, as if the computer imperils the feminist ideal that Bunny Watson clearly embodies. By directly challenging Watson’s socioeconomic independence and professional identity, the computer’s arrival in the workplace threatens to make the New Woman obsolete. 

Yet, as Frye’s account of the “argument” of comedy would predict, the conflict between Emmarac and Watson resolves as the machine transforms from a direct competitor into a collaborator. We see this shift during a final competition between Emmarac and the research department. The women have been notified that their positions have been terminated, and they begin packing up their belongings. Two requests for information suddenly arrive, but Watson and her fellow researchers refuse to process them because of their dismissal, so Warriner and Sumner attempt to field the requests. The research tasks are complicated, and Warriner mistakenly directs Emmarac to print a long, irrelevant answer. The machine inflexibly keeps printing even as the other inquiry goes unanswered. Sumner and Warriner try to stop the machine, but this countermanding order causes the machine’s “magnetic circuit” to emit smoke and a loud noise. Sumner yells at Warriner, who runs offstage, and the efficiency expert is now the only one left to field the requests and salvage the machine. However, he doesn’t know how to stop Emmarac from malfunctioning. Marchant’s stage directions here say that Watson, who has studied the machine’s maintenance and operation, “takes a hairpin from her hair and manipulates a knob on Emmarac—the NOISE obligingly stops.” Watson then explains, “You forget, I know something about one of these. All that research, remember?”

The madcap quality of this scene continues after Sumner discovers that Emmarac’s “little sister” in the payroll office has sent pink slips to every employee at the broadcasting firm. Sumner then receives a letter containing his own pink slip, which prompts Watson to quote Horatio’s lament as Hamlet dies: “Good night, sweet prince.” The turn of events poses as tragedy, but of course it leads to the play’s comic resolution. Once Sumner discovers that the payroll computer has erred—or, at least, that someone improperly programmed it—he explains that the women in the research department haven’t been fired. Emmarac, he says, “was not meant to replace you. It was never intended to take over. It was installed to free your time for research—to do the daily mechanical routine.”

Even as Watson “fixes” the machine, the play fixes the robotic man through his professional failures. After this moment of discovery, Sumner apologizes to Watson and reconciles with the other women in the research department. He then promises to take them out to lunch and buy them “three martinis each.” Sumner exits with the women “laughing and talking,” thus reversing the antisocial role that he has occupied for most of the play.

Emmarac’s failure, too, becomes an opportunity for its conversion. It may be that a programming error led to the company-wide pink slips, but the computer’s near-breakdown results from its rigidity. In both cases, the computer fails to navigate the world of knowledge work, thus becoming less threatening and more absurd through its flashing lights, urgent noises, and smoking console. This shift in the machine’s stage presence—the fact that it becomes comic—does not lead to its banishment or dismantling. Rather, after Watson “fixes” Emmarac, she uses it to compute a final inquiry submitted to her office: “What is the total weight of the Earth?” Given a problem that a human researcher “can spend months finding out,” she chooses to collaborate. Watson types out the question and Emmarac emits “its boop-boop-a-doop noise” in response, prompting her to answer, “Boop-boop-a-doop to you.” Emmarac is no longer Watson’s automated replacement but her partner in knowledge work.

In Marchant’s play, comedy provides a template for managing the incongruity of an “electronic brain” arriving in a space oriented around human expertise and professional judgment. This template converts the automation of professional-managerial tasks from a threat into an opportunity, implying that a partnership with knowledge workers can convert the electronic brain into a machine compatible with their happiness. The computerization of work thus becomes its own kind of comic plot.


Benjamin Mangrum is an associate professor of literature at MIT. This excerpt is from his new book, The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence, published by Stanford University Press, ©2025 by Benjamin Mangrum. All rights reserved. 


Audiences Prove that the Experts Are Dead Wrong


Much of my work here is focused on anticipating the future of our shared culture—which is under threat in complex, interconnected ways.

In particular, I’ve tried to show that many of the dominant digital trends are causing great harm. But they are unsustainable.

So they will reverse.

Things will get better. And that will happen even though the forces aligned against creative vocations and human flourishing appear to be huge—so much so that many have given up hope.

Today I want to give an example of a reversal that is happening right now—but few have noticed. I’ll explain the shift, and then I will describe in some detail why this is happening.

This is very useful information for anyone working in the creative economy—or anybody who wants to live in a culture that supports artistic expression and the life of the imagination.

By the way, this article was initially planned as a paywall-protected analysis for premium subscribers. But I want to give wider visibility to these huge, hidden changes (which are taking place in shocking contradiction to the conventional wisdom). So I’ve decided to make this installment of The Honest Broker freely available to everybody.

If you like it, feel free to share it with others. Or consider taking out a premium subscription.


Please support my work by taking out a premium subscription—for just $6 per month (even less if you sign up for a year).



HOW CULTURES HEAL

By Ted Gioia

When I saw the numbers, I couldn’t believe them.

Every digital platform is flooding the market with short videos, but the audience is now spending more time with longform video—and by a huge margin.

Source: Tubular Labs

Some video creators have already figured this out. That’s why the number of videos longer than 20 minutes uploaded on YouTube grew from 1.3 million to 8.5 million in just two years.

That’s a staggering increase of more than sixfold. But even short videos are now getting longer. Social media consultants call this the “long short” format. These longer shorts are sometimes used as teasers to draw viewers to still longer media (often on another platform).

Movies are also getting longer. At first glance, that makes no sense—more people are watching films at home on small digital devices, where Hollywood fare has to compete with bite-sized junk from TikTok and Instagram.

“The rebirth of longform runs counter to everything media experts are peddling. They are all trying to game the algorithm. But they’re making a huge mistake….”

You might think that filmmakers would feel forced to compress their storytelling, but the opposite is true. They are learning that audiences crave something longer and more immersive than a TikTok.

At first, Hollywood insiders tried to imitate the ultra-short aesthetic, but they failed—sometimes in colossal fashion. (Does anyone remember the Quibi fiasco?)

Now they not only embrace long films, but happily release sprawling mega-movies longer than the Boston Marathon. Dune: Part Two ran for 166 minutes—not even Eliud Kipchoge does that. Oppenheimer clocked in at 180 minutes. Scorsese’s Killers of the Flower Moon lasted a mind-boggling 206 minutes.

The studios would have vetoed these excesses just a few years ago. Not anymore.

Songs are also getting longer. The top ten hits on Billboard increased in average duration by twenty seconds last year. Five top ten hits ran for more than five minutes.

Two of those long hit songs came from Taylor Swift—who has been a champion of longer immersive musical experiences, most notably in her insanely successful Eras tour. She set the record for the biggest money-generating roadshow in music history, and did it with a performance twice as long as a Mahler symphony.

These Swift concerts run for three-and-a-half hours (just like Scorsese at his most maniacal), and include more than 40 songs. They’re grouped in ten separate acts, each built around a different era in her career.

Ten acts? Really?

Even Wagner stopped short of that. But the Eras tour generated more than $2 billion in revenues. And all this happened while experts were touting 15-second songs on TikTok as the future of music.

I’ve charted the duration of Swift’s studio albums over the last two decades, and it tells the same story. She has gradually learned that her audience prefers longer musical experiences.

The New York Times complained about the length of her most recent album—calling it “sprawling and often self-indulgent.” It mocked her for believing that “more is more.”

It summed up her whole worldview with a dismissive claim that she has fallen in love with “abundance.” In fact, the Times opened its article with that accusation.

But I note that a year after the Times laughed at Swiftian abundance, the hottest topic in the culture is a book with that same word as its title. (Full disclosure: I’ll be doing a live Substack conversation with its co-author Derek Thompson in a few days.)

Abundance has dominated the New York Times non-fiction bestseller list for the last several months. Even more to the point, the word seems to tap into the public’s hunger for something bigger, deeper, and more expansive than it’s been getting.

Perhaps Taylor Swift understands the zeitgeist better than the New York Times.

[Image: Abundance by Ezra Klein and Derek Thompson]
Voters are hungry for ‘Abundance’ according to this bestselling book—but the same is true for audiences dealing with today’s digital culture and entertainment.

In the culture arena, abundance is not just a highbrow concern. As the above examples make clear, the biggest winners in the new longform game are targeting mass market and lowbrow tastes.

Just consider those ultra-long podcasts by Joe Rogan. Or look at the popular fantasy romance novels by Rebecca Yarros—they clock in at 500 pages or more, but readers devour them like Joey Chestnut at a Fourth of July cookout.

Yarros’s new book Onyx Storm is the fastest-selling adult novel in two decades. It’s 544 pages, and weighs 29 ounces. That’s heavier than the javelin thrown by Olympic athletes.

Onyx Storm is part of a wider trend towards longer fiction bestsellers. Back in 2022, experts were complaining that the novels on the NY Times bestseller list were shorter than ever—averaging 386 pages.

But that’s not true anymore.

I calculated the average length of the current fiction bestsellers, and they are longer than in any of the previous measurement periods.

[Chart: page count of current fiction bestsellers]
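The tally behind a chart like that is simple enough to sketch. Here is a minimal Python example of the calculation—Onyx Storm’s 544 pages comes from the article, but the other titles and page counts are made-up placeholders, not the actual bestseller data:

```python
# Average the page counts of a (partly hypothetical) bestseller list
# and compare against the 2022 average of 386 pages cited above.
bestsellers = {
    "Onyx Storm": 544,   # figure cited in the article
    "Novel B": 432,      # placeholder
    "Novel C": 400,      # placeholder
    "Novel D": 368,      # placeholder
    "Novel E": 512,      # placeholder
}

average = sum(bestsellers.values()) / len(bestsellers)
print(f"Average length: {average:.0f} pages")          # → 451 pages
print(f"Longer than the 2022 average? {average > 386}")  # → True
```

Even with one shorter placeholder title in the mix, the average comfortably clears the 2022 figure—which is the shape of the claim being made.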

Or look at all the longform writing on Substack. When I launched The Honest Broker, I assumed that readers would prefer shorter articles—but I soon learned that the opposite was true.

At first I was puzzled—but pleased. By now I just take it for granted.

Thompson’s (former) employer The Atlantic is another example of this revival of longform writing. Even as other legacy print media outlets shrink and disappear, The Atlantic can boast of an amazing turnaround.

Just three years ago, The Atlantic struggled with a $20 million deficit. Traffic was down. The magazine was laying off staff. Prospects looked bleaker than Jarndyce and Jarndyce.

But the magazine is now profitable and attracting subscribers at a healthy rate—even after raising subscription prices by 50%. And The Atlantic is doing this in the most surprising way imaginable, namely by hiring top talent at high salaries and letting the writers tackle in-depth articles.

I recently got featured in an article in The Atlantic. And that article ran almost eight thousand words—longer than many short stories.

So like Substack, The Atlantic is achieving unprecedented success by totally ignoring the digital world’s obsession with short meme-oriented material.


Why is this happening?

The rebirth of longform runs counter to everything media experts are peddling. They are all trying to game the algorithm. But they’re making a huge mistake.

They believe that longform is doomed. They see that digital platforms reward ultra-short videos on an endless scroll. And they understand that this works because the interface is extremely addictive.

So short must defeat long in the digital marketplace. That’s obvious to them.

But all the evidence now proves that this isn’t happening.

Many media companies went broke trusting that advice. It was dead wrong—but many experts still haven’t figured out why.

Let me lay it out for you. Here are the five reasons why longform is now winning:

  1. The dopamine boosts from endlessly scrolling short videos eventually produce anhedonia—the complete absence of enjoyment in an experience supposedly pursued for pleasure. (I write about that here.) So even addicts grow dissatisfied with their addiction.

  2. More and more people are now rebelling against these manipulative digital interfaces. A sizable portion of the population simply refuses to become addicts. This has always been true with booze and drugs, and it’s now true with digital entertainment.

  3. Short form clickbait gets digested easily, and spreads quickly. But this doesn’t generate longterm loyalty. Short form is like a meme—spreading easily and then disappearing. Whereas long immersive experiences reach deeper into the hearts and souls of the audience. This creates a much stronger bond than any 15-second video or melody will ever match.

  4. All cultural forms create a backlash if they are pushed too far—and that is happening now with shortform media. People have digested too much of it, and are ready to exit for the vomitorium.

  5. People now view anything coming out of Silicon Valley and the technocracy with intense skepticism and resistance. The pushback gets more intense with each passing month. This resistance has already killed the virtual reality market (despite billions spent by Meta and Apple), and will soon impact many other tech services—especially those based on turning the public into scrolling-and-swiping chimpanzees.

Longform isn’t like a drug. It’s more like a ritual. Instead of promoting addiction, it possesses a hypnotic power that creates an almost cult-like devotion among its audience.

Just consider the obsessive fandoms of Wagner’s Ring, Proust’s prose, Joyce’s daylong Dublin stroll, the Harry Potter novels, Christopher Nolan’s movies, DFW’s Infinite Jest, Beethoven’s Ninth, Taylor Swift’s concerts, etc.

No TikTok will ever generate that kind of passionate long-lasting response. They come and go. But longform fandoms will last a hundred years or more, and get passed on from generation to generation.

That’s why longform is making a comeback—in total defiance of the wishes of Silicon Valley and their scroll-driven strategies. Maybe that should be a lesson to them. Perhaps they should reconsider some of their other social engineering initiatives before they also meet with a painful reversal.



The Cosmic Treasure Chest | Rubin Observatory


Show HN: Lego Island Playable in the Browser


Food Is Not Magic

You may not be able to eat your way to immortality or manliness, but food is something we can make and enjoy together.
