
AI as the new avatar of American capitalism


When a commencement speaker at the University of Central Florida intoned that “AI is the next industrial revolution,” she was met with a chorus of thundering boos from the graduating students in attendance. The mass disapproval left the orator, Gloria Caulfield, Vice President of Strategic Alliances for Tavistock Development Company, flustered and unmoored. She fumbled through the rest of her speech amid more jeers and exclamations of “AI sucks,” which didn’t relent until she noted that AI wasn’t a factor in our lives just a few years ago, at which the grads cheered.

The clip went viral, of course, in the now-well-stocked genre of ‘stark and unambiguous reminders that lots of people hate AI’, a genre that stretches back to early 2024, when a SXSW audience booed a pro-AI video, through Guillermo del Toro’s 2025 “fuck AI” exhortations on the press tour for Frankenstein, and onto the darker, more violent instances of AI rejection that made headlines this year.

The commencement speaker clip is a particularly striking artifact, though. It resonates deeply, I think, because it reflects the generational and economic breakdown of who AI is for, and who it harms. Here is an executive with one of the most impressively generic corporate managerial titles I have encountered, blithely repeating a line about AI being “the next industrial revolution” that she has likely uttered many times in other circumstances, to hundreds of young people who have now been hearing for years about how AI is both erasing their career prospects and is the future. (By one count, the entry level job market is the worst that it’s been in nearly four decades.)

I’ve heard this industrial revolution line or a variation of it so many times from people with titles like ‘VP of Strategic Alliances’ over the last three years—in meetings, at seminars, at conferences, in personal conversation—that beyond losing count of the utterances, it’s become like white noise, the rule rather than the exception, as predictable as the warm, slightly curdled exhalation of breath of your office colleague in a late-afternoon meeting. It’s something that is said dutifully by mid-to-late career executives and managers to signal (to investors and partners, to the public, to colleagues angling for their jobs) that they understand and tacitly endorse the changes to be effectuated by AI. Almost every person who says this, it seems, has something to sell, and usually that something is bound up in the idea that AI is inevitable, and that we must all get used to the cognitive offloading, increased surveillance, and amped-up productivity demands that accepting as much entails.

Meanwhile, the graduating students, who likely either have already begun trying to find post-collegiate work, know people who have, or have at the very least seen the headlines about AI and entry level jobs and felt the bad vibes, have an eminently superior grasp of what ‘AI is the next industrial revolution’ means in practice. It means that right now, employers have decided they can hire fewer people, and for lower wages, and that they are graduating into a notably bleak economy for people like them with fresh degrees. Thus we have our chorus of revulsion and our apt demonstration of who AI is made to serve, and who it will not.

AI is after all being blamed for deskilling or even decimating a lot of industries that college graduates may well have wanted to work in: the arts, entertainment, tech, gaming. I too would loudly boo at the prospect of this next industrial revolution if I were in my early twenties, unemployed, and had aspirations for my future greater than entering prompts into an LLM. It’s yet another reminder that enthusiasm for AI tends to break along age and class position, as recent polling has demonstrated, and brings to mind that NBC survey that showed Gen Z respondents (ages 18-34) giving AI a favorability rating of negative 44, while one of the only groups that found it favorable were those earning over $200,000 a year. All of this seems pretty straightforward to me, and I noted as much in my recent post about what was motivating the increasingly violent backlash to AI.


In the weeks since, there have been a number of other efforts to diagnose the AI backlash. In particular, a term ‘AI populism’, coined by the writer Jasmine Sun, has been popping up, especially at the New York Times, where it’s been quoted by Ezra Klein and showed up in the headline of a piece by David Wallace-Wells1.

Here’s Sun:

I define AI populism as a worldview in which AI is viewed not only as a normal technology but as an elite political project to be resisted. It regards AI as a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims like “capitalist efficiency” (layoffs) and “population management” (surveillance). AI populists don’t really care whether ChatGPT is personally useful, or if Waymos eke out some safety gains: AI’s utility as a tool is immaterial relative to the unwelcome societal change it represents.

Sun, who describes herself as an anthropologist of disruption, uses ‘AI populism’ as a means of theorizing why the AI industry is attracting ire from both x-risk doomers and anti-data center organizers. It’s a provocative coinage. But like David Karpf, who points out that such groups have very different reasons for and methods of opposing AI, and that it’s not particularly useful to lump them together, I don’t ultimately think this is a great way to think about the broader animosity percolating around AI. (For one thing, the language presents the idea as “sinister” and faintly conspiratorial, and seems to patronize those who might believe it.)

Directionally, as a tech guy might put it, it’s not wrong. There is undoubtedly anger at out-of-touch billionaires helping companies execute mass layoffs, and many people don’t think ChatGPT is useful enough to warrant the social (or economic and environmental) burdens it imposes. The problem is that Sun’s coinage positions AI as a project that can be considered novel, or even apart from, the political economy from which it emerged. But I don’t think most people are formulating a new worldview in which AI is a boogeyman political project hatched by billionaires. I think they’re more likely to understand AI as an extension of an already inequitable system, and as an accelerant of that inequality. At a time when consumer sentiment is stuck at all-time lows, housing costs are sky-high, the price of basic goods is spiking, entry level jobs are disappearing, tech firms have concentrated enormous power, and “broligarchy” was shortlisted for Dictionary.com’s 2025 word of the year, AI has become the avatar of the ills of unrestrained capitalism. “AI populism” is really just “21st century populism,” or just “populism.”

AI has after all been adopted and promoted as an instrument of efficiency, control, and leverage by just about every layer of management at every institution, from any given Fortune 500 company to a department in the federal government to your boss who makes you use Copilot, any of which might reasonably draw one’s populist anger. This is less the result of a specific political project than an illustration of how capitalism tends to function when there is a new instrument to discipline workers on offer. As writers and thinkers like Ted Chiang and Hagen Blix have pointed out, fear and anger at AI are often best understood as fear and anger at how AI will function within capitalism. Few are worried about the prospect of public research scientists using LLMs to discover new peptides; plenty are worried about how AI might be used as leverage against them in their workplaces, or to replace their labor, or to narrow their job opportunities. They’re worried that AI will exacerbate existing conditions in a precarious system.

Firms have used automation technologies to impose layoffs and surveillance regimes on their workforces to achieve improved efficiencies for as long as such technologies have existed; there’s nothing sinister, or at least unusually sinister, about this. But Silicon Valley has certainly raised the stakes, in pursuit of ever-greater profits and investment capital: AI has been developed, pitched, and sold by tech firms as the most powerful automation technology of all time. As OpenAI’s charter puts it, the company is building “highly autonomous systems that outperform humans at most economically valuable work.” This declared aspiration to sell one-size-fits-all, mass deskilling-as-a-service in a destabilized, post-pandemic, post-J6 world feels in hindsight like a dependable formula for generating widespread anger.

At the dawn of the AI moment in 2022/23, AI firms promised that the technology would help solve climate change and cure cancer. While there has been little notable progress on those fronts—and in fact the energy demand of data centers has thus far moved the needle in the opposite direction, towards increasing carbon emissions—AI very much has initiated a transfer of wealth from the middle and working classes to the rich, and a mass degradation of jobs once thought widely desirable. Just this week, a TV writer published a piece in WIRED about how she had turned to selling her editorial skills to AI training companies in a wildly unregulated labor market to get by:

I never intended to write about this industry. I came to it not as a journalist but as a disgruntled, broke TV writer determined to make a dent in student loans and keep paying LA rent while my industry withered in front of me. But working with and for AI had proven even more cruel than I could have ever imagined. Mercor says it employs about 300 full-time staffers. Meanwhile, each week it keeps some 30,000 independent contractors caught up in a fever dream of aimless, directionless urgency, corralled across Slack channels by achingly young adults, sending messages at 3 am to “push on” and “finish strong” and “lock in” and “Go Team GO!” All in service of the grandest purpose in history: to successfully remove a scuba diver from a picture with one click of a mouse, transport him to the moon without any glaring artifacts—and bring him back again.

In fact, every week seems to bring new stories of mass job loss that companies are attributing to AI: Meta, Disney, and Cloudflare are just some of the latest. I’ve chronicled scores of stories about lost work, careers, and income security in AI Killed My Job. And a brand new study shows artists reporting declining income and opportunities.

And who’s winning, in the current environment? Certainly, the executives of tech firms. Over the last couple of weeks we’ve learned, through the Musk vs. OpenAI trial, that co-founder Greg Brockman’s stake in the onetime nonprofit, started to benefit all of humanity, is worth $30 billion, and that Ilya Sutskever, no longer with the company, has a cut worth $7 billion. Other winners of the AI era? Guys like Matt Gallagher, who created the first “one-person billion dollar company.” His company, Medvi, is an entirely automated version of Hims: a digital middleman business selling health supplements online that apparently relies on fake, presumably AI-generated doctor accounts to hawk GLP-1s, alongside falsified, AI-generated patient testimonials. It’s the target of an ongoing class action lawsuit, has been formally warned by the FDA for misbranding violations, and more.

Who else? Well, people who can weaponize AI to game prediction markets, leading to a situation where, at sites like Polymarket, 0.1% of the accounts capture two-thirds of all the gains, as the Wall Street Journal reported this week. (Title: “Why Everyone Loses—Except a Few Sharks—at Prediction Markets.”) This is, as analyst Paul Kedrosky notes, “partly a function of their nature, but also of vibe-coding script kiddies attacking every market anomaly as quickly as it arises.”

He continues:

The same dynamic is now spreading across retail-dominated markets. A driver is how AI lowers the cost of systematic exploitation and exploration to near zero. What used to require infrastructure, data pipelines, and bearded quants is now accessible via off-the-shelf models, APIs, and loosely stitched “agent” workflows doing ... stuff that even their users don’t fully understand.

The result isn’t democratization of returns. It is wider participation, of a sort, alongside the rapid re-concentration of profits. A small subset of users—those willing to iterate fastest, monitor continuously, and deploy capital programmatically—capture gains, with everyone else just liquidity…

…Prediction markets are simply the cleanest expression of this trend because they combine thin liquidity, discrete outcomes, and high retail participation. But the same pattern is visible in options flow, single-stock volatility events, and even online poker, which AI increasingly dominates.

As AI tools continue to scale, expect this to get worse: a small cohort running semi-automated strategies extracting semi-consistent edge, and a much larger base supplying them returns. Under the pressure of AI prevalence, markets don’t flatten; the return gradient steepens to a cliff.

So we have AI looming over our withering creative industries, a generation of young people who are angry and disillusioned by the lack of opportunities, and precarity and anxiety nearly everywhere. In exchange, we get a new batch of tech oligarchs, new shady billion-dollar businesses that employ no one at all and use AI to evade consumer protection laws—that pretty unequivocally leave the world worse off in the wake of the founder’s mad dash to personal enrichment—and new tools for the unscrupulous to accumulate wealth at the expense of those still following the rules, whether in stock trading, prediction markets, or even online poker. That and Claude Code.

That’s why the students are booing, I think.2 They’re experiencing AI in real time as a forecloser of futures, as the cruel new face of hyper-scaling capitalism, and as the prime agent moving a world that’s become a deck stacked against them.



BLOODY MEDIA HITS

I sat down to chat with Taylor Lorenz about the AI backlash, and for a little friendly debate:

A bit back, I recorded a podcast with Thomas Dekeyser for the University of Minnesota Press for the release of his book; it’s out now. Give it a listen here:

CHART OF THE WEEK

From a new survey on attitudes towards data centers.

BLOODY GOOD READS

Saul Levin (and a co-author) on organizing and the anti-data center movement, in the Guardian:

As usual, ordinary people are ahead of their leaders. The remarkable organic growth of the datacenter resistance movement across geographies, economic interests and ideology reflects the myriad harms that come with AI infrastructure and growing anger at the tech elite. The tremendous energy unleashed by these fights, and their sensible and unifying demands, have the potential to form the foundations of a new and powerful populist coalition, one poised to help define a working-class agenda that meets this moment and resonates with disaffected voters. This excellent organizing should be cultivated rather than dismissed.

Alondra Nelson has a new paper in Science, The Civic Grammar for AI Rights, that’s worth a look:

Some commentary has argued that AI companies reaching for the vocabulary of constitutional democracy are attempting to fill a vacuum that democratic institutions can no longer hold. That reading is not wrong about the vacuum. But it is incomplete about which actors have moved to fill it. Not only technology companies, but legislators, civil society organizations, and the constituents they represent have produced a civic grammar: a shared set of rights claims that publics can extend to new institutions and new harms, and that has been traveling across jurisdictions, partisan lines, and institutional contexts.

The great novelist and playwright Ishmael Reed is working on a new play, called, wait for it, “King Ludd’s Revenge”.

From the NYT:

Mr. Reed, a novelist, playwright and provocateur who has been upsetting opinions across the political spectrum for at least six decades, is aiming high with a new drama. “King Ludd’s Revenge” is a rare attempt to take on the tech moguls with something more than mere journalism.

“Instead of a straight narrative, I improvise,” the 88-year-old writer said. “It’s like Louis Armstrong singing ‘Stardust.’ He doesn’t do it the way it’s written.”

Oakland is poorer, Blacker and more maligned than San Francisco and Silicon Valley, both of which are just across the bridges that span the Bay. Having the trial here happened at random — Mr. Musk’s lawsuit against Mr. Altman and the company they founded together, OpenAI, was filed in San Francisco and assigned to the federal court in Oakland — but feels a little like one of those episodes where the Greek gods descend to mundane Earth to settle a dispute.

Mr. Reed, an Oakland resident who has celebrated and defended the city for decades, may be the only one in town noticing who’s here. “Everybody’s focused on the N.B.A. playoffs,” he explained.

“King Ludd’s Revenge” takes its title from the legendary leader of the workers’ revolt in England in the early 19th century. With the ascent of A.I., the Luddites have come back into fashion. The play begins with Mr. Musk receiving a pedicure from a robot. Peter Thiel, the tech billionaire who backed President Trump in 2016, bursts into the room. “I think I’ve identified the leader of the Anti-Christ Syndicate,” he says.

Mr. Musk: “Who might that be?”

Mr. Thiel: “Greta Thunberg.”

Karen Hao on the Elon Musk vs Sam Altman courtroom drama:

…[F]ixating on questions of whether Altman is untrustworthy, or whether Musk is even less so distracts from a far deeper problem. If OpenAI lost its footing as the AI industry frontrunner, another barely distinguishable competitor – Musk’s xAI or other – would simply replace it. That includes companies like Anthropic, who enjoy a better reputation yet engage in many similar behaviors like compromising careful decision-making for speed, disregarding intellectual property, and aggressively scaling their computing infrastructure to the detriment of communities.

Nothing about this trial or OpenAI’s financial structure will change the imperial drive of these companies to consolidate ever-more data and capital, terraform the Earth, exhaust and displace labor, and embed themselves deep within the state to gain leverage over its apparatuses of violence. We would still exist in a world in which a tiny few have the profound power to cast it in their image and dictate how billions of people live.

Alright alright, that’s it for now. Have a good weekend everyone, and apologies if that was all a bit punchy, I’ve been fighting a cold all week. Until next time—hammers up.


1. I don’t love the AI populism term, but this is a good piece, worth reading.

2. I wouldn’t call the angry students AI populists. I would wager a guess that it’s not just billionaires they’re angry at, but the society that allows for the building of data centers while desiccating the arts, and the VPs of Strategic Alliances whose complicity and support have made it all possible.



Read the whole story
mrmarchant
44 minutes ago
reply
Share this story
Delete

Twist


https://commons.wikimedia.org/wiki/File:10_kvadratoj_en_kvadrato.svg

This is the most efficient way to pack 10 unit squares into a surrounding square.

It looks like something I would turn in after running out of time.
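For the numerically curious, a quick sanity check of the claim (the constants here are my addition, not from the linked image): the best-known packing of 10 unit squares uses a surrounding square of side 3 + 1/√2, usually credited to Walter Stromquist, which you can compare against the trivial area lower bound of √10:

```python
import math

# Side of the surrounding square in the best-known packing of 10
# unit squares: s = 3 + 1/sqrt(2).
s = 3 + 1 / math.sqrt(2)

# Trivial lower bound: the container must have area at least 10,
# so its side must be at least sqrt(10).
area_bound = math.sqrt(10)

print(f"best-known side:  {s:.4f}")            # ≈ 3.7071
print(f"area lower bound: {area_bound:.4f}")   # ≈ 3.1623
print(f"packing density:  {10 / s**2:.1%}")    # ≈ 72.8%
```

The gap between 3.707 and 3.162 is exactly why the optimal arrangement looks so lopsided: a tidy grid wastes even more room.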


I Literally Don't Know


You don't need to pick my brain. It won't help.


A very good, very 2026 headline: Japan Runs Out of Robot...


A very good, very 2026 headline: Japan Runs Out of Robot Wolves in Fight Against Bears. “Starting at around $4,000, each bespoke Monster Wolf is now equipped with battery power, solar panels, and detection sensors.”


It’s Still Demoralizing to Teach a Classroom of Scrolling Students (Emily Oster)


“Emily Oster is the founder and chief executive of ParentData and a professor of economics at Brown University.” This article appeared in The New York Times, May 10, 2026

In the past several years, about three dozen states have instituted phone bans in schools, and more are likely to follow. These bans have been trumpeted as game changers. Anecdotal reporting points to more books being checked out from school libraries and more students engaging with one another in the hallway. “How the Phone Ban Saved High School,” reads one headline. At the same time, respected academics have suggested that the arrival of phones in schools is linked to large test score declines in countries around the world.

It was, therefore, surprising to many people when a paper this week showed that phone bans had a very minimal impact on student behavior and academics in a nationwide sample of schools. Phone usage went down, and teachers liked the policy (all good), but test scores didn’t change much, disciplinary infractions increased in the short term and there was no demonstrable effect on bullying or student attention. Basically, not much changed.

This finding should not have been as surprising as it was. Based on what we know about phones and education, it is not realistic to expect phone bans to have enormous impacts on academic outcomes. But that doesn’t mean that they are a bad idea, or that they should be walked back. Instead, we need to approach this topic with more realistic expectations, a richer approach to what counts as a positive outcome and more help for families and schools.

The expectations for phone bans were poorly calibrated, largely because the data on which some of the more extreme claims about phones are based is subject to considerable biases. For example, a paper published last fall argued that increases in phone usage were tied to large reductions in test scores in many countries between 2012 and 2022. The study found bigger drops in test scores in countries with greater smartphone adoption. But it turns out that those were also the countries that had longer school closures during the Covid-19 pandemic. Phones may have played a role in driving test scores down, but since we know school closings mattered for academic progress, too, the emphasis on phones overstates their role.

There is also plenty of data showing that children who spend more time on social media do worse in school, but they tend to come from households with fewer resources. It may also be that problems in school are contributing to social media use, rather than the other way around. Finally, given that a lot of phone usage is outside of school, it’s unclear if these results would really apply to phone bans in school.


The paper out this week takes a better approach, looking at how test scores and behavior varied over time as schools restricted phone use by introducing Yondr pouches that lock away phones during the day. An earlier paper, which looked at variation across school districts in Florida as some introduced phone bans earlier than others, found similarly small effects on test scores. These are the studies we should be focusing on.

Over the next several years, we will get more data exploring these questions. I expect a cottage industry of papers on school phone bans — and we’ll probably also start to see results from school districts that change technology in other ways (for example, taking computers out of early childhood classrooms). We should expect to see similar results.

It would be a mistake to interpret these findings as a sign that we should forget about phone bans altogether. There are no magic bullets in education. Improving student learning is a game of inches, not miles. There is no clear positive reason for students to have phones in the classroom. No phones should be the default, and to introduce phones, we’d want to see evidence that they meaningfully improve learning or help in another way. None of that appears in the data. On the flip side, I think the knee-jerk reaction of also removing all computers and tech is an overcorrection, and unrealistic.

First, we need to alter our expectations. Phone bans may be helpful in some ways, but they aren’t a cure-all, and that shouldn’t be the bar for success.

Second, we have to get better data. Test scores are easy to measure, but a lot of the discussion around phone bans focuses on the experiences of students, how they interact with one another and whether the classroom feels engaging to both students and teachers. We should be measuring those outcomes systematically. I do not allow my students to have phones or laptops in my classroom, because screens affect their participation and, quite honestly, it’s demoralizing to look out at a classroom of kids scrolling on their phones. I’m guessing other teachers feel similarly; we should figure out how to measure and evaluate this, too.

Finally, we need to find a more helpful approach for schools and parents to manage technology. We’ve sent parents and schools messages that are simultaneously fear-inducing (“phones are ruining your children”) and overly optimistic (“phone bans will make it better”). Neither of these is true, and it’s time to move to something that promises less, but delivers more.

For schools, that may mean keeping phone bans and making additional changes, like modifying laptop use in some classrooms, while recognizing that technology is part of modern life and not the enemy. It could also mean focusing on resources and instructional support that will actually move the needle on test scores.

On the parental side, we need fewer blanket warnings about the dangers of technology and more help drawing appropriate boundaries for our kids. Teenagers absolutely need rules and restrictions on their phone use, and they need their parents to set those — and parents need help doing that. Phone bans promised an easy fix, but they aren’t magic. The faster we realize that, the faster we can make realistic progress.




Quoting Boris Mann


“11 AI agents” is meaningless as a phrase.

If I said “I have 11 spreadsheets” or “I have 11 browser tabs” to do my work, it means about the same thing.

Boris Mann

Tags: ai-agents, ai, agent-definitions
