A couple of years ago, when AI had come onto the market and was clearly reshaping society, I decided that I would have nothing to do with it: I would boycott. The decision was based partly on morals and partly on practicality; I suspected that, underneath the genuinely impressive technological breakthrough, AI was still, essentially, a one-trick pony. And now that AI has evidently gone through another revolution, vastly improving in recent models and opening up a whole world of coding to the public at large, it seems I should re-examine my decision to boycott. That re-examination, actually, is one of the easiest decisions I’ve ever made. I am more adamant about the boycott than ever.
I might be a lot more interested in developments with AI if I hadn’t already seen this movie. It played out across a great swath of my life in the form of social media. It featured all the same actors making all the same promises early on—and all of them following the same playbook to extract our attention, as if it were a mineable resource, and then selling that for ads, as well as whatever personal data we were unwise enough to leave unguarded on our own devices. I remember showing up on a college campus looking forward to young people connecting with each other; instead—Facebook had just come out—everybody seemed to be sealed up in their rooms carrying out a facsimile of social exchange. All this time later, has anything really improved? Every time I go into my bank account I seem to find some recurring subscription that I signed up for in the flush of tech optimism and long ago stopped using—but which still clings to my wallet like a barnacle to a ship’s hull.
So what’s different this time around? Well, if in the previous installment of this series, the tech companies targeted social relationships, and hacked at the envy and FOMO and anxiety that animates social relationships as much as the substance itself, now they’ve made their way into a different kind of space—the deep privacy of people’s innermost lives. In the AI regime, people confess their darkest secrets to their LLM therapist, which spits back out what they want to hear—although without the legal confidentiality of actual therapy sessions, with everything forming a digital record, and with the tech companies, on a whim, sometimes posting the raw text of chats onto the open internet. People earnestly ask AI about the existence of God or the meaning of life in the way they might once have sunk down to their knees and searched the silence of their own soul. And people freely turn their most creative projects, sometimes their life’s work, over to AI under the bashful premise that they themselves are not that good at the thing they most love to do and it’s better handled by the bot.
Now that we’re a couple of years into this whole set of developments, the essentials of it—and the battle lines—are a bit more clear. It’s not really about what AI can do or what it can’t do, whether it will take all the jobs or won’t, whether it will destroy the world or not. The question is about agency—do you choose to exert agency in your own life, in the way that humans always have and were doing just fine with until, like, three years ago? Or do you prefer to turn it over to a machine, which really means turning it over to the data miners and the advertising innovators in the world’s largest tech corporations?
Maintaining an AI boycott, it has to be acknowledged, is getting progressively harder, and I am not always the strictest Luddite about it. I was hooked into various forms of AI before they started being called AI: Google Translate, Apple’s predictive text feature, and so on. If I think I have cancer for some ridiculous reason, I no longer need to go to all the trouble of clicking on WebMD to be told I don’t have cancer; the AI tools can do it with a glance at the screen.
But... it’s really not so difficult either. Every so often, in a moment of weakness, I’ll download an AI app onto my phone and the process will begin all over again: I rack my brains to come up with some way in which the technology might actually benefit my life. So far, I haven’t come up with anything. I like structuring and writing my own essays; I have no idea why I would outsource that to AI. The advice it gives always starts out kind of interesting, and there’s always a moment of being awed by the technology, before it reveals itself to be gleaned from a couple of random pieces of information on the web. The actual substance of my life just seems to lie somewhere AI can’t touch or help, unless I proactively turn myself over to the bot.
Meanwhile, I can feel the whole world around me doing exactly that. It’s very common to have a conversation where someone shares their great idea, only to reveal that it came from AI. Teaching (I teach journalism and public relations at an international university) has largely become a matter of trying to suss out what is AI and what isn’t. And since AI became prevalent, and all my students started using it, they have become distinctly lazier. They clearly feel that the AI is just better than they are and that there’s no real point in trying.
The general impression is of an invasion of the body snatchers occurring all around me: whenever I deal with anyone, it tends to feel like they’re becoming a kind of frontman for AI-generated work, without the work itself becoming in any way obviously better.
The lecture I tend to give my students isn’t moral at all. It’s just very practical. They may fool their teachers sometimes with AI work, but their teachers are starting to catch on: all it takes is running their work through an AI detector, or insisting on paper-and-pen exams, to starkly reveal how much they’ve been relying on the technology. And they certainly won’t be able to fool their employers. If they show up in the workforce using AI for everything, their employers will draw the obvious conclusion and simply replace their jobs with AI. If they use AI for their products, they will be competing with rivals using exactly the same sets of AI tools and generating exactly the same kinds of work; there will be no way to distinguish themselves or to get ahead.
For adults, the question is a little more existential. It isn’t so much how to adapt to the world as what kind of world you really want to build and transmit onwards. The value that AI presupposes is optimization: doing things really, really well. AI text is always very clean, never a hint of, say, a spelling mistake; it really can do just about any task to a high level (and, where it can’t yet, it is obviously a work in progress and getting better all the time).
But whoever said that life is about optimization? If you simply shift your focus and presuppose that life is about enriching human experience and finding meaning, then the value of AI almost instantly dwindles away. I’ve heard of people using AI to write their diaries, but the whole point of a diary isn’t to produce some sort of masterwork; it’s to record a particular day in the highly subjective, idiosyncratic way that makes it meaningful to you. A friend who works for a travel writing company told me that the boss held a meeting announcing that they should “welcome in AI,” with the result that, only a few months later, virtually everyone at the company had lost their jobs, and the ones who stayed were basically there to check AI-generated posts for hallucinations. But travel isn’t just a bundle of information; the whole point of travel is the relationship between you, the traveler, and the place visited. The result of the travel industry’s turn to AI is that it now wouldn’t even occur to me to read one of their posts.
The assumption at the moment is that AI “is the future”—a phrase like that is the underpinning of just about any conversation on AI. I’m not sure actually. I could sort of imagine this spinning the other way, that the pace of LLM improvement slows down, that more and more of what they’re generating is obviously “slop,” and that the enthusiastic early adopters are the ones who will be caught out—who will be remembered in a few years for having downsized their workforce for the sake of a bot; or for having skipped out on their education, on a crucial period of investing in themselves, in order to generate work that turns out to be indistinguishable from the machine-generated work that everybody is producing.
But the ship really seems to be sailing on that fantasy of mine. AI has already moved into a “too big to fail” category—the economy isn’t producing much, but it is producing AI and that’s the horse that we’re collectively hitching our wagons to; the tech companies haven’t quite been able to figure out a clear use for AI, but they’ve been very good at generating stickiness—at persuading people to download the free samples of AI and to use it for one thing or another so that it already becomes unimaginable to write a student paper or to prepare a recipe without it.
AI may well push through the slight bottleneck it’s been in for the past couple of years and prevail. The dissenting voices are getting a little lonelier; when I raise this point with people, the glazed look I get back tends to mean that my interlocutor has already turned AI into a habit. But even if AI wins out in the culture wars, that doesn’t prove the larger point. AI isn’t some guaranteed future that we have to succumb to whether we like it or not. Its adoption rests on choices, the individual choices of billions, and there is agency, actually, in deciding whether to use it or not. That agency may be all the greater since we were fooled once by the tech companies’ marketing gimmicks in the high social media era of the 2010s, and should at least be wiser to the tricks they are pulling with AI in the 2020s.
At the moment, the conversation about AI is basically a lot of noise. Most of it turns on AI’s capabilities. The tendency to hallucinate and to generate confident slop was embarrassing for quite a while; the fact that the models seem to be pushing past the worst hallucinatory tendencies is supposed to quiet the doubters. But the conversation about capabilities is, I am convinced, basically a distraction. The technology is already impressive and will continue to improve; that’s not in dispute. But cloning and nuclear technology are also impressive, and we have placed strict guardrails around them.
What’s at stake isn’t really technology—it’s about taking a good hard look at what our fundamental values are and questioning whether AI aligns with them. The question isn’t whether AI is a stochastic parrot or not; the question is whether you are.
Sam Kahn is associate editor at Persuasion, writes the Substack Castalia, and edits The Republic of Letters.