I have had it up to here with what I call AI “woo-woo.”
The breaking point was this headline on Anthropic CEO Dario Amodei’s podcast conversation with The New York Times’ Ross Douthat.
I can dispel Dario Amodei’s doubt. We know that large language models like his company’s Claude are not conscious because the underlying architecture which drives them does not allow for it.
I am aware of, and have dipped my toes into, the larger debates about consciousness and whether we can definitively say anything is conscious - I’ve read my Daniel Dennett - but these debates, as interesting as they may be in theory, are not applicable to the workings of large language models. These things do not think, feel, reason, or communicate with intention. This is simply a fact of how they work. I’ll return to one of my favorite short explainers of this, from Eryk Salvaggio and his essay, “A Critique of Pure LLM Reason”:
Could there be an embodied AI someday that has the kinds of capabilities that should give us pause before dismissing them as a mere machine? I don’t know, maybe? I’ve read I, Robot, I’ve watched Ex Machina.
But LLMs are not that, and never will be, and yet here is the CEO of a company that just raised another $30 billion, pushing Anthropic’s valuation to $380 billion, making a truly absurd claim.
Amodei has not been the only Anthropic figure on the woo-woo-weaving PR tour. Company “philosopher” Amanda Askell has been everywhere, including on a recent episode of the Hard Fork podcast, in which she talks about her view that shaping Claude is akin to the work of raising a child.
Golly! If the people who are closest to the development of this technology are actively wondering about whether or not the models might be conscious and trying to offer guidance in the role of a parent in shaping its “character,” we’ve got some really powerful stuff here!
Unfortunately, the woo-woo isn’t limited to direct statements from Anthropic insiders. In a company profile published in The New Yorker, which has much to recommend it as a work of ethnography but is also infused with woo-woo, Gideon Lewis-Kraus gives in to the impulse to describe a large language model as a “black box,” a description that is simply not true, or is only true if you stretch the definition of “black box” to mean “some stuff happens that’s surprising.”
In truth, large language models operate as they were theorized to work in research published prior to their development. There is no genie that has been released from the bottle (or black box) to float around the room. There is a piece of technology. This woo-woo is spun in the service of creating a myth (more good stuff from Salvaggio here), a myth which signals to regular folks that we should see ourselves as disempowered in the face of such a thing. Even the people in charge of this stuff can’t really get a hold of it. What hope do the rest of us have?
This disempowerment makes us vulnerable to outsized and unevidenced claims like those in a viral Twitter essay declaring that we’re on the precipice of a disruption in the labor force unlike anything that’s occurred previously, beyond even what we can conceive of. (The viral essay gets a thorough fisking here.)
It’s all woo-woo. Even this from the ostensibly sober-minded Derek Thompson is its own form of woo-woo.
We must ask: why is Dario Amodei saying he’s not sure if his LLM is conscious? Three possibilities:
1. He’s genuinely not very bright or well informed on this stuff.
2. He’s bullshitting us.
3. He’s bullshitting himself.
Let’s dismiss number one. When I posted a screenshot of that headline from the Douthat/Amodei podcast conversation on Bluesky, lots of people showed up just to say that Amodei is an idiot, which he is not. It is important not to grant people like Dario Amodei, Sam Altman (OpenAI), or Demis Hassabis (Google) any kind of special oracle status because of their proximity to the technology, but at the same time we must recognize their agency in these discussions. The things they say are said with knowledge and intent.
In my view, the answer is some combination of 2 and 3. If you have to ask why Dario Amodei might be bullshitting us, here is your answer:
But we also cannot dismiss the notion that he is bullshitting himself. The Lewis-Kraus New Yorker piece paints a picture of a group of people in thrall to their own worldviews, views steeped in Effective Altruism, a movement whose adherents task themselves with saving not just humanity but the uncountable number of future people. While Anthropic plays down these associations (true-blue EAs are deeply concerned about AI killing us all), these delusions of importance appear to be part of the company’s overall DNA.
In his book, More Everything Forever, Adam Becker pokes through the EA movement and finds something strange, cultish, and ultimately contradictory. These are people who intend to preserve humanity by potentially destroying the Earth.
In the past, Amodei has put his p(doom) - his estimate of the probability that AI could unleash catastrophic events - at 25%. Consider the tension here. Imagine you are an Effective Altruist and you are working on a technology that you believe has a one-in-four chance of essentially exterminating humanity. Amodei says he has oriented his company’s priorities around AI safety. But a sincere belief in the danger of AI should lead you to pull the plug on your own project and then advocate forcefully for doing the same to others.
His views are irreconcilable, which is how we know it’s all woo-woo.
We gotta ignore the woo-woo because that’s all it is.
As to why people like Derek Thompson are making massive claims about the future of labor based essentially on personal, anecdotal experience, I think there are a couple of things going on:
More people are finding genuine, interesting, and surprising uses for the technology.
This appears particularly true of Claude Code, which is what Thompson is referring to here. For the uninitiated, Claude Code and a similar product, Claude Cowork, are self-directing agents that can execute tasks after being given plain-language instructions. They are, no doubt, amazing applications of this technology, and people are finding them useful.
Here’s Dan Sinykin declaring that after trying Claude Code “everything has changed.” Sinykin’s work with data visualization in publishing was previously hampered by the challenges of coding for the data sets available to him. Claude Code has removed those frictions. Sinykin sees a revolution in the digital humanities.
Max Read, who sorts through some of the current vibes in Silicon Valley, where lots of people apparently think their jobs are about to be obviated, also found Claude Code useful:
The Claudes Code and Cowork are extremely cool and impressive tools, especially to people like me with no real prior coding ability. I had it make me a widget to fetch assets and build posts for Read Max’s regular weekly roundups, a task it completed with astonishingly little friction. Admittedly, the widget will only save me 10 or so minutes of busy work every week, but suddenly, a whole host of accumulated but untouched wouldn’t-that-be-nice-to-have ideas for widgets and apps and pages and features has opened itself up to me.
I wish I could find the reference I came across a few weeks back, but someone I read remarked that Claude Code is a great advance for people who can’t code but who have a “software-shaped hole” in their lives.
That’s what Read has experienced.
We’re still in the stage of being stunned by the novelty of what Claude Code can do.
This has happened every time some leap in the capabilities of LLMs is demonstrated. When ChatGPT arrived, both high school and college English were supposedly ended. When OpenAI trotted out its Sora video generator, Hollywood was some short number of years away from elimination. When AI-generated music started arriving, etc., etc.
I find Sinykin’s work very interesting and do not doubt his amazement, and the potential for advancement in the digital humanities is genuinely exciting for people working in that field, but this is not an epoch-shaking change. Max Read’s reaction would be my own, a small jolt of pleasure at making something that saves me a little digital scutwork every week, but this is not transformative at the core of what either of these people does.
As more people explore these applications, they too will find the software-shaped holes in their lives, but we have to wonder: how numerous and how big are those holes? What is the true value of filling them?
Like, I could imagine a Claude Code application that does something I’ve vaguely desired for this newsletter: finding Bookshop.org links for any book mentioned in this column, including in the recommendation request lists of readers. This would be both a minor service to readers, allowing them to click on the link for more information, and a potential financial benefit to the charities I donate the Bookshop.org referral income to.
Claude Code could fill this hole for me, but it is a very small hole. Only somewhere between 10 and 15 percent of the people who access this newsletter ever click on any links, period. The additional revenue would be negligible, maybe $50 a year.
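For the curious, the guts of such a widget would amount to something like the minimal sketch below: a few lines of Python that take book titles (assuming they’ve already been extracted from the post) and turn them into Bookshop.org search links. The URL pattern is my guess at the obvious one, and handling an actual referral ID is left out, since that depends on details of Bookshop.org’s affiliate setup I haven’t verified.

```python
# Hypothetical sketch of the Bookshop.org link widget. Given book titles
# already extracted from a newsletter post, build clickable search links.
# The search URL pattern is an assumption; referral-ID handling is omitted.
from urllib.parse import urlencode


def bookshop_search_link(title: str) -> str:
    """Return a Bookshop.org search URL for a single book title."""
    return "https://bookshop.org/search?" + urlencode({"keywords": title})


def linkify(titles: list[str]) -> dict[str, str]:
    """Map each mentioned title to its Bookshop.org search link."""
    return {title: bookshop_search_link(title) for title in titles}


if __name__ == "__main__":
    for title, url in linkify(["White Noise", "More Than Words"]).items():
        print(f"{title}: {url}")
```

Which is sort of the point: the whole thing fits in a dozen lines, and the hole it fills is correspondingly small.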
We have to get beyond the novelty to fully understand how useful this stuff is going to be, and my bet is that it’s going to be much less transformative in the short term - and Derek Thompson’s two-year window is short term - than many seem to believe.
Part of my belief is that transformations in the labor force simply take a long time, no matter how powerful or disruptive the technology seems.
I also think that there are fewer “software-shaped holes” than many people seem to think, and that the software-shaped holes some perceive are not real to those who are in touch with and invested in their work.
This principle was nicely illustrated by an email exchange I had this week with someone who had read More Than Words (or said they had) and was not impressed.
This person wanted to convince me that I really was missing the boat by not using large language models for my writing. Their main argument was “They (LLMs) know more than you ever could.”
I replied that this was not true, because LLMs do not have access to the material that goes into my writing: my mind, my experiences, my thoughts, my feelings. It’s not clear to me how someone who claimed to have read my book had missed these important distinctions, but they were not convinced. This person wanted me to appreciate that I could write “hundreds” of books if I tapped into the power of generative AI. I replied, asking who was going to read these hundreds of books. I haven’t heard back yet.
As it happens, this email arrived just after I’d finished a draft of a proposal for what I’m hoping becomes my next book. I shouldn’t even be talking about it because my agent hasn’t read it yet, but what the hell - just completing the proposal is meaningful, because it was an exercise in proving, to myself first, that there is a book inside of me, ready to come out.
The introduction to the proposal is a kind of mini-essay ruminating on how, over the course of thirty-plus years, I went from a rank incompetent as a writing teacher to someone who now gets paid real US greenbacks to go places and talk to others about how we should approach teaching writing. One of the roots of how I became this entirely different person is “experience.”
Here’s what I had to say about that:
“Experience is the best teacher” is one of those clichés people offer up after you’ve had a lousy experience, an acknowledgement of your pain and suffering and a minor sop toward looking on the bright side, because at least you won’t do that stupid thing again, at least not in the exact same way. The reason we don’t immediately punch out the person telling us this is that we know it to be true.
One of the important shifts in my own experience of teaching was when my failures went from humiliations brought about by (literal) inexperience, to failures following good faith, but ultimately ill-conceived or ill-executed attempts at better reaching my students.
I’d crossed the line of agency where I had some measure of intention over my work and my experiences now consistently took the form of intentional experiments. Those experiments often included some element of failure, but these were failures I could work from.
Having sufficient expertise to work from this kind of intention is somewhere down the line of growth, though. At first, you’re going to simply get cuffed around by life, your lack of knowledge, and your inexperience. This can be deeply unpleasant - for example, if you forget to completely rinse down a surface that has previously been scrubbed with muriatic acid before applying a soapy cleanser with a nearly opposite pH, resulting in a toxic gas, as happened to me during a brief period of employment as a pool maintenance technician.
In the case of my near poisoning, I’d even been told by Reggie, the crew chief, to make sure to “rinse the shit” out of the pool before giving it the wash, but I did not fully appreciate what rinsing the shit out of something meant until I almost killed myself. (Reggie, positioned downwind, laughed and said, “told ya, college,” as I scrambled out of the pool and we watched our very minor airborne toxic event waft through north suburban Chicago.)
Similarly, I spent years warning my students about various pitfalls to avoid in the writing of their stories, essays, reports, etc., and yet they would relentlessly commit these sins nonetheless, sometimes minutes after they had been instructed otherwise. I would pull my hair, gnash my teeth, rend my garments, and plead with them to pay better attention to my instruction next time, but it never worked.
Why? Because we learn through experience. Not incidentally, the capacity to learn from real world experiences is a hard and permanent difference between human beings and AI. (This generation of AI, at least.)
There is one small part here that particularly tickles me, and it is a reminder to myself of why writing requires me to just do the work of writing. I’m talking about the parenthetical about Reggie and me watching our “very minor airborne toxic event.”
What tickles me is that I retain these specific memories of Reggie, a guy I worked with for all of six weeks and never talked to again, but who was a genuine character, and that this memory combined, in the moment of drafting, with a reference to Don DeLillo’s White Noise, a book I would read for the first time in a postmodern literature course the semester after the summer I worked with Reggie.
How amazing is that, that my mind can reach back 35 years and combine these things into a piece of writing that I produced in February of 2026? I could quite easily have prompted an LLM to write a book proposal, to conjure possible chapters, comparative titles, and audience analysis, and it all would’ve been plausible, but it wouldn’t have been mine.
I think Derek Thompson is undervaluing two things. One is the importance of prior experience to using this technology in genuinely, enduringly useful ways that move beyond novelty. To use these applications productively, we must understand the deep context of our work. It is seductive to believe we can do everything faster, but I think this is a false hope when it comes to both the efficiency and the quality of our work.
Interestingly, at least some coders are recognizing similar limitations in outsourcing work to Claude Code: the outsourcing removes important context that would otherwise allow them to understand the full picture of what they’re creating. This post from Reddit is deeply thoughtful on the challenge.
One of the byproducts of any automating technology is the erasure of context. GPS erases the work of navigation. Using LLMs for writing erases the experience and unique intelligence behind a text. My work as a writer no doubt biases my thinking, but it’s my view that these contexts are far more important than some would like to believe, and that the siren call of increased speed and efficiency may send lots of folks down a false path. I think we’re going to see a lot of visions of transformation later revealed as mirages as we lurch toward novelty and then have to retreat in order to ground the work in context.
It also has the potential for both labor deskilling and self-alienation. One reason to write my own book proposal is that having done so makes me excited and eager to work on the book. This makes me feel good. It will also help me make a better book. A shortcut to a faster proposal would, in the long run, be a detriment to the final product. I would’ve liked to have had this done months ago, and by using an LLM to make a simulation of it, I probably could have, maybe could even have sold it, but then I’d be sitting here wondering how I’m going to write a book that’s not mine.
Looked at through the lens of life and experience, rather than transaction and output, nothing is immaterial, including those six weeks where you were part of Reggie’s pool cleaning crew.
Update on my career-making correspondence with Reese W.
Last week I shared a series of emails with Reese W., who is offering me an opportunity to promote my work.
I didn’t have a lot of time to correspond with Reese W. this past week, but I did want to update her on my efforts to secure the $100 necessary to take advantage of the incredible promotional opportunity.
Reese remains very understanding.
Links
This week at the Chicago Tribune I had the pleasure of describing my pleasure at reading the short story collection Hey You Assholes.
At Inside Higher Ed I rounded up some recent events in higher education that are absolutely, positively, nuts.
After drafting the main text of this newsletter, I came across Freddie deBoer’s post offering a bet to Scott Alexander that AI technology will be much less disruptive than many think. I’m not smart enough to work through the specific criteria of the bet, but directionally, on this issue, I’m team deBoer.
Another piece I wish I’d read before I drafted the main text, also assessing how big a change agents like Claude Code may be by looking back at a different software revolution that wasn’t.
Via my friends, “Give Us Access to Your Ring Camera and Maybe We’ll Find Your Dog” by Madeline Goetz and Will Lampe.
Recommendations
1. Dreaming the Beatles by Rob Sheffield
2. Burn Book by Kara Swisher
3. I Want to Burn this Place Down by Maris Kreizman
4. Lorne by Susan Morrison
5. There’s Always this Year by Hanif Abdurraqib
Michael G. - Royse City, TX
What we got here is a fan of the personal/cultural/memoir-ish essay. I’m having a hard time choosing between the two that came to mind, so I’m going to break a rule and recommend them both. One is Foreskin’s Lament by Shalom Auslander and the other is Devil in the Details by Jennifer Traig.
Back on the road next week to spread the gospel of treating learning to write as an experience, but I should hopefully have time to maintain my correspondence with Reese W. Let me know in the comments if you have any suggestions for what misfortune may befall me.
Also, what software-shaped holes do you have in your lives? I’m curious whether my intuition that these gaps are smaller than some think holds up.
Thanks, as always, for reading. Spare a good thought for my book proposal, and I’ll see you next week.
JW
The Biblioracle