
Can We Please Stop it with the AI Woo-Woo?


I have had it up to here with what I call AI “woo-woo.”

The breaking point was this headline on Anthropic CEO Dario Amodei’s podcast conversation with The New York Times’ Ross Douthat.

I can dispel Dario Amodei’s doubt. We know that large language models like his company’s Claude are not conscious because the underlying architecture which drives them does not allow for it.

I am aware of, and have dipped my toes into, the larger debates about consciousness and whether we can definitively say anything is conscious - I’ve read my Daniel Dennett - but these debates, as interesting as they may be in theory, are not applicable to the work of large language models. These things do not think, feel, reason, or communicate with intention. This is simply a fact of how they work. I shall return to one of my favorite short explainers of this from Eryk Salvaggio and his essay, “A Critique of Pure LLM Reason”:

Could there be an embodied AI someday with the kinds of capabilities that should give us pause before dismissing it as a mere machine? I don’t know, maybe? I’ve read I, Robot, I’ve watched Ex Machina.

But LLMs are not that and never will be, and yet here is the CEO of a company that just raised another $30 billion, pushing Anthropic’s valuation to $380 billion, making a truly absurd claim.

Amodei has not been the only Anthropic figure on the woo-woo-weaving PR tour. Company “philosopher” Amanda Askell has been everywhere, including a recent episode of the Hard Fork podcast in which she talks about her view that shaping Claude is akin to the work of raising a child.

Golly! If the people who are closest to the development of this technology are actively wondering about whether or not the models might be conscious and trying to offer guidance in the role of a parent in shaping its “character,” we’ve got some really powerful stuff here!

Unfortunately, the woo-woo isn’t limited to direct statements from Anthropic insiders. In a company profile published at The New Yorker, which has much to recommend it as a work of ethnography but is also infused with woo-woo, Gideon Lewis-Kraus gives in to the impulse to describe a large language model as a “black box,” a description that is simply not true, or is only true if you stretch the definition of “black box” to mean “some surprising stuff happens.”

In truth, large language models operate as they were theorized in a paper prior to their development. There is no genie that has been released from the bottle (or black box) and is now floating around the room. There is a piece of technology. This woo-woo is spun in the service of creating a myth (more good stuff from Salvaggio here), a myth which signals to regular folks that we should see ourselves as disempowered in the face of such a thing. Even the people in charge of this stuff can’t really get a hold of it. What hope do the rest of us have?

This disempowerment makes us vulnerable to outsized and unevidenced claims like those in a viral Twitter essay that claimed we’re on the precipice of a disruption in the labor force unlike anything that’s occurred previously, beyond even what we can conceive of. ( fisks the viral essay here.)

It’s all woo-woo. Even this from the ostensibly sober-minded Derek Thompson is its own form of woo-woo.

We must ask, why is Dario Amodei saying he’s not sure if his LLM is conscious? Three possibilities:

  1. He’s genuinely not very bright or well informed on this stuff.

  2. He’s bullshitting us.

  3. He’s bullshitting himself.

Let’s dismiss number one. When I posted a screenshot of that headline from the Douthat/Amodei podcast conversation on Bluesky, lots of people showed up just to say that Amodei is an idiot, which he is not. It is important not to grant people like Dario Amodei, Sam Altman (OpenAI), or Demis Hassabis (Google) any kind of special oracle status because of their proximity to the technology, but at the same time we must recognize their agency in these discussions. When they say things, they say them with knowledge and intent.

In my view, the answer is some combination of 2 and 3. If you have to ask why Dario Amodei might be bullshitting us, that fresh $30 billion and the $380 billion valuation are your answer.

But we also cannot dismiss the notion that he is bullshitting himself. The Lewis-Kraus New Yorker piece paints a picture of a group of people in thrall to their own worldviews, views steeped in Effective Altruism, a movement whose members task themselves with being responsible for saving not just humanity, but the uncountable number of future people. While Anthropic plays down these associations (true-blue EAs are deeply concerned about AI killing us all), these delusions of importance appear to be part of the overall DNA.

In his book, More Everything Forever, Adam Becker pokes through the EA movement and finds something strange, cultish, and ultimately contradictory. These are people who intend to preserve humanity by potentially destroying the Earth.

In the past, Amodei has put his p(doom), the probability he assigns to AI unleashing catastrophic events, at 25%. Consider the tension here. Imagine you are both an Effective Altruist and working on technology that you believe has a one-in-four chance of essentially exterminating humanity. Amodei says he has oriented his company’s priorities around AI safety. But a sincere belief in the danger of AI should lead you to pull the plug on your own project and then advocate forcefully for doing the same to others.

His views are irreconcilable, which is how we know it’s all woo-woo.

We gotta ignore the woo-woo because that’s all it is.

As to why people like Derek Thompson are making massive claims about the future of labor based essentially on personal, anecdotal experience, I think there are a couple of things going on:

  1. More people are finding genuine, interesting, and surprising uses for the technology.

This appears particularly true of Claude Code, which is what Thompson is referring to here. For the uninitiated, Claude Code (and a similar product, Claude Cowork) is a self-directing agent that can execute a task after being prompted with plain-language instructions. These are, no doubt, amazing applications of this technology, and people are finding them useful.

Here’s Dan Sinykin declaring that after trying Claude Code “everything has changed.” Dan’s work with data visualization in publishing was previously hampered by the challenges of coding for the data sets available to him. Claude Code has removed those frictions. Sinykin sees a revolution in digital humanities.

Max Read, who sorts through some of the current vibes in Silicon Valley, where lots of people apparently think their jobs are about to be obviated, also found Claude Code useful:

The Claudes Code and Cowork are extremely cool and impressive tools, especially to people like me with no real prior coding ability. I had it make me a widget to fetch assets and build posts for Read Max’s regular weekly roundups, a task it completed with astonishingly little friction. Admittedly, the widget will only save me 10 or so minutes of busy work every week, but suddenly, a whole host of accumulated but untouched wouldn’t-that-be-nice-to-have ideas for widgets and apps and pages and features has opened itself up to me.

I wish I could find this reference I came across a few weeks back, but someone I read remarked that Claude Code is a great advance for people who can’t code but who have a “software-shaped hole” in their lives.

That’s what Read has experienced.

  2. We’re still in the stage of being stunned by the novelty of what Claude Code can do.

This has happened every time some leap in the capabilities of LLMs is demonstrated. When ChatGPT arrived, both high school and college English were supposedly ended. When OpenAI trotted out its Sora video generator, Hollywood was some short number of years away from elimination. When AI-generated music started coming out, etc., etc…

I find Sinykin’s work very interesting and do not doubt his amazement, and the potential for advancement in the digital humanities is very exciting for people working in the digital humanities, but this is not an epoch-shaking change. Max Read’s reaction would be my own: a small jolt of pleasure at making something that saves me a little digital scutwork every week, but this is not transformative at the core of what either of these people do.

As more people explore these applications, they too will find the software-shaped holes in their lives, but we have to wonder: how numerous and how big are those holes? What is the true value of filling them?

Like, I could imagine a Claude Code application that does something I’ve vaguely desired for this newsletter - finding Bookshop.org links for any book mentioned in this column, including in the recommendation request lists of readers. This would be both a minor service to readers, by allowing them to click on the link for more information, and a potential financial benefit to the charities I donate the Bookshop.org referral income to.

Claude Code could fill this hole for me, but it is a very small hole. Only somewhere between 10 and 15 percent of the people who access this newsletter ever click on any links, period. The additional revenue would be negligible, maybe $50 a year.
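For the curious, the link-building half of that imagined widget is genuinely small; it’s the title extraction and judgment that would go to the agent. Here’s a minimal sketch, with the caveat that the affiliate parameter and the hard-coded titles are illustrative placeholders, not Bookshop.org’s actual referral mechanics:

```python
# Minimal sketch of the imagined Bookshop.org widget. The affiliate
# parameter below is a hypothetical placeholder, not a verified API.
from urllib.parse import quote_plus

AFFILIATE_ID = "12345"  # hypothetical affiliate/shop ID

def bookshop_link(title: str) -> str:
    # Assumed pattern: a Bookshop.org search URL plus an affiliate tag.
    return (f"https://bookshop.org/search?keywords={quote_plus(title)}"
            f"&affiliate={AFFILIATE_ID}")

# In the real widget, the agent would extract titles from the draft;
# here we hard-code a few from this newsletter's recommendation list.
mentioned = ["Dreaming the Beatles", "Burn Book", "Lorne"]

for title in mentioned:
    print(f"{title}: {bookshop_link(title)}")
```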

We have to get beyond the novelty to fully understand how useful this stuff is going to be, and my bet is that it’s going to be much less transformative in the short term - and Derek Thompson’s two year window is short term - than many seem to believe.

Part of my belief is that I am team . Transformations in the labor force simply take a long time, no matter how powerful or disruptive the technology seems.

I also think that there are fewer “software-shaped holes” than many people seem to think, and that the software-shaped holes some perceive are not real to those who are in touch with and invested in their work.

This principle was nicely illustrated by an email exchange I had this week with someone who had read More Than Words (or said they had) and was not impressed.

This person wanted to convince me that I really was missing the boat by not using large language models for my writing. Their main argument was “They (LLMs) know more than you ever could.”

I replied that this was not true because LLMs do not have access to the material that goes into my writing, my mind, my experiences, my thoughts, my feelings. It’s not clear to me how someone who had claimed to have read my book had missed these important distinctions, but they were not convinced. This person wanted me to appreciate that I could write “hundreds” of books if I tapped into the power of generative AI. I replied, asking who was going to read these hundreds of books. I haven’t heard back yet.

As it happens, this email arrived just after I’d finished a draft of a proposal for what I’m hoping becomes my next book. I shouldn’t even be talking about it because my agent hasn’t even read it yet, but what the hell, just completing the proposal is meaningful because it was an exercise in proving to myself first that there is a book inside of me, ready to come out.

The introduction to the proposal is a kind of mini-essay ruminating on how, over the course of thirty-plus years I went from a rank incompetent as a writing teacher to someone who now gets paid real US greenbacks to go to places and talk to others about how we should approach teaching writing. One of the roots of how I have become this entirely different person is “experience.”

Here’s what I had to say about that:

“Experience is the best teacher” is one of the cliches that people offer up after you’ve had a lousy experience, an acknowledgement of your pain and suffering and a minor sop towards looking on the bright side because at least you won’t do that stupid thing again, at least not in the exact same way. The reason we don’t immediately punch out the person telling us this is because we know it to be true.

One of the important shifts in my own experience of teaching was when my failures went from humiliations brought about by (literal) inexperience, to failures following good faith, but ultimately ill-conceived or ill-executed attempts at better reaching my students.

I’d crossed the line of agency where I had some measure of intention over my work and my experiences now consistently took the form of intentional experiments. Those experiments often included some element of failure, but these were failures I could work from.

Having sufficient expertise to work from this kind of intention is somewhere down the line of growth, though. At first, you’re going to simply get cuffed around by life, your lack of knowledge, and your inexperience. This can be deeply unpleasant, for example if you forget to completely rinse down a surface that has previously been scrubbed with muriatic acid before applying a soapy cleanser with the nearly opposite pH, resulting in a toxic gas, as happened to me during a brief period of employment as a pool maintenance technician.

In the case of my near poisoning, I’d even been told by Reggie, the crew chief, to make sure to “rinse the shit” out of the pool before giving it the wash, but I did not fully appreciate what rinsing the shit out of something meant until I almost killed myself. (Reggie, positioned downwind, laughed and said, “told ya, college” as I scrambled out of the pool and we watched our very minor airborne toxic event waft through north suburban Chicago.)

Similarly, I spent years warning my students about various pitfalls to avoid in the writing of their stories, essays, reports, etc…and yet they would relentlessly commit these sins nonetheless, sometimes minutes after they had been instructed otherwise. I would pull my hair, gnash my teeth, rend my garments and plead with them to pay better attention to my instruction next time, but it never worked.

Why? Because we learn through experience. Not incidentally, the capacity to learn from real world experiences is a hard and permanent difference between human beings and AI. (This generation of AI, at least.)

There is one small part here that particularly tickles me, and is a reminder to myself why writing requires me to just do the work of writing. I’m talking about the parenthetical about Reggie and me watching our “very minor airborne toxic event.”

What tickles me is that I retain these specific memories of Reggie, a guy I worked with for all of six weeks and never talked to again, but who was a genuine character, and that this memory combined, in the moment of drafting, with a reference to Don DeLillo’s White Noise, a book I would read for the first time in a postmodern literature course the semester after the summer I worked with Reggie.

How amazing is that, that my mind can reach back 35 years and combine these things into a piece of writing that I produced in February of 2026? I could quite easily have prompted an LLM to write a book proposal, to conjure possible chapters, comparative titles, and audience analysis, and it all would’ve been plausible, but it wouldn’t have been mine.

I think Derek Thompson is undervaluing two things. One is the importance of prior experiences to being able to use this technology in genuinely, enduringly useful ways that move beyond novelty. To use these applications productively, we must understand the deep context of our work. It is seductive to believe we can do everything faster, but I think this is a false hope when it comes to both the efficiency and quality of our work.

Interestingly, at least some coders are recognizing similar limitations in outsourcing work to Claude Code: the outsourcing removes important context that allows them to understand the full picture of what they’re creating. This post from Reddit is deeply thoughtful on this challenge.

One of the byproducts of any automating technology is the erasure of context. GPS erases the work of navigation. Using LLMs for writing erases the experience and unique intelligence behind a text. My work as a writer no doubt biases my thinking, but it’s my view that these contexts are far more important than some would like to believe and that the siren call of increased speed and efficiency may send lots of folks down a false path. I think we’re going to see a lot of visions of transformation later revealed to be mirages as we lurch toward novelty and then have to retreat in order to ground the work in context.

This outsourcing also has the potential for both labor deskilling and self-alienation. One reason to write my own book proposal is that having done so makes me excited and eager to work on the book. This makes me feel good. It will also help me make a better book. A shortcut to a faster proposal would, in the long run, be a detriment to the final product. I would’ve liked to have this done months ago, and by using an LLM to make a simulation of it, I probably could have, maybe could even have sold it, but then I’d be sitting here wondering how I’m going to write a book that’s not mine.

Looked at through the lens of life and experience, rather than transaction and output, nothing is immaterial, including those six weeks where you were part of Reggie’s pool cleaning crew.


Update on my career-making correspondence with Reese W.

Last week I shared a series of emails with Reese W., who is offering me an opportunity to promote my work.

I didn’t have a lot of time to correspond with Reese W. this past week, but I did want to update her on my efforts to secure the $100 necessary to take advantage of the incredible promotional opportunity.

Reese remains very understanding.

Links

This week at the Chicago Tribune I had the pleasure of describing my pleasure at reading ’s short story collection, Hey You Assholes.

At Inside Higher Ed I rounded up some recent events in higher education that are absolutely, positively, nuts.

After drafting the main text of this newsletter I came across Freddie deBoer’s post offering a bet to Scott Alexander that AI technology will be much less disruptive than many think. I’m not smart enough to work through the specific criteria of the bet, but directionally, on this issue, I’m team deBoer.

Another piece I wish I’d read before I drafted the main text, also assessing how big a change agents like Claude Code may be by looking back at a different software revolution that wasn’t.

Via my friends: “Give Us Access to Your Ring Camera and Maybe We’ll Find Your Dog” by Madeline Goetz and Will Lampe.


Recommendations

1. Dreaming the Beatles by Rob Sheffield
2. Burn Book by Kara Swisher
3. I Want to Burn This Place Down by Maris Kreizman
4. Lorne by Susan Morrison
5. There’s Always This Year by Hanif Abdurraqib

Michael G. - Royse City, TX

What we got here is a fan of the personal/cultural/memoir-ish essay. I’m having a hard time choosing between two that came to mind so I’m going to break a rule and recommend them both. One is Foreskin’s Lament by Shalom Auslander and the other is Devil in the Details by Jennifer Traig.

Request your own recommendation.

Back on the road next week to spread the gospel of treating learning to write as an experience, but I should hopefully have time to maintain my correspondence with Reese W. Let me know in the comments if you have any suggestions for what misfortune may befall me.

Also, what software-shaped holes do you have in your lives? I’m curious whether my intuition that these gaps are smaller than some think is true.


Thanks, as always, for reading. Spare a good thought for my book proposal and I’ll see you next week.

JW
The Biblioracle

Here you are at the bottom having read 4000 words by some guy. Subscribing so you can do it again might make sense.


The AI Vampire


Steve Yegge's take on agent fatigue, and its relationship to burnout.

Let's pretend you're the only person at your company using AI.

In Scenario A, you decide you're going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.

In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain't gonna be 9x your salary. And everyone hates you now.

And you're exhausted. You're tired, Boss. You got nothing for it.

Congrats, you were just drained by a company. I've been drained to the point of burnout several times in my career, even at Google once or twice. But now with AI, it's oh, so much easier.

Steve reports needing more sleep due to the cognitive burden involved in agentic engineering, and notes that four hours of agent work a day is a more realistic pace:

I’ve argued that AI has turned us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving. I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice.

Via Tim Bray

Tags: steve-yegge, ai, generative-ai, llms, ai-assisted-programming, ai-ethics, coding-agents


normal people don't use the internet


I've been thinking about the small web, the indie web, and the seeming resurgence of personal blogs. Let me get right to it: The only people talking about this community-centric (?), personally driven Internet are tech people. Programmers, developers, and so on. Nobody outside of tech is talking about this, unless they happen to be terminally online. (Like me, hello!) This "indie web" is incredibly technical—just try parsing microformats. And, of course, none of that is necessary...but what is a non-technical person supposed to do? Pay money to use Squarespace? Create a Mastodon account? (A social network that has become primarily for programmers and journalists, it seems.)
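To be concrete about "just try parsing microformats": here's a minimal sketch of what that actually involves, using the mf2py parsing library. The markup is an illustrative h-card, not anyone's real site.

```python
# What "parsing microformats" involves, sketched with the mf2py
# library (pip install mf2py). The h-card below is illustrative.
import mf2py

html = """
<div class="h-card">
  <a class="p-name u-url" href="https://example.com">A Normal Person</a>
</div>
"""

parsed = mf2py.parse(doc=html)
# mf2py returns a nested dict of "items", "rels", and so on - already
# more JSON plumbing than most non-technical people will ever want.
for item in parsed["items"]:
    print(item["type"], item["properties"])
```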

No, I don't think any normal person who wants to escape the cage of social media is going to rent a VPS and install a headless Linux OS just to host a blog they have to code themselves. (Maybe they know HTML 2.0 from the early days, but now there's HTML 5? Not that there's much difference, mind you, especially for a personal website, but two to five is a big jump!)

Normal people don't use the internet; they use social media. All of the blog posts from developers who have largely left behind social media, who hand-code a "status" page on their personal websites, celebrating their one or two years free of social media (all the while running curated group chats on #pick-your-platform), are completely disconnected from the normal person who doomscrolls and follows influencers. Amongst the #SponCon, they still see life updates from their friends; they still have individual and group chats on proprietary platforms; they search on google dot com and read the AI summary because, well, there it is.

Normal people don't scrub the URLs they share for tracking links! They don't even know how to parse a URL—the domain looks right and, well, most of the other information they don't even see because they're using the share sheet on their phone's browser.

I think this has actually divided the internet. For all intents and purposes, normal people simply don't access the internet as it was 20 years ago (which is roughly where I'd pin what this small/indie web is trying to reach—not to go back in time, but to find that same freedom). They're using siloed social media, they're logging into Google Workshop or whatever, they're ignoring Gmail, they're making a restaurant reservation on OpenTable, and so on.

All of this is honestly just to complain about tech people blogging. I can't read another resume-building post on what development is coming to CSS! Who cares!!! Email your colleagues! If you want the small web, start journaling. Start commenting on the world around you. At least start talking about some hobby in programming, not your fucking day job, god.



Some Things I Believe (Part 1)


It was a Monday afternoon in early December and, boy, was I nervous.

I was giving a webinar on Story Tables. This is the first of several posts I hope to write about that day.

I always start presentations by introducing myself, telling some joke about the venue we are in or how little sleep we’ve all had, thanking all of the incredible people I’ve worked with, sharing a brief, yet vague agenda. Nothing terribly out of the ordinary.

Then, before we discuss any Story Tables, I spend about 5-10 minutes talking about one slide. I have found that it shapes our conversations and, when we come back to reflect on the same slide at the end, makes the work we have done together all the more powerful. The exact words have shifted over time but the title remains the same.

Here’s that slide from the Monday in early December.

1. Math is about making sense.

There is a lot of heated debate about what is important to teach and know in math, and why it matters in the first place. I don’t always know where I fall, but I do believe this: if the math that students are doing doesn’t make sense to them, something is wrong.

2. A sense of belonging is crucial.

If you asked me in high school what needed to be true for people to learn, I would have talked all about the content - what math we learned and maybe how the teacher explained it. I even once scheduled a meeting with the department chair formally requesting that we include more matrices in the curriculum.

Another thing that was true at that time is that I was bullied - a lot. The classroom, and especially the math classroom, was one place I felt like I belonged. When other spaces didn’t feel safe for me to share my ideas without fear of ridicule, there was at least one place that was.

That hasn’t been true for many students (and adults) I’ve met. I believe learning requires vulnerability, a willingness to share a half-formed idea or to analyze a mistake you’ve made and work to correct it. Without a sense of belonging, that’s just really hard.

Zaretta Lynn Hammond has a quote I think about a lot. “Attention drives learning. Neuroscience reminds us that before we can be motivated to learn what is in front of us, we must pay attention to it.” It’s really hard to pay attention if your energy is going towards wondering whether you’ll get laughed at for saying something wrong or whether you should have a place there at all.

Ask me now what needs to be true for people to learn and my answer is different.

3. One key to 👆 is owning math ideas.

I feel like I’ve won if every student thinks they have something valuable to share and all their classmates have something valuable to share with them.

4. The language of algebra is a gatekeeper.

I still remember how I felt when I saw this blogpost from Ben Orlin. Here’s the example I think about the most, though they’re all excellent.

THIS. This is it.

When I look at an equation, I see its structure - there’s something being squared and you take that number and subtract it from 7. Lots of my students see it as a mess of symbols, or, as my student Ariel* put it, as “alien language”.
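Since Orlin’s image isn’t reproduced here, here’s a hypothetical stand-in for the kind of equation I mean, written once in symbols and once the way I read it:

```latex
% Hypothetical stand-in for the missing image: the same equation,
% as symbols and as structure.
\[
7 - (x + 2)^2 = 3
\qquad\text{reads as}\qquad
7 - (\text{something})^2 = 3
\]
```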

What brilliant ideas could my students have if the world looked more to them like the image on the right than the one on the left?

*All student names in my posts are changed for their privacy.

What You Believe

You don’t have to believe the things I do. In fact, it would be surprising if you did.

After I talk about what I believe, I like to spend a minute on this slide.

If you were making a list of things you believe, what would you put on it?

A (Small) Lie

Okay. So the image I showed you at the beginning of this post isn’t exactly the one I shared in December. It’s close, but missing something at the bottom.

What’s the fifth thing I believe, the “One More”?

Well, that’s the subject of my next post.





You Can’t Trust the Internet Anymore


I like things that are strange and a bit obscure. It’s a habit of mine, and a lot of this blog is to document things I haven’t heard of before, because I wanted to learn about them. I mean, jeez, I’m certainly not writing blog posts about strip mahjong because the people demand it. But I can’t stop seeing misinformation everywhere, and I have to say something. This post is just a rant.

Phantasy Star Fukkokuban

This is Phantasy Star Fukkokuban, a Japanese Sega Genesis game released in 1994 to commemorate the release of Phantasy Star IV by re-releasing the original. It has an interesting quirk: it is the Master System game, just packaged into a Genesis cart. The PCB wires the Genesis lines the same way your Power Base Converter would. My guess is the reason for this is that the Master System wasn’t very popular in Japan, and Phantasy Star IV tied together the whole series with a lot of tiebacks to the first one in particular.

Phantasy Star Fukkokuban, which uses the Phantasy Star box art on a Japanese cartridge shell.

As a Master System game disguised as a Genesis one, this game is technically interesting. Some Genesis consoles can’t play Master System games, and those ones can’t play this game either. Also, I love the Phantasy Star series; even if 2 is my favorite. This makes this cartridge a perfect subject for my interest, so I’ve talked about it before and will talk about it again. In fact, I have a post I’m working on where I mention it.

Phantasy Star title screen. (C) SEGA 1987

So there I was, writing a blog post, and wanted to look up the release date. The first result I found in DuckDuckGo, my search engine?

DuckDuckGo search results. First, GameFAQs. Second, TCRF. Third, Press Start Gaming. An abandonware site is at the bottom

GameFAQs is at the top, a titan since the 1990s. The second result is The Cutting Room Floor, a wiki much beloved by myself. And then the third result is “Press Start Gaming”.

Welcome to Press Start Gaming, your ultimate destination for gaming and tech enthusiasts! Founded with a passion for exploring the ever-evolving worlds of gaming and technology, we aim to deliver high-quality reviews, insightful articles, and the latest industry news to help you stay informed and inspired. Whether you’re a casual gamer, a tech aficionado, or a seasoned pro, we have something for everyone.

And here’s a thing about me. I want to trust new websites. I have a bias towards clicking on articles from sites I don’t know, because to be quite honest, I’ve read the TCRF page on Phantasy Star a thousand times. How else do you learn something new?

Phantasy Star title screen. (C) SEGA 1988

Also, I clicked it because the headline was “Phantasy Star Fukkokuban: A Classic Reimagined”. Because here’s the thing. It talks about how the graphics were improved:

Phantasy Star Fukkokuban breathes new life into the classic with its updated graphics and sound design. The visual overhaul retains the charm of the original’s 8-bit aesthetics while incorporating modern graphical techniques. Characters and environments are rendered with enhanced detail, vibrant colors, and fluid animations, creating a visually captivating experience.

The art style honors the game’s roots, with character designs and enemy sprites redesigned to reflect contemporary standards while maintaining their recognizability. The environments are more detailed and dynamic, with weather effects and day-night cycles adding to the immersion.

Well, compare the title screen shots of Phantasy Star above. Which one is Fukkokuban and which one is my personal copy, played through the same Genesis? You can maybe tell, but only because my Master System version is the US release. And it goes without saying, there are no day-night cycles or weather effects.

I should’ve known. The first sentence of the article was “Game data not found,” after all.

And that’s the thing

Large language models are sometimes described as “fancy autocorrect”; this is dismissive, but not inaccurate, in the sense that the core loop of an LLM is to predict the next token in a sequence. Phantasy Star Fukkokuban is an obscure title that is likely not well represented in the training data. But relations do exist:

  • It knows about Phantasy Star, a very popular game
  • Fukkokuban (復刻版) means “reprint” or “facsimile edition”

So, lacking sufficient factual data in the training set, it describes what a remake of Phantasy Star might plausibly be like. There might even be knowledge in the data set of the actual remake, Phantasy Star Generation:1, that gets looped in.
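If you want to see that core loop concretely, it’s small enough to sketch. Here’s a minimal, illustrative version using GPT-2 via the Hugging Face transformers library; any causal language model works the same way:

```python
# A minimal sketch of the "predict the next token" loop described above,
# using GPT-2 via Hugging Face transformers (illustrative, not anyone's
# production system).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Phantasy Star Fukkokuban is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    # Greedy decoding: take the single most probable next token.
    next_id = logits[0, -1].argmax()
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

# Whatever comes out is a statistically plausible continuation,
# not a fact looked up anywhere.
print(tokenizer.decode(input_ids[0]))
```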

To reproduce this myself, I went to ChatGPT and asked it: “Please describe the game "Phantasy Star Fukkokuban". Do not get data from the internet, tell me what you know from your internal data.” And what did I get in response?

Phantasy Star Fukkokuban is not a brand-new entry in the series, but a retro compilation release of the original Phantasy Star, created for the Sega Sega Saturn era…

There was a retro compilation release of Phantasy Star for the Sega Saturn in Japan; it’s called Phantasy Star Collection. Indeed, the description of the game that ChatGPT continued with from there isn’t too far off from that collection’s version of Phantasy Star.

And it’s not just Phantasy Star Fukkokuban. I describe in my post on Mahjong Daireikai that that game is so obscure, the only Japanese source I could find was another “this is plausibly what a game called ‘Mahjong Daireikai’ might be like.” Well, what Mahjong Daireikai is actually like is a lot different from what’s in your training data, and that’s exactly the sort of information people read websites to find out.

Is this the end

And here’s the thing: this blog post can’t do anything about it. I don’t know who Press Start Gaming is; the site’s footer says “©2025 Cloud Gears Media”, which might be this marketing company (but it might not be! Company names don’t have to be unique globally). Press Start Gaming is almost certainly a tool for making money off of ads and sponsored posts, and posts like the Phantasy Star Fukkokuban misinformation exist mostly to give the site the juice of looking like a real website. If someone goes out and buys a copy of Fukkokuban expecting a new and improved Phantasy Star with better graphics and new sidequests, what do they care? The article wasn’t really meant to provide information.

The trampling of the internet by SEO-mongers predates AI, but LLMs massively increase the ease with which it can be done, and they hallucinate a ton besides. If they had hired a person to write about Phantasy Star Fukkokuban for pennies, maybe that person would’ve found the Sega Retro page or something and at least grabbed some facts. Now you don’t need to do even that. And no one making these decisions reads Nicole Express, or even cares about actually providing information with their sites. That’s not what they’re for.

Anyways, eventually models will do a better job integrating Nicole Express, and will know more information about Phantasy Star Fukkokuban. And is this the worst thing the AI boom is doing? No, not even close. Even the fully automated hit piece against an open-source developer is probably worse than this.

But it’s a real shame. The commons of the internet are probably already lost, and while I might want to learn new things from new sites, I’ll just have to stick to those with pre-LLM reputations that I trust. Well, until those sites burn their reputations to make a few extra pennies with AI, as Ars Technica seems to have just done.

This post is just a rant. Thanks for listening, at least.




The best way to spot AI is also the easiest


How to make AI spotting easy

Finding AI videos is hard, but finding AI accounts is easy. Anyone can do it with some basic knowledge about how AI media has advanced over the past two years.

I often joke about how many beautiful 24-year-olds discovered Instagram for the first time in November of 2025. Google’s Nano Banana Pro was released November 20th, and a rush of fake people registered. Let’s turn this into a practical way to find fake accounts. Put a pause on finding individual AI images and videos; it’s more efficient to find the AI accounts, then work backwards.

In this piece we’ll plot out the timeline for recent advancements in AI generation and how new accounts and videos coincide with each change.

The Account Age Paradox

It’s obvious to us that an iPhone 17 Pro’s camera looks better than the camera from the iPhone 6S. 10 years of technological advancement separates them. But in 2015, a new iPhone 6S produced photos notably better than the HTC One M7 I had at the time. That HTC One took the earliest photos in my current photo library, and I look back on them fondly.

The same cannot be said of an AI creator in the year 2035, who is trying to prove their character is real. Can they look back at the old Veo 3 AI generations from 10 years ago to establish proof of life? Of course not - it has the opposite effect, because Veo 3 looks awful and obviously AI to people in 2035. This is a huge difference between AI and real media, and one that AI creators are already aware of.

On Instagram, relevant AI accounts are usually just a few months old and pop up alongside AI media innovations. When I find a suspected AI character on TikTok, which unlike Instagram or YouTube does not make “account age” visible in the app, I immediately scroll down to their earliest posts. Did they leave up the old, bad generations, or did they delete them and start over recently? Either is a huge red flag and nearly impossible to overcome.

Putting it into action

I just found a new account on Instagram: an AI influencer named Olivia Arizpe. First of all, her bio has “Digital Creator” and “AI” in it, but let’s pretend the creator wasn’t so honest.

First, I’ll tap the three dots at the top of the account page to pull up “About this account.” Here I see that the account was created in January of 2026. The first video is also from January of 2026. It shows the AI avatar lip-syncing to a Tame Impala song, following the “motion control” trend I’ll talk about later. That’s all the information I need to confidently call this an AI account.

Or what about this account, a “News” page that claims to be from Tulsa, Oklahoma?

This page used to be called “Tulsa Area Breaking News”, but it’s just an AI slop page. They started in November, and their earliest available video is from the same month. This follows OpenAI’s release of Sora, and indeed there are Sora watermark-scrubbing artifacts. Not to mention, their first video follows the format of the AI-generated EBT videos coming out at the time, and those were almost entirely generated with Sora.

None of this required good eyesight or any pixel peeping. And if any shiny, new AI slop comes out of the AI-generated Tulsa area and they really want to trick you, they’ll have to delete the old stuff.

We dive into how to find account transparency information in this article, by the way. It contains instructions for Instagram, Facebook, YouTube, TikTok, and X.

Granted, it’s unreasonable for you to remember that Sora came out in October, or that Motion Control AI generators were trending in January. Luckily, you don’t have to remember: I wrote it all down below for you to reference any time you need.


The practical AI release timeline

When coming across an account we suspect is AI-generated, we’re looking for either:

  1. A relatively new account that appeared and immediately implemented some recent technology

  2. An account that shows signs of older or more affordable AI technologies in older posts

Let’s look at previous major advancements in AI media and how those popped up on the Internet. This will not only demonstrate how new technology changes trends on social media, but also what to look for when looking back at old posts. This will also give us some important cutoff dates. For example, if you see a dance video from 2023, you can be sure it wasn’t AI-generated.
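To make those cutoff dates concrete, here’s a toy sketch of the heuristic in code. The dates and categories are taken from the timeline below; the function itself is illustrative, not a detection product:

```python
# A toy version of the cutoff-date heuristic described above. Dates
# come from the release timeline in this post; the function is a sketch.
from datetime import date

# Earliest date at which each kind of AI-generated media was plausibly
# achievable at social-media quality, per the timeline below.
CUTOFFS = {
    "realistic_image": date(2023, 10, 1),      # DALL-E 3 / Midjourney V6 era
    "decent_video": date(2024, 6, 1),          # Sora preview, Luma, Runway Gen-3
    "video_with_sound": date(2025, 5, 1),      # Google Veo 3
    "photorealistic_image": date(2025, 8, 1),  # Nano Banana
}

def could_be_ai(kind: str, posted: date) -> bool:
    """Return True if media of this kind, posted on this date,
    could plausibly have been AI-generated."""
    cutoff = CUTOFFS.get(kind)
    return cutoff is not None and posted >= cutoff

# A dance video from 2023 predates any decent AI video model:
print(could_be_ai("decent_video", date(2023, 6, 15)))     # False -> real
# A clip with synced dialogue from June 2025 is fair game:
print(could_be_ai("video_with_sound", date(2025, 6, 1)))  # True
```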

Before 2025

2025 will probably go down as the most pivotal year in AI media history, and most of the AI-focused accounts today don’t use models from 2024 or earlier. But there are some notable exceptions you should be aware of.

Until Late 2023 - Deepfakes and AI avatars

In 2018, deepfake technology was creating memes and porn, but it was very much in the experimentation phase. Deepfakes aren’t AI videos — they use a different tech architecture — but they’re the first popular form of “AI video” in its broadest possible definition. The corporatized versions of deepfakes are usually called “Avatars”. These are made by companies like Synthesia, who were already around by 2020, and companies like Heygen joined the mix later.

Important takeaway: Face-centric “AI videos” were already possible before 2023, but their uses were pretty limited.

A collection of “classic” AI image slop

Late 2023 to Early 2024 - AI Image slop starts hitting social media

A lot of Facebook posts still feature what I would call “classic” AI slop: images created by technology from late 2023. For example:

  • OpenAI released DALL-E 3 in October of 2023

  • Midjourney released V6 in December of 2023

While these are just two of many image generators of this era (and OpenAI is deprecating the DALL-E 3 API shortly), many accounts are still stuck in this era because it’s cheap! These sorts of images became ubiquitous when OpenAI integrated DALL-E into ChatGPT, lowering the barrier to entry and opening the AI slop floodgates.

This is the era of Shrimp Jesus and “engagement slop” like old, disabled puppies asking for donations, or ragged puppies begging for likes. Today, we still see NFL coaches who desperately need help with medical bills. They don’t look like the coaches exactly, there’s a yellowish tint, and it’s giving “Polar Express.”

Important Takeaway: If you see a decent image from before mid-2023, its foundation is probably a real photo. Maybe it’s a heavily edited real photo, drawn, or 3D rendered by this era’s already incredibly powerful animation or gaming engines. But it wasn’t made by “AI”.

Early to mid 2024 - First (decent) AI Video models released publicly

Sora got a lot of early attention in February of 2024, followed by models like Luma Dream Machine and Runway Gen-3 in June of that year. AI video was still relatively limited, but boosted by many new scaling and training advancements from 2023. Given its limitations, memes and AI video slop were its primary social media uses.

Important Takeaway: If you see a decent-quality video posted before mid-2024, unless it was professionally modified or rendered, it’s a real video. There aren’t any good “AI videos” before this point.

2025

March - ChatGPT 4o Images

I still remember sitting on the couch with my wife on March 26th, 2025 - the day after OpenAI released ChatGPT’s 4o image generation. The internet was lit up with Studio Ghibli-ified images. Feeling guilt about the inherent ethical quandaries of that style, I instead made a photo of our recent vacation in the Family Guy art style. Our surprise and fear from that day seems pretty quaint in hindsight, given what else 2025 was about to bring.

AI-generated profile pictures from this era are still common. The yellow-tinted, glossy, evenly-lit photos from this generation are ubiquitous. Since ChatGPT is the most popular large language model, its built-in image generation is also really popular.

A collection of stills from Google Veo 3 test generations

May - Google Veo 3

Veo 3 was the first AI video model with meaningful sound, and it was also a jump forward in video quality. It generated videos with a cinematic look and feel, though it definitely wasn’t up to cinematic standards. As a result, though it was envisioned as a tool for creatives, it instead flooded the internet with AI videos.

A Veo 3 Review was my first ever YouTube video. Today, Veo 3 makes a platonic “AI video” that’s relatively easy to spot once you’re familiar with it. The most distinctive characteristics include smooth and even lighting, temporal inconsistencies and background issues, and the characters’ robotic, melodramatic voices.

Immediately in May, a ton of AI slop pages popped up on social media. Since Veo 3 could only do widescreen 16:9 videos at first, vertical-native platforms like TikTok suddenly had a ton of new accounts posting AI videos with black bars (also known as letterboxing) on the top and bottom of every video. ASMR fruit-cutting videos and AI man-on-the-street interviews were the trend. But also, people’s first “I got tricked” moments came with Veo 3’s bunnies on trampolines.

Important Takeaway: If you find a realistic video with matching sound or dialogue posted before May of 2025, unless it was generated with a game or animation engine, it’s a real video.

Mid-2025 - Other video models

Around the same time as Veo 3 were Runway Gen 4 (April) and Kling 2.1 (May). These were similar in video quality to Veo 3, but neither had synchronized sound. They traded this for vertically-native videos and different video styles. Along with Midjourney releasing its first video model in June, Alibaba’s open-sourcing of Wan 2.2 in July, and many more advancements from companies like LTX and Minimax, a plethora of good video-generation options came online in mid-2025.

With these vertical-native models, we got AI-generated landscapes, fake natural phenomena, and AI-generated cartoons for kids that were NOT kid-friendly. But this is also when AI slop started looking realistic enough to fool a ton of people in vertical feeds. By August this is almost all I covered.

August - Nano Banana

Google’s Nano Banana, the informal name for Gemini 2.5 Flash Image, was a big jump in photo quality. Before Nano Banana, mainstream AI photos still had a glossy look. After Nano Banana, photorealistic images were very accessible.

Important Takeaway: A lot of photo-only AI accounts got a huge boost or started in August of 2025.

October 2025 - Sora 2

The next jump in AI video came from OpenAI, who took a year and a half after the original Sora to release the Sora 2 video model. Alongside it was the release of the Sora app, a TikTok-style vertical video app with only AI videos.

The Sora 2 model had a few key innovations:

  • It felt less uncanny than Veo 3 for many viewers. Eyes, mouth, and skin detail were more realistic.

  • It had better physics than any other model at the time.

  • It was funny. Lazy prompts were spiced up by a language model in the background. This meant inexperienced AI prompters could make viral videos.

And yet, it was a very noisy model with a heavy “AI accent.”

Sora’s release coincided with a huge increase in the number of “AI slop” accounts because it was free. Until this point, good AI video generation was expensive, but the Sora app let users generate a lot of videos for free. These videos could be downloaded with a watermark (that was easily scrubbed), then reposted to the other, much more popular social media sites. The Sora 2 API was released just two weeks later, providing videos without watermarks and giving more people access to the powerful Sora 2 Pro model. To this day, a ton of AI videos come from Sora 2 because it has a good price-to-quality ratio.

Important Takeaway: Many AI video slop accounts have October 2025 birthdays or changed their usernames at this time.

November 2025 - Nano Banana Pro

Google’s jump to Nano Banana Pro (Gemini 3.0 Pro Image) was surprising. Just three months after the original, it brought big improvements to realism and quality, as well as improved spatial reasoning and text rendering. This is the point where AI photos became mostly undetectable at first glance, though they can still be spotted when looking closely.

Along with Nano Banana Pro came access to SynthID verification through Google’s Gemini large language model. SynthID is an invisible watermarking system that embeds, and can later detect, a watermark hidden inside the pixels of a photo.

Creators reacted accordingly. Misinformation like the infamous “Bubba Trump” photo and fake celebrity paparazzi photos spread wildly. More relevant to our everyday social media use, AI-generated influencer accounts proliferated in late November. The Nano Banana Pro release also corresponded with TikTok’s unfortunately-timed new emphasis on image carousel posts. And, since many AI video generators have a photo-to-video mode, these AI images are the starting frames for many AI videos. Nano Banana Pro is still a leader in this field, and while other generators have caught up, this was an important demarcation point.

And that wasn’t all that happened in November...

November and December 2025 - Motion Control AI improvements

Coinciding with Nano Banana Pro’s release were improvements in Motion Control models, most notably Wan 2.2 Animate. This prompting innovation lets creators take a “control” video — often a video stolen from a real creator’s social media — and “replace” them with a new character, real or not. This method saw further improvement through Kling 2.6 Motion control.

These models unlock very realistic motion and physics for AI characters. Combining Nano Banana Pro (or similar photo models) with a motion control video model lets creators generate realistic, consistent characters across posts. While not perfect (there are still plenty of AI giveaways), AI influencer creators now had a ton of tools at their disposal.

Important Takeaway: Accounts that feature human avatars and started or rebranded in November or December of 2025 are a huge red flag. A lot of AI slop accounts moved into the more profitable AI influencer business.

Early 2026 trends

Releases from Kling and Bytedance show further improvements in AI video quality, which we’re still analyzing as of this post. I expect some new accounts in February that play off their strengths, but those are yet to be determined. Right now it’s a bunch of demos of stolen intellectual property and celebrity deepfakes, as can be expected with a new model release.

Moving Forward

People regularly ask me what might happen if AI video becomes “perfect” or “undetectable.” But to my eyes, real videos aren’t “perfect”. There are always artifacts of the process that made them.

I remember watching Super Bowl XLV 15 years ago, on an imposing 75-inch high-definition TV. It was incredible and perfect at the time. But looking back at it now, it looks a bit dated, even before you account for today’s de-interlacing conversion artifacts. In 5 years, perhaps the oversaturated HDR look of today’s iPhones will be an out-of-fashion artifact of our current era.

The highest-end, most impressive AI video generations today still don’t look “real” to me, but they do look “really good”, which is real enough for most people. The difference is marginal, but with hindsight it may become obvious. Those of us who make real media will have to adapt and differentiate ourselves from AI advancements, figuring out what our advantages are. It’s always going to be changing, so stay tuned here for updates on the latest releases and countermeasures.



