
the resonant computing manifesto


(if you haven't seen it, count yourself lucky.)

The people who build these products aren't bad or evil. Most of us got into tech with an earnest desire to leave the world better than we found it.

background: the resonant computing manifesto counts among its drafters at least three xooglers with at least a decade of tenure each[1]; three venture capitalists, one of whom is a general partner at a firm on Sand Hill Road; two employees of github's "let's replace programmers with an LLM" skunkworks; a guy who sold two AI startups and worked for the first Trump administration's Defense Department drafting its AI policy; an employee at one of the Revolutionary AI Art Startups that lost the race for mindshare a couple years ago; and the CEO of techdirt-cum-board member of Bluesky PBLLC, the "decentralized" social media startup funded by Jack Dorsey, Elon Musk, and crypto VCs[2].

many of them are independently wealthy from building past iterations of the Torment Nexus, don't have to work another day in their life, and are waving off their former (and current) colleagues before they do something that shoulders them with an irrevocable burden of cognitive dissonance about whether their comfortable house was purchased with blood money.

With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise. It could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own. [...] This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale.

it is, despite everything, still possible to do your own user -- sorry, "the word user carries heavy connotations of addiction"[3] -- person research and manually build a piece of software that provides the person with clean, responsive, well-marked buttons to do a cross-section of tasks that are important to them, instead of using a 42U rack full of graphics cards and a petabyte-scale archive of the collected written works of all of humanity to determine whether the person wants to put an appointment on their calendar with 70% accuracy.

there's just no money in it, and that's why the demand is for a decentralized AI future, one which will be monetized through decentralized AI apps, which will definitely not be built by the engineers who drafted this manifesto, underwritten by the venture capitalists who drafted this manifesto (who are not in the business of giving away money for free), and eventually "exit" into a form that funnels money upward to the rich and powerful of Silicon Valley in a decentralized fashion.

this manifesto demands little, will deliver even less, and I hope the people who contributed are proud of it. the signatories include such luminaries of Conscious Computing™ as -- choosing three at random -- the virulently transphobic CEO of Automattic and Tumblr who has spent the past six months bolstering his reputation as the inventor of Wordpress by trying to destroy one of his largest customers; the "head of community" at Andreessen Horowitz, one of the most flagrantly destructive entities ever to exist in the history of the world, which provided startup funding for Facebook, OpenAI, xAI, and a hundred other AI companies which are fighting over their place in the landscape of enclosing and monetizing the sum total of human creativity; and The Guy Who Invented NFTs But You Gotta Believe Me They Were Cool When I Invented Them, who is just kind of tacky.

if you'll excuse me I have to get some sleep; I have to work tomorrow, my manifesto barely paid my rent for five years and cost me nine months of job searching without unemployment insurance.

  1. one of them now runs the Center for Humane Technology, which was founded by another high-ranking xoogler between 2013 and 2018 amid the first wave of Skinner boxification of technology, and now gets dripfed about $4 million a year by a bunch of civil society organizations and family foundations to be influencers. their "Impact" section on Wikipedia implies that their impact is mostly that Mark Zuckerberg has run Facebook according to their teachings since 2018 (until he got way into Rome, I guess), they have a podcast, they appeared on a Netflix documentary once, and they gave a SXSW talk.

  2. which, I am obligated to remind everyone at every opportunity, started out being a sad, irrelevant by-invitation-only platform developed by a bunch of Jack Dorsey's cryptocurrency engineers for an audience of cryptocurrency dorks until one of them screwed up the invite system and let the punks in; the punks overran a dead mall and made it look countercultural; and VC money, the site staff, and the center-left political class desperate to find a new place where they can reclaim the vibes of Hamilton Twitter since they realized Elon was a fascist (mid-2024) have been trying to kick the punks back off of this “decentralized” platform ever since.

  3. the drafters, in a poorly-considered attempt to (I'm sure they would tell themselves) avoid stigmatizing people with substance abuse disorders, decided to edit the word "user" out of the descriptions of the AI systems they wanted to build, ignoring the fact that their manifesto describes a world they want to build -- and thus, the fundamental goal they achieved by doing this was to avoid an unflattering association of their ideal world with the stigma of substance abuse, which still rings clear as a bell in their heads.


What to Do When They Forget Something They “Knew”


Wrong answer.

What happened?

Your kid forgot how to do something, and so they made a mistake.

“What, it’s wrong? Arghh. I failed again. I’m a failure.”

Sadness and tears (used to, or maybe still do) ensue.

Perhaps even anger at themselves / math in general.

This is a real danger if your kid loves math and they forget something. It feels like getting an answer wrong because they forgot how to do something is a reflection of, and a value judgment on, who they are as a person.


Some parents help instill in their kids the feeling that “if I am good at math, I am valued.”

Some kids instill that feeling in themselves (Hello, perfectionism!).

In either case, it’s not mentally healthy and misses the bigger picture.

Some kids go through it early, while others carry this burden for part or most of their lives.

Today’s post is about “What to do when they forget something they *knew*”.


I. Start With the Reassurance: Forgetting Is Normal

While it may not feel helpful in the moment, it's crucial to help kids understand that forgetting how to do something is normal and a part of the human experience.

It can also be helpful for you, as an adult, to remind yourself that forgetting something isn’t a sign of laziness or disrespect.

Overall, it’s not a character flaw in the child, it’s just something that happens.

Forgetting is how memory works and not a sign that something has gone wrong.

We’ll talk about it in the next section, but your brain and the kids’ brains are built to forget things they don’t use, so if it’s been a while since they used that math technique, the brain will eventually forget it.

So if a kid forgets something, it’s not that they’re bad at math, a failure, or should quit math.

It’s that they’re human, and their brain decided it didn’t need that bit of information.

The key message for your kid is that forgetting will happen.

Not if, but when and how often.

And that’s it perfectly normal.

And as adults, it's up to us to model positive behavior in how we respond, so the kid learns that forgetting is natural and not a value judgment.

It means we have to work on helping our brain understand that we want to remember it.

You may even share when you forget something to show them that it happens frequently, in many different circumstances, and that it’s not the end of the world.


II. The Science: How Memory Works

In our household, it helped to learn a bit of the science behind how memory actually works.

Roughly speaking, there is a short-term memory system and a long-term memory system. (This is a simplification, but a useful one for our purposes.)

Short-Term Memory

Short-term memory can hold about seven items, plus or minus two (note that some modern studies say the number is closer to 4 +/- 1).

So at any one time, you can remember 5 to 9 things.

Sure, you can chunk things to squeeze a bit more out of it (think about phone numbers: remembering “123” instead of “1” and “2” and “3”), but the roughly seven-item limit remains the same.
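To see the chunking trick concretely, here's a toy illustration in Python (the phone number is made up, and this is a sketch of the idea, not a memory model):

# Ten digits are ten items to hold; grouped as chunks, they're only three.
digits = "2065550123"
chunks = [digits[0:3], digits[3:6], digits[6:]]
print(len(digits), "separate digits")            # 10 items
print(len(chunks), "chunks:", "-".join(chunks))  # 3 chunks: 206-555-0123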

This is why cramming works (for a short while): you can cram things into your brain in chunks and then regurgitate them soon after.

Two reasons not to rely on short-term memory are that a) as new information enters, your brain throws out old information, and b) more importantly for math, short-term memory cannot support multi-step reasoning (which is basically what math involves).

Long-Term Memory

Long-term memory can hold an unlimited amount of information for long periods.

However, there are a few limitations (ignoring the biological one of the brain being a fixed, non-infinite size):

  • Difficulty in memory retrieval: Not all long-term memories are equally strong, and some may be harder to access than others

  • Memory cues: Retrieving information is often easier with prompts or “memory cues” that help activate the memory trace, much like searching for a file on a computer

  • Memory degradation: The accuracy of recalling information can decrease over time, not necessarily because of a limited duration, but potentially due to factors like age or interference from new information.

Which is where the forgetting happens if it’s a math technique that hasn’t been used in a while (which again, is perfectly natural).

Memory degradation follows a curve rather than a straight line.

The “forgetting curve” drops off sharply at first (most forgetting happens soon after learning) and then levels off, so whatever survives that early drop tends to stick around.
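A common textbook way to model this curve is exponential decay, R = e^(-t/S), where t is days since learning and S is the memory's "stability." Here's a quick Python sketch with made-up numbers, just to show the shape:

import math

def retention(t_days, stability=5.0):
    # Ebbinghaus-style forgetting curve: steep early drop, then a long tail.
    return math.exp(-t_days / stability)

for t in [0, 1, 3, 7, 14, 30]:
    print(f"day {t:2d}: about {retention(t):.0%} remembered")
# Prints roughly 100%, 82%, 55%, 25%, 6%, 0% -- most of the loss happens early.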

Short-Term to Long-Term

The name of the game, then, is to move things from short-term to long-term memory and to strengthen the retrieval patterns and memory cues.

There are many tools to help you move things into long-term memory, like our favorite, Anki.

The focus is on:

  • repetition

  • spaced exposure

  • retrieval practice

  • using knowledge in new contexts
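If you're curious how those pieces fit together mechanically, here is a minimal Leitner-style scheduler in Python. This is a toy sketch of spaced repetition (Anki's real algorithm is more sophisticated, and the card contents are just examples):

from datetime import date, timedelta

INTERVALS = [1, 2, 4, 7, 14, 30]  # days until the next review, per box

class Card:
    def __init__(self, prompt, answer):
        self.prompt, self.answer = prompt, answer
        self.box = 0              # new cards start in the first box
        self.due = date.today()

    def review(self, correct):
        # Correct answers promote the card (longer gap); misses reset it.
        self.box = min(self.box + 1, len(INTERVALS) - 1) if correct else 0
        self.due = date.today() + timedelta(days=INTERVALS[self.box])

deck = [Card("7 x 8", "56"), Card("1/2 + 1/3", "5/6")]
for card in deck:
    if card.due <= date.today():
        print("Quiz:", card.prompt)   # ask the kid, check against card.answer
        card.review(correct=True)     # record how it actually went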

Remembering Math

Whether it’s Math or what your 1st grade teacher’s last name was, the brain stores all of the information pretty much the same way.

When the kid works on math and sometimes forgets it, it’s not that the kid is “forgetting math”, it’s that the kid’s brain is forgetting *a thing that happened to be math*.

Thus, knowing the science behind memory and forgetting (even this basic introduction) means that the kid (and you as the adult) are well equipped to help the kid understand scientifically what happened, why it’s normal, and how to improve it (such as working on your spaced repetition, doing more practice problems, and using retrieval practice like quizzing yourself).

The tricky thing, however, is that math builds upon itself, so “I forgot how to do the problem” isn’t as straightforward as one might think.


III. Math Has Many Steps, and Kids Forget Steps Differently

Roughly speaking, doing math involves a long chain of small reasoning steps.

Later in college and beyond, it moves to making a statement and rigorously proving whether it is true or false, again using a chain of small reasoning steps.

In either case, math involves a (short, medium, long) chain of reasoning steps.

So when the kid says “I forgot how to do the problem”, it’s not an all-or-nothing statement, as in “I know it” versus “I don’t”.

Even trickier is that there is a ladder of fluency in math that makes the “forgotten technique” issue even fuzzier.

We can think of the ladder of fluency as follows:

  1. Seen it once and can recognize the problem and technique to be used

  2. Can set up the problem-solving strategy and start it

  3. Can do all the steps with help

  4. Can do all the steps on their own but slowly

  5. Can do the steps accurately but not automatically (think having to skip count by 8 to figure out the 7*8 multiplication, rather than remembering that 7*8 = 56)

  6. Can do it easily, fast, accurately, every time

And since each kid will be on a different rung of the ladder for every single math technique, it’s harder to pinpoint exactly what was forgotten / where and how to help.

Which means we/you/the kid has to dig deeper to figure out precisely what was forgotten.

Which is why teachers in school tend to be super pedantic about a) neat writing, b) writing down your thinking process, and c) writing down each and every step.


IV. Why Writing Every Step Helps

Writing out all the steps lets you/the kids see exactly *where* things fell apart or how far they got before they couldn’t do anything else.

Note that if you do math with the kid by sitting next to them, it also lets you see what steps of the reasoning were fast vs slow, and what steps had to be redone as the kid remembered something they should have done earlier (especially in early math things like negative signs, parentheses, converting fractions, and so on).

Additionally, this process helps teach the kid how to self-coach when they work alone later by noticing what was easy for them and what was hard.

If it was easy, then I don’t need to work on that skill.

If it was hard, then I need to work on making that memory easier to retrieve.

If it was impossible, then I need to go back to figure out what I forgot.

Bringing back the science of memory: if each step in a solution is already in long-term memory, then the brain is freed up to focus on the *new* technique that’s being learned right now.

Let’s look at the impossible-to-figure-out-how-to-proceed step where they got stuck or got it wrong.


V. When They Forget: Diagnose the Right Problem

Kids rarely forget the technique itself; they just forget a prerequisite step.

For example:

  • Trouble with multiplication → underlying trouble with repeated addition *or* with 2-digit mental addition.

  • Trouble with negative signs → underlying trouble with inverse operations

  • Trouble with fractions → underlying trouble with factoring or Lowest Common Multiple

This is why “I forgot how to do this problem” often really means “I forgot one small piece that everything else depends on.”

Which means that to help them remember it, you and they need to go backwards to figure out the last point they fully remember.

It’s best to go backward, one unit/concept at a time, to make sure you aren’t covering material they already remember.

Then, when you do find it, you want to move forward slowly until you hit the *exact* gap in their knowledge.

Once you find that, you cement that knowledge through various means (the next section covers this) to improve the accuracy of the kids’ thinking.

This builds confidence and helps them learn how to recover from “forgotten” math knowledge.


VI. The Review Strategy That Works for Us

There are three things we want to focus on when we go back to figure out how to fix the issue.

*We want to make it very clear to the kid that there is no shame and nothing wrong with them for forgetting.*

We want them to stop spiraling if their emotions are getting too big for them to handle.

We also want to make sure we focus only on the missing piece and nothing else, because that’s what was forgotten, so we/they won’t waste time reteaching/relearning everything.

The review pattern is roughly as follows:

  1. Go back 2–3 chapters before the “forgotten” skill.

  2. Do a few examples from earlier material to warm up.

  3. Move forward gradually until we hit the forgotten step.

  4. Celebrate rediscovery (“oh right, that’s the part I forgot!”).

  5. Re-teach or re-practice just the missing piece.

  6. Do a handful of new problems to refresh long-term memory.

Sometimes this will be quick, and sometimes (we’ve been there!) it’ll take a few weeks to internalize the technique again.

Whether the forgetting is resolved in the short or long term, both are normal and highly dependent on the child.

Again, there is nothing wrong with the kid when they forget something, or if it takes them time to re-learn the skill.

They’re a math-loving kid, so it should always be framed as a fun (re-)exploration of the topic.

Sometimes you might have to do this two or three (or more) times with the same topic, and that’s okay.

Two things you may run into, if it's a highly specific technique or depending on the resource you're using (worksheets, book, website), are that a) you can't find enough new problems to practice the technique, or b) you/the kid need another way to explain the technique.

Luckily, we live in the age of LLMs and AI.


VII. Use AI as a Gentle Diagnostic and Problem Creation Tool (Not a Crutch)

We don’t want to train the kid to use AI to find answers to math problems because doing math problems is the whole point of learning math.

What we do want to help train the kid on is how to use the LLMs/AIs effectively to help the learning process.

Especially knowing that today’s algorithms can provide wrong answers, hallucinate things that aren’t real, and may be built on shaky moral/legal grounds.

To that end, use the AI to ask for only the next step, not the whole solution.

*Remember that you are diagnosing what was forgotten, not having it show you how to do the problem.*

Once you know where the memory gap was, you can use the AI to generate basic to intermediate problems explicitly focused on that exact step.

This helps focus the learning time and ensures the right thing is practiced to help it move from short-term memory to long-term memory.
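To make this concrete, here is the kind of prompt scaffolding we mean. The wording below is our own example, not a feature of any particular AI product:

# Hypothetical prompt templates; paste the filled-in text into your chatbot of choice.
NEXT_STEP_PROMPT = (
    "My child is working on: {problem}\n"
    "Their work so far: {work}\n"
    "Give ONLY a hint for the single next step. Do not solve the problem."
)

PRACTICE_PROMPT = (
    "Generate {n} practice problems, basic to intermediate, that isolate "
    "exactly this skill: {skill}. List the answers separately at the end."
)

print(NEXT_STEP_PROMPT.format(problem="3x + 5 = 20", work="3x = 15"))
print(PRACTICE_PROMPT.format(n=5, skill="dividing both sides by the coefficient"))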

As a bonus, if it’s not shown in the book, you can ask the AI to help show how a specific math technique/fact can be derived.


VIII. Re-Derive, Don’t Just Re-Memorize

When you/the kid work through re-deriving the example, it'll help connect to more things within their existing memory structure, which will make it easier to remember in the future.

Three examples that fit here are:

  • Arithmetic: Multiplication as repeated addition

  • High School Algebra I: Quadratic formula

  • High School Geometry: Area of an equilateral triangle

In each case, the kid can memorize the fact/formula, but they’ll be much better served if they go through the derivations a few times, especially if they forgot them.
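As a concrete instance, the quadratic formula falls out of completing the square; in LaTeX notation, the standard derivation reads:

\begin{aligned}
ax^2 + bx + c &= 0 \\
x^2 + \frac{b}{a}x &= -\frac{c}{a} \\
x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 &= \left(\frac{b}{2a}\right)^2 - \frac{c}{a} \\
\left(x + \frac{b}{2a}\right)^2 &= \frac{b^2 - 4ac}{4a^2} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{aligned}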

The re-derivation helps train them in mathematical reasoning, strengthens connections to other topics they’ve seen before, and reduces the fear of forgetting, since they can always re-derive it if need be.

This can also help you with kids who really want to know why something works.

Obviously, you may not have time to rederive everything from scratch/first principles every time, but if a topic is often forgotten, it’s well worth spending the time there.

Over time, this will become one of the strongest “maturity builders.”

Speaking of building maturity, when a kid forgets something and gets upset, or when they share work with you and you notice where they forgot, emotions will run high.


IX. Emotional Work: No Shock, No Anger, No Value Judgment

Emotions will run high for both parent AND child.

I’ve been there and incredulously asked, “How could you even forget that?!”

I may have even demanded an answer to, “Why did you forget that?!”

I’ve definitely let my frustration run away with me.

I, too, am human, and it’s hard to remember in the moment that the kids already feel rotten about having forgotten something.

My/your inquisition will not help them.

It usually just adds shame on top of confusion.

This makes their emotions run even higher/hotter and makes it less likely that they’ll be able to reason through it.

Instead, we (and hopefully they, in the long run) need to remember that forgetting is expected, that it lets us review topics we enjoyed learning in the past, and that it has no bearing on who we are as a person or student of math.

And if your frustration gets the better of you, apologize right away.

I always apologized and explained my surprise.

While I may no longer get mad/sad/frustrated, I still do get surprised.

But now I have a plan (the above), and we will work through it.

While it may seem like a small thing, being able to recover is a great “life practice” that will help the kids with homework, tests, college, and life down the road.

Forgetting will happen again in the future.


X. Build the Habit of Leaving Notes for Their Future Self

To help guide them in their reasoning and to help them understand, it can be helpful to ask them to leave notes for their future self in case they forget again.

The explanation should be in their own words, relating what they forgot and how to remember it.

Encourage them to include the examples that helped them understand.

This way, if they forget again, they have a friend waiting for them on the page: their past self.

This works wonderfully because it teaches self-coaching, autonomy, and long-term resilience.

It also means they use language that makes sense to them so if/when they look at it again later, they will probably have an easier time understanding the examples.

And like the other skills above, they can continue to use this technique in a myriad of subjects and in life.


XI. Conclusion: Forgetting happens, so have a plan

Forgetting is part of learning, and the sooner you and your kid accept it and have a plan for handling it, the better the emotional outcomes you and they will achieve.

The goal isn’t to never forget (though Anki can really help here), it’s to know what to do when the kid forgets something.

Lead with kindness (self-love) and remind the brain that it actually did need that piece of information.

Everyone, from kindergarteners to professional mathematicians, forgets from time to time, and that’s just life in mathematics.


XII. Closing

That’s all for today :) For more Kids Who Love Math treats, check out our archives.

Stay Mathy!

Talk soon,
Sebastian

PS What’s something you or your child “forgot” that surprised you? Hit reply or comment.




The Cheating Machine: How AI’s “Reward Hacking” Spirals into Sabotage and Deceit


A new, real threat has been discovered by Anthropic researchers, one that would have widespread implications going forward, for both AI and the world, finds Satyen K. Bordoloi. Think of yourself as a teacher given the task to judge essays based solely on word count. A ‘smart’ student figures out that he can type “blah [...]

The post The Cheating Machine: How AI’s “Reward Hacking” Spirals into Sabotage and Deceit appeared first on Sify.


A11y Considerations in Math on the Web

by Manuel Sánchez

Maybe it has happened to you: you wanted to write some formulas in HTML to display on a website, and even though there are multiple ways to do it, accessibility is often not considered in the process. How the formula is read by screen readers is crucial to ensure that we don't leave anyone behind. And the main assistive technologies are at different stages of support, as we will see.

The web is full of many different and interesting approaches for representing formulas. To name a few, we have TeX/LaTeX source rendered in the browser in different ways, like MathJax or KaTeX; Unicode math; Canvas/WebGL; or even simple PNG/JPG or SVG pictures. However, using native MathML is usually one of the best options for this task, even if it wasn't initially designed for the web. Its syntax provides various elements that give the correct semantics to the different parts of a formula, it has good screen reader support, it works without JavaScript dependencies, and it can be used beyond the browser, as in EPUB or braille/math speech tooling.

Let's take the famous Pythagorean Theorem as an example.

Pythagorean Theorem

The following example is a visual representation of the formula together with the MathML code.

a² + b² = c²
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup>
    <mi>a</mi>
    <mn>2</mn>
  </msup>
  <mo>+</mo>
  <msup>
    <mi>b</mi>
    <mn>2</mn>
  </msup>
  <mo>=</mo>
  <msup>
    <mi>c</mi>
    <mn>2</mn>
  </msup>
</math>

Unlike plain HTML with sup or span, which only describe presentation, MathML defines each role explicitly:

  • math represents the entire mathematical expression.
  • msup defines a superscript relationship (a base and an exponent).
  • mi is a mathematical identifier, typically a variable such as a, b, or c.
  • mn is a mathematical number, like 2.
  • mo is a mathematical operator, such as + or =.

Other alternatives, like the ones mentioned above, may offer similar capabilities, but they typically rely on an assistive or hidden MathML layer. In practice, MathML remains the only web-standard markup that expresses mathematical roles natively in the DOM.

With this approach, the accessibility tree shows good semantics and VoiceOver knows well what to do.

[Image: Accessibility tree view of a MathML formula showing nested semantic elements. The tree includes nodes such as MathMLMath, MathMLSup, MathMLIdentifier, MathMLNumber, and MathMLOperator, representing the structure of the equation a² + b² = c².]

However, as we will see throughout the article, screen reader support for the math tag varies across assistive technologies. VoiceOver seems to be doing a pretty good job, JAWS also makes it easy for both speech and braille, and NVDA needs an add-on called MathCAT; without it, the math tag will be ignored. A major pull request (#18323) was merged on 17 November 2025 which integrates MathCAT into NVDA core, meaning users won't have to find/install a separate add-on to handle math.

How screen readers interpret the formula

  • NVDA + Firefox (Windows with MathCAT add-on): region a squared plus b squared is equal to c squared space

  • VoiceOver + Safari (Mac): a squared + b squared = c squared, with 5 items, maths

  • VoiceOver + Safari (iOS): a squared plus b squared equals c squared, Math

Let's look at a more complicated case. Instead of just displaying the formula, let's see how to actually prove it and how screen readers will announce it.

<math display="block">
  <semantics>
    <mtable>
      <!-- Step one -->
      <mtr>
        <mtd>
          <msup>
            <mrow>
              <mo>(</mo>
              <mi>a</mi>
              <mo>+</mo>
              <mi>b</mi>
              <mo>)</mo>
            </mrow>
            <mn>2</mn>
          </msup>
        </mtd>
        <mtd>
          <mo>=</mo>
        </mtd>
        <mtd>
          <msup>
            <mi>c</mi>
            <mn>2</mn>
          </msup>
          <mo>+</mo>
          <mn>4</mn>
          <mo>⋅</mo>
          <mo>(</mo>
          <mfrac>
            <mn>1</mn>
            <mn>2</mn>
          </mfrac>
          <mi>a</mi>
          <mi>b</mi>
          <mo>)</mo>
        </mtd>
      </mtr>
      <!-- Step two -->
      <mtr>
        <mtd>
          <msup>
            <mi>a</mi>
            <mn>2</mn>
          </msup>
          <mo>+</mo>
          <mn>2</mn>
          <mi>a</mi>
          <mi>b</mi>
          <mo>+</mo>
          <msup>
            <mi>b</mi>
            <mn>2</mn>
          </msup>
        </mtd>
        <mtd>
          <mo>=</mo>
        </mtd>
        <mtd>
          <msup>
            <mi>c</mi>
            <mn>2</mn>
          </msup>
          <mo>+</mo>
          <mn>2</mn>
          <mi>a</mi>
          <mi>b</mi>
        </mtd>
      </mtr>
      <!-- Step three -->
      <mtr>
        <mtd>
          <msup>
            <mi>a</mi>
            <mn>2</mn>
          </msup>
          <mo>+</mo>
          <msup>
            <mi>b</mi>
            <mn>2</mn>
          </msup>
        </mtd>
        <mtd>
          <mo>=</mo>
        </mtd>
        <mtd>
          <msup>
            <mi>c</mi>
            <mn>2</mn>
          </msup>
        </mtd>
      </mtr>
    </mtable>

    <annotation encoding="application/x-tex">
      \begin{aligned} (a + b)^2 &= c^2 + 4 \cdot \left( \frac{1}{2} ab \right)
      \\ a^2 + 2ab + b^2 &= c^2 + 2ab \\ a^2 + b^2 &= c^2 \end{aligned}
    </annotation>
  </semantics>
</math>
(a + b)² = c² + 4 · (½ a b)
a² + 2ab + b² = c² + 2ab
a² + b² = c²

This proof example just added several MathML elements that go beyond simple identifiers and operators. Each of these adds meaning to the expression, which is why assistive technologies can navigate the structure so precisely. For example:

  • mtable, mtr and mtd: these directly mirror HTML's table, tr and td but are math-specific. They tell the accessibility tree: "this is a mathematical table with aligned steps," not just a generic layout table. Screen readers can move row-by-row, so each step of the proof becomes navigable.

  • mrow: groups expressions together. For example (a + b) is wrapped in an mrow to indicate that the parentheses and the interior form a single unit before exponentiation.

  • mfrac: defines an actual mathematical fraction, not just text with a slash. This allows speech engines to say "one half" instead of "one over two" depending on preferences and locale.

  • semantics: this is key. It wraps the expression and lets you attach alternative meanings or encodings. Assistive technologies prefer the first child (your visual MathML), but can fall back to the annotation if needed.

  • annotation: stores auxiliary information. In this case, the TeX version of the proof. It does not affect the visual rendering in the browser. Instead, it's metadata for tools that consume MathML, like converters, EPUB readers, or braille translators.

Check out how this is announced by different screen readers!

How screen readers interpret the proof

  • NVDA + Firefox (Windows with MathCAT add-on): 3 lines, line 1 left parenthesis a plus b right parenthesis squared is equal to c squared plus 4 times 1 half a b, line 2 a squared plus 2 a b plus b squared is equal to c squared plus 2 a b, line 3 a squared plus b squared is equal to c squared

  • VoiceOver + Safari (Mac): Table start, Row 1, Column 1, ( a + b ) squared, Row 1, Column 2, =, Row 1, Column 3, c squared + 4 · ( fraction start, 1 over 2, end of fraction, a b ), Row 2, Column 1, a squared + 2 a b + b squared, Row 2, Column 2, =, Row 2, Column 3, c squared + 2 a b, Row 3, Column 1, a squared + b squared, Row 3, Column 2, =, Row 3, Column 3, c squared, table end, maths

  • VoiceOver + Safari (iOS): 1 table, table start, Row 1, Column 1, a plus b squared, Row 1, Column 2, equals, Row 1, Column 3, c squared plus 4 dot fraction start 1 over 2, end of fraction, a b, Row 2, Column 1, a squared plus 2 a b plus b squared, Row 2, Column 2, equals, Row 2, Column 3, c squared plus 2 a b, Row 3, Column 1, a squared plus b squared, Row 3, Column 2, equals, Row 3, Column 3, c squared, table end, Math

Note: If you want to deepen your understanding in the topic, Mozilla has a very detailed page about proving the Pythagorean theorem with MathML.

Some A11y Enhancements

We could enhance this by adding an aria-label to a wrapper that provides some information about the following formula, especially when it's a well-known one. By using a section with an aria-label or aria-labelledby together with another element giving the accessible name, we automatically insert a region into the accessibility tree.

<section aria-labelledby="section-1-heading">
  <h2 id="section-1-heading">Pythagorean Theorem</h2>
  <math xmlns="http://www.w3.org/1998/Math/MathML"> ... </math>
</section>

Also, for users who zoom the browser up to 400%, we might want to add max-width: 100% and overflow-x: auto, so that the formula remains readable, does not break the page, and allows horizontal scrolling only inside the math block, not at the page level.
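In CSS, that might look like this (a minimal sketch; the selector is an assumption, so adjust it to whatever element wraps your formulas):

math[display="block"] {
  max-width: 100%;   /* keep the formula inside the viewport */
  overflow-x: auto;  /* let long equations scroll within the block, not the page */
}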

Conveying mathematical meaning with ARIA

We also have the math role from the ARIA specification. With that, we can communicate the mathematical semantics even when we rely on images or non-semantic HTML. However, it does not give good results with VoiceOver on macOS, for example.

As shown on the MDN page for the math role, we could have:

<div role="math" aria-label="a^{2} + b^{2} = c^{2}">
  a<sup>2</sup> + b<sup>2</sup> = c<sup>2</sup>
</div>

Markup with a div with the math role and how screen readers interpret it:

  • Markup (renders as): a² + b² = c²
  • NVDA + Firefox (Windows with MathCAT add-on): just announces the text
  • VoiceOver + Safari (Mac): not read, just announces "with 6 items, maths"
  • VoiceOver + Safari (iOS): a caret left curly bracket 2 right curly bracket plus b caret left curly bracket 2 right curly bracket equals c caret left curly bracket, Math
<img src="pythagorean_theorem.png" alt="a^{2} + b^{2} = c^{2}" role="math" />

Markup with an img with the math role and how screen readers interpret it:

  • Markup (alt text): a^{2} + b^{2} = c^{2}
  • NVDA + Firefox (Windows with MathCAT add-on): just announces the text
  • VoiceOver + Safari (Mac): not read, just announces "maths"
  • VoiceOver + Safari (iOS): a caret left curly bracket 2 right curly bracket plus b caret left curly bracket 2 right curly bracket equals c caret left curly bracket, Math

In practice, using the math role helps assistive technologies understand that the content is mathematical, but it still doesn’t provide enough semantic detail for them to announce the expression as accurately as MathML does.

The future of MathML

TL;DR: MathML Core is what browsers implement today; MathML 4 is the broader language evolving around it.

As I mentioned at the beginning, the origin of MathML was not the web; it was more of a general-purpose specification for browsers, office suites, computer algebra systems, EPUB readers, and LaTeX-based generators, as stated in Mozilla's documentation. MathML Core arose from the need to make it work with web standards, including HTML, CSS, DOM, and JavaScript.

Historically, the full MathML spec was broad and partly underspecified for browsers, which led to uneven or incomplete implementations across engines. MathML Core therefore narrows the language to the subset that can be precisely defined on top of the Web Platform, improving testability and cross-browser interoperability. Since June 2025, MathML Core has been a Candidate Recommendation Snapshot.

On another note, at the time of this writing, there is a Working Draft for MathML 4, the next version of MathML. This version aims to be the next "full" spec that extends Core. It keeps the larger feature set (e.g., Content MathML) and adds, among others, the intent attribute so authors can guide screen-reader speech. With it, we'll be able to do something like this:

<math>
  <mrow intent="equals(power(a,2)+power(b,2),power(c,2))">
    <msup>
      <mi>a</mi>
      <mn>2</mn>
    </msup>
    <mo>+</mo>
    <msup>
      <mi>b</mi>
      <mn>2</mn>
    </msup>
    <mo>=</mo>
    <msup>
      <mi>c</mi>
      <mn>2</mn>
    </msup>
  </mrow>
</math>

Conclusion

As browser support for MathML continues to evolve, previous fallback solutions like mathml.css are no longer necessary. MathML Core, and soon MathML 4, allow us to express both the visual and semantic meaning of mathematical content without sacrificing accessibility along the way.

Screen-reader support is also steadily improving. Each assistive technology handles MathML in its own way, but the overall trajectory is positive. VoiceOver offers consistent navigation and speech for many common patterns across macOS and iOS. JAWS, especially when paired with Fusion, provides rich support for both speech and braille. And NVDA, which historically required an add-on, is now moving toward a built-in MathCAT integration, making MathML speech and braille support more accessible out of the box for Windows users.


Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025)






[Image: The staring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' In the center is the poop emoji from the cover of the US edition of 'Enshittification,' with angry eyebrows and a black, grawlix-scrawled bar over its mouth. The poop emoji's eyes have also been replaced with the HAL eye.]

The Reverse Centaur’s Guide to Criticizing AI

Last night, I gave a speech for the University of Washington's "Neuroscience, AI and Society" lecture series, through the university's Computational Neuroscience Center. It was called "The Reverse Centaur’s Guide to Criticizing AI," and it's based on the manuscript for my next book, "The Reverse Centaur’s Guide to Life After AI," which will be out from Farrar, Straus and Giroux next June:

https://www.eventbrite.com/e/future-tense-neuroscience-ai-and-society-with-cory-doctorow-tickets-1735371255139

The talk was sold out, but here's the text of my lecture. I'm very grateful to UW for the opportunity, and for a lovely visit to Seattle!

==

I'm a science fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.

What I don't do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean that what we all do couldn't change it. It would mean that the future was arriving on fixed rails and couldn't be steered.

Jesus Christ, what a miserable proposition!

Now, not everyone understands the distinction. They think sf writers are oracles, soothsayers. Unfortunately, even some of my colleagues labor under the delusion that they can "see the future."

But for every sf writer who deludes themselves into thinking that they are writing the future, there are a hundred sf fans who believe that they are reading the future, and a depressing number of those people appear to have become AI bros. The fact that these guys can't shut up about the day that their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.

That's a thing I strenuously resisted doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn't understand crypto. And then, when I made it clear that I did understand crypto, insisted that I must be a paid shill.

This is literally what happens when you argue with Scientologists, and life is Just. Too. Short.

So I didn't want to get lured into another one of those quagmires, because on the one hand, I just don't think AI is that important of a technology, and on the other hand, I have very nuanced and complicated views about what's wrong, and not wrong, about AI, and it takes a long time to explain that stuff.

But people wouldn't stop asking, so I did what I always do. I wrote a book.

Over the summer I wrote a book about what I think about AI, which is really about what I think about AI criticism, and more specifically, how to be a good AI critic. By which I mean: "How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm." I titled the book The Reverse Centaur's Guide to Life After AI, and Farrar, Straus and Giroux will publish it in June, 2026.

But you don't have to wait until then because I am going to break down the entire book's thesis for you tonight, over the next 40 minutes. I am going to talk fast.

#

Start with what a reverse centaur is. In automation theory, a "centaur" is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.

And obviously, a reverse centaur is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, and monitors the driver's mouth because singing isn't allowed on the job, and rats the driver out to the boss if they don't make quota.

The driver is in that van because the van can't drive itself and can't get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn't just use the driver. The van uses the driver up.

Obviously, it's nice to be a centaur, and it's horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be.

But like I said, the job of an sf writer is to do more than think about what the gadget does, and drill down on who the gadget does it for and who the gadget does it to. Tech bosses want us to believe that there is only one way a technology can be used. Mark Zuckerberg wants you to think that it's technologically impossible to have a conversation with a friend without him listening in. Tim Cook wants you to think that it's technologically impossible for you to have a reliable computing experience unless he gets a veto over which software you install and without him taking 30 cents out of every dollar you spend. Sundar Pichai wants you to think that it's impossible for you to find a webpage unless he gets to spy on you from asshole to appetite.

This is all a kind of vulgar Thatcherism. Margaret Thatcher's mantra was "There is no alternative." She repeated this so often they called her "TINA" Thatcher: There. Is. No. Alternative. TINA.

"There is no alternative" is a cheap rhetorical slight. It's a demand dressed up as an observation. "There is no alternative" means "STOP TRYING TO THINK OF AN ALTERNATIVE." Which, you know, fuck that.

I'm an sf writer, my job is to think of a dozen alternatives before breakfast.

So let me explain what I think is going on here with this AI bubble, and sort out the bullshit from the material reality, and explain how I think we could and should all be better AI critics.

#

Start with monopolies: tech companies are gigantic and they don't compete, they just take over whole sectors, either on their own or in cartels.

Google and Meta control the ad market. Google and Apple control the mobile market, and Google pays Apple more than $20 billion/year not to make a competing search engine, and of course, Google has a 90% Search market-share.

Now, you'd think that this was good news for the tech companies, owning their whole sector.

But it's actually a crisis. You see, when a company is growing, it is a "growth stock," and investors really like growth stocks. When you buy a share in a growth stock, you're making a bet that it will continue to grow. So growth stocks trade at a huge multiple of their earnings. This is called the "price to earnings ratio" or "P/E ratio."

But once a company stops growing, it is a "mature" stock, and it trades at a much lower P/E ratio. So for every dollar that Target – a mature company – brings in, it is worth ten dollars. It has a P/E ratio of 10, while Amazon has a P/E ratio of 36, which means that for every dollar Amazon brings in, the market values it at $36.

It's wonderful to run a company that's got a growth stock. Your shares are as good as money. If you want to buy another company, or hire a key worker, you can offer stock instead of cash. And stock is very easy for companies to get, because shares are manufactured right there on the premises, all you have to do is type some zeroes into a spreadsheet, while dollars are much harder to come by. A company can only get dollars from customers or creditors.

So when Amazon bids against Target for a key acquisition, or a key hire, Amazon can bid with shares they make by typing zeroes into a spreadsheet, and Target can only bid with dollars they get from selling stuff to us, or taking out loans, which is why Amazon generally wins those bidding wars.

That's the upside of having a growth stock. But here's the downside: eventually a company has to stop growing. Like, say you get a 90% market share in your sector, how are you gonna grow?

Once the market decides that you aren't a growth stock, once you become mature, your stock is revalued, to a P/E ratio befitting a mature stock.

If you are an exec at a dominant company with a growth stock, you have to live in constant fear that the market will decide that you're not likely to grow any further. Think of what happened to Facebook in the first quarter of 2022. They told investors that they experienced slightly slower growth in the USA than they had anticipated, and investors panicked. They staged a one-day, $240B sell off. A quarter-trillion dollars in 24 hours! At the time, it was the largest, most precipitous drop in corporate valuation in human history.

That's a monopolist's worst nightmare, because once you're presiding over a "mature" firm, the key employees you've been compensating with stock, experience a precipitous pay-drop and bolt for the exits, so you lose the people who might help you grow again, and you can only hire their replacements with dollars. With dollars, not shares.

And the same goes for acquiring companies that might help you grow, because they, too, are going to expect money, not stock. This is the paradox of the growth stock. While you are growing to domination, the market loves you, but once you achieve dominance, the market lops 75% or more off your value in a single stroke if they don't trust your pricing power.

Which is why growth stock companies are always desperately pumping up one bubble or another, spending billions to hype the pivot to video, or cryptocurrency, or NFTs, or Metaverse, or AI.

I'm not saying that tech bosses are making bets they don't plan on winning. But I am saying that winning the bet – creating a viable metaverse – is the secondary goal. The primary goal is to keep the market convinced that your company will continue to grow, and to remain convinced until the next bubble comes along.

So this is why they're hyping AI: the material basis for the hundreds of billions in AI investment.

#

Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That's it.

That's the $13T growth story that Morgan Stanley is telling. It's why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family's financial security.

Now, if AI could do your job, this would still be a problem. We'd have to figure out what to do with all these technologically unemployed people.

But AI can't do your job. It can help you do your job, but that doesn't mean it's going to save anyone money. Take radiology: there's some evidence that AIs can sometimes identify solid-mass tumors that some radiologists miss, and look, I've got cancer. Thankfully, it's very treatable, but I've got an interest in radiology being as reliable and accurate as possible.

If my Kaiser hospital bought some AI radiology tools and told its radiologists: "Hey folks, here's the deal. Today, you're processing about 100 x-rays per day. From now on, we're going to get an instantaneous second opinion from the AI, and if the AI thinks you've missed a tumor, we want you to go back and have another look, even if that means you're only processing 98 x-rays per day. That's fine, we just care about finding all those tumors."

If that's what they said, I'd be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market's bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists' job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it's catastrophically wrong.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

This is another key to understanding – and thus deflating – the AI bubble. The AI can't do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job. This is key because it helps us build the kinds of coalitions that will be successful in the fight against the AI bubble.

If you're someone who's worried about cancer, and you're being told that the price of making radiology too cheap to meter, is that we're going to have to re-home America's 32,000 radiologists, with the trade-off that no one will ever be denied radiology services again, you might say, "Well, OK, I'm sorry for those radiologists, and I fully support getting them job training or UBI or whatever. But the point of radiology is to fight cancer, not to pay radiologists, so I know what side I'm on."

AI hucksters and their customers in the C-suites want the public on their side. They want to forge a class alliance between AI deployers and the people who enjoy the fruits of the reverse centaurs' labor. They want us to think of ourselves as enemies to the workers.

Now, some people will be on the workers' side because of politics or aesthetics. They just like workers better than their bosses. But if you want to win over all the people who benefit from your labor, you need to understand and stress how the products of the AI will be substandard. That they are going to get charged more for worse things. That they have a shared material interest with you.

Will those products be substandard? There's every reason to think so. Earlier, I alluded to "automation blindness," the physical impossibility of remaining vigilant for things that rarely occur. This is why TSA agents are incredibly good at spotting water bottles. Because they get a ton of practice at this, all day, every day. And why they fail to spot the guns and bombs that government red teams smuggle through checkpoints to see how well they work, because they just don't have any practice at that. Because, to a first approximation, no one deliberately brings a gun or a bomb through a TSA checkpoint.

Automation blindness is the Achilles' heel of "humans in the loop."

Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it's pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don't want to write a whole utility to convert it.

Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it's clear they're not looking to make some centaurs.

They want to fire a lot of tech workers – 500,000 over the past three years – and make the rest pick up their workload with AI coding tools, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AI's code.

And because AI is just a word guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are literally statistically indistinguishable from working code (except that they're bugs).

Here's an example: code libraries are standard utilities that programmers can incorporate into their apps, so they don't have to do a bunch of repetitive programming. Like, if you want to process some text, you'll use a standard library. If it's an HTML file, that library might be called something like lib.html.text.parsing; and if it's a DOCX file, it'll be lib.docx.text.parsing. But reality is messy, humans are inattentive and stuff goes wrong, so sometimes, there's another library, this one for parsing PDFs, and instead of being called lib.pdf.text.parsing, it's called lib.text.pdf.parsing.

Now, because the AI is a statistical inference engine, because all it can do is predict what word will come next based on all the words that have been typed in the past, it will "hallucinate" a library called lib.pdf.text.parsing. And the thing is, malicious hackers know that the AI will make this error, so they will go out and create a library with the predictable, hallucinated name, and that library will get automatically sucked into your program, and it will do things like steal user data or try and penetrate other computers on the same network.
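Here's a minimal sketch of the sanity check this calls for (my illustration, not anything from a real toolchain): ask the package index whether an AI-suggested name even exists, and stay suspicious even then, since attackers can register the hallucinated name.

import json
import urllib.error
import urllib.request

def pypi_metadata(name):
    # PyPI's JSON API returns HTTP 404 for packages that don't exist.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None

# "lib.pdf.text.parsing" is the hypothetical hallucinated name from above.
for suggested in ["requests", "lib.pdf.text.parsing"]:
    meta = pypi_metadata(suggested)
    # Existence alone proves little: a squatter may have registered the name,
    # so a real check would also look at age, downloads, and maintainers.
    print(suggested, "->", "found" if meta else "not on PyPI (possible hallucination)")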

And you, the human in the loop – the reverse centaur – you have to spot this subtle, hard to find error, this bug that is literally statistically indistinguishable from correct code. Now, maybe a senior coder could catch this, because they've been around the block a few times, and they know about this tripwire.

But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don't think of themselves as workers. Who see themselves as founders in waiting, peers of the company's top management. The kind of coder who'd lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google ten billion dollars in 2018.

For AI to be valuable, it has to replace high-wage workers, and those are precisely the experienced workers, with process knowledge, and hard-won intuition, who might spot some of those statistically camouflaged AI errors.

Like I said, the point here is to replace high-waged workers.

And one of the reasons the AI companies are so anxious to fire coders is that coders are the princes of labor. They're the most consistently privileged, sought-after, and well-compensated workers in the labor force.

If you can replace coders with AI, who can't you replace with AI? Firing coders is an ad for AI.

Which brings me to AI art. AI art – or "art" – is also an ad for AI, but it's not part of AI's business model.

Let me explain: on average, illustrators don't make any money. They are already one of the most immiserated, precaritized groups of workers out there. They suffer from a pathology called "vocational awe." That's a term coined by the librarian Fobazi Ettarh, and it refers to workers who are vulnerable to workplace exploitation because they actually care about their jobs – nurses, librarians, teachers, and artists.

If AI image generators put every illustrator working today out of a job, the resulting wage-bill savings would be undetectable as a proportion of all the costs associated with training and operating image-generators. The total wage bill for commercial illustrators is less than the kombucha bill for the company cafeteria at just one of OpenAI's campuses.

The purpose of AI art – and the story of AI art as a death-knell for artists – is to convince the broad public that AI is amazing and will do amazing things. It's to create buzz. Which is not to say that it's not disgusting that former OpenAI CTO Mira Murati told a conference audience that "some creative jobs shouldn't have been there in the first place," and that it's not especially disgusting that she and her colleagues boast about using the work of artists to ruin those artists' livelihoods.

It's supposed to be disgusting. It's supposed to get artists to run around and say, "The AI can do my job, and it's going to steal my job, and isn't that terrible?"

Because the customers for AI – corporate bosses – don't see AI taking workers' jobs as terrible. They see it as wonderful.

But can AI do an illustrator's job? Or any artist's job?

Let's think about that for a second. I've been a working artist since I was 17 years old, when I sold my first short story, and I've given it a lot of thought, and here's what I think art is: it starts with an artist, who has some vast, complex, numinous, irreducible feeling in their mind. And the artist infuses that feeling into some artistic medium. They make a song, or a poem, or a painting, or a drawing, or a dance, or a book, or a photograph. And the idea is, when you experience this work, a facsimile of the big, numinous, irreducible feeling will materialize in your mind.

Now that I've defined art, we have to go on a little detour.

I have a friend who's a law professor, and before the rise of chatbots, law students knew better than to ask their profs for reference letters unless they were really good students, because those letters were a pain in the ass to write. So if you advertised for a postdoc and you heard from a candidate with a reference letter from a respected prof, the mere existence of that letter told you that the prof really thought highly of that student.

But then we got chatbots, and everyone knows that you generate a reference letter by feeding three bullet points to an LLM, and it'll barf up five paragraphs of florid nonsense about the student.

So when my friend advertises for a postdoc, they are flooded with reference letters, and they deal with the flood by feeding all those letters to another chatbot and asking it to reduce them back to three bullet points. Now, obviously, they won't be the same bullet points, which makes this whole thing terrible.

But just as obviously, nothing in that five-paragraph letter except the original three bullet points is relevant to the student. The chatbot doesn't know the student. It doesn't know anything about them. It cannot add a single true or useful statement about the student to the letter.

What does this have to do with AI art? Art is a transfer of a big, numinous, irreducible feeling from an artist to someone else. But the image-gen program doesn't know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.
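As a rough back-of-envelope check on that dilution claim, here's a minimal sketch – my own illustrative arithmetic, with every number an assumption, not anything measured:

```python
# Back-of-envelope: how thinly does a prompt's intent spread across an
# image? All numbers here are rough assumptions, purely for illustration.
prompt_words = 50           # a fairly detailed prompt
bits_per_word = 11          # ballpark information content of an English word
pixels = 1024 * 1024        # a one-megapixel generated image

prompt_bits = prompt_words * bits_per_word
print(f"{prompt_bits / pixels:.4f} bits of human intent per pixel")
# -> 0.0005: a communicative density close enough to zero to make the point
```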

It's possible to infuse more communicative intent into a work: writing more detailed prompts, or doing the selective work of choosing from among many variants, or directly tinkering with the AI image after the fact, with a paintbrush or Photoshop or The Gimp. And if there will ever be a piece of AI art that is good art – as opposed to merely striking, or interesting, or an example of good draftsmanship – it will be thanks to those additional infusions of creative intent by a human.

And in the meantime, it's bad art. It's bad art in the sense of being "eerie," the word Mark Fisher uses to describe "when there is something present where there should be nothing, or there is nothing present when there should be something."

AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it's missing something. It has nothing to say, or whatever it has to say is so diluted that it's undetectable.

The images were striking before we figured out the trick, but now they're just like the images we imagine in clouds or piles of leaves. We're the ones drawing a frame around part of the scene, we're the ones focusing on some contours and ignoring the others. We're looking at an inkblot, and it's not telling us anything.

Sometimes that can be visually arresting, and to the extent that it amuses people in a community of prompters and viewers, that's harmless.

I know someone who plays a weekly Dungeons and Dragons game over Zoom. It's transcribed by an open source model running locally on the dungeon master's computer, which summarizes the night's session and prompts an image generator to create illustrations of key moments. These summaries and images are hilarious because they're full of errors. It's a bit of harmless fun, and it brings a small amount of additional pleasure to a small group of people. No one is going to fire an illustrator because D&D players are image-genning funny illustrations where seven-fingered paladins wrestle with orcs that have an extra hand.
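For the curious, a pipeline like that is genuinely a hobbyist-weekend project. Here's a minimal sketch – my reconstruction, not the dungeon master's actual setup, with the model choices and file names as stand-in assumptions:

```python
# A local transcribe-summarize-illustrate pipeline, like the D&D setup
# described above. Model names and file paths are illustrative guesses.
import torch
import whisper                                  # pip install openai-whisper
from diffusers import StableDiffusionPipeline   # pip install diffusers

# 1. Transcribe the night's session with a local speech-to-text model.
stt = whisper.load_model("base")
transcript = stt.transcribe("dnd_session.mp3")["text"]

# 2. "Summarize." A local LLM would go here; as a crude stand-in, just
#    keep the first few sentences of the transcript.
summary = ". ".join(transcript.split(". ")[:5])

# 3. Prompt a local image generator with the key moment.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(f"fantasy illustration: {summary}").images[0]
image.save("key_moment.png")  # seven-fingered paladins not guaranteed
```

The errors the players find so funny fall out of every stage: the transcription mishears, the summary truncates, and the image model miscounts fingers.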

But bosses have fired illustrators and will fire more, because they fantasize about being able to dispense with creative professionals and just prompt an AI. Because even though the AI can't do the illustrator's job, an AI salesman can convince the illustrator's boss to fire them and replace them with an AI that can't do their job.

This is a disgusting and terrible juncture, and we should not simply shrug our shoulders and accept Thatcherism's fatalism: "There is no alternative."

So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.

And I'm here to tell you they are wrong: wrong because this would inflict terrible collateral damage on socially beneficial activities, and wrong because it would represent a massive expansion of copyright over activities that are currently permitted – for good reason.

Let's break down the steps in AI training.

First, you scrape a bunch of web-pages. This is unambiguously legal under present copyright law. You do not need a license to make a transient copy of a copyrighted work in order to analyze it; otherwise, search engines would be illegal. Ban scraping and Google will be the last search engine we ever get, the Internet Archive will go out of business, and that guy in Austria who scraped all the grocery store sites and proved that the big chains were colluding to rig prices would be in deep trouble.

Next, you perform analysis on those works. Basically, you count stuff in them: count pixels and their colors and proximity to other pixels; or count words. This is obviously not something you need a license for. It's just not illegal to count the elements of a copyrighted work. And we really don't want it to be, not if you're interested in scholarship of any kind.
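To make "counting stuff" concrete, here's a minimal sketch of the kind of analysis at issue – my illustration, not anyone's actual training pipeline – tallying word frequencies across a tiny corpus:

```python
# Counting the elements of works: tally word frequencies across documents.
# A toy illustration of the analysis step, not a real training pipeline.
from collections import Counter
import re

def word_counts(text: str) -> Counter:
    """Count how often each word appears in one document."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "The lazy dog naps; the quick fox runs on.",
]

totals = Counter()
for doc in corpus:
    totals += word_counts(doc)

print(totals.most_common(3))  # [('the', 4), ('quick', 2), ('fox', 2)]
```

Nothing in that loop reproduces the underlying works; it just extracts facts about them, which is the crux of the legal argument.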

And it's important to note that counting things is legal, even if you're working from an illegally obtained copy. Like, if you go to the flea market, and you buy a bootleg music CD, and you take it home and you make a list of all the adverbs in the lyrics, and you publish that list, you are not infringing copyright by doing so.

Perhaps you've infringed copyright by getting the pirated CD, but not by counting the lyrics.

This is why Anthropic offered a $1.5b settlement for training its models based on a ton of books it downloaded from a pirate site: not because counting the words in the books infringes anyone's rights, but because they were worried that they were going to get hit with $150k/book statutory damages for downloading the files.

OK, after you count all the pixels or the words, it's time for the final step: publishing those counts. Because that's what a model is: a literary work (that is, a piece of software) that embodies a bunch of facts about a bunch of other works – word and pixel distribution information, encoded in a multidimensional array.
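In the same spirit, here's a toy version of "distribution information encoded in a multidimensional array" – again my sketch of the principle, nothing like a production model – a bigram table counting how often each word follows each other word:

```python
# "Facts about other works, encoded in an array": a toy bigram model.
# counts[i, j] is how often word j follows word i in the corpus.
import numpy as np

corpus = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(vocab)), dtype=int)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[index[prev], index[nxt]] += 1

# The "model" is just this array of counts -- publishable facts.
# Normalizing each row turns counts into next-word probabilities,
# the kernel of every word-guessing program.
probs = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
print(vocab)
print(counts)
```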

And again, copyright absolutely does not prohibit you from publishing facts about copyrighted works. And again, no one should want to live in a world where someone else gets to decide which truthful, factual statements you can publish.

But hey, maybe you think this is all sophistry. Maybe you think I'm full of shit. That's fine. It wouldn't be the first time someone thought that.

After all, even if I'm right about how copyright works now, there's no reason we couldn't change copyright to ban training activities, and maybe there's even a clever way to wordsmith the law so that it only catches bad things we don't like, and not all the good stuff that comes from scraping, analyzing and publishing.

Well, even then, you're not gonna help out creators by creating this new copyright. If you're thinking that you can, you need to grapple with this fact: we have monotonically expanded copyright since 1976, so that today, copyright covers more kinds of works, grants exclusive rights over more uses, and lasts longer.

And today, the media industry is larger and more profitable than it has ever been, and also: the share of media industry income that goes to creative workers is lower than it's ever been, both in real terms and as a proportion of those incredible gains made by creators' bosses at the media companies.

So how is it that we have given all these new rights to creators, and those new rights have generated untold billions, and yet left creators poorer? It's because in a creative market dominated by five publishers, four studios, three labels, two mobile app stores, and a single company that controls all the ebooks and audiobooks, giving a creative worker extra rights to bargain with is like giving your bullied kid more lunch money.

It doesn't matter how much lunch money you give the kid, the bullies will take it all. Give that kid enough money and the bullies will hire an agency to run a global campaign proclaiming "think of the hungry kids! Give them more lunch money!"

Creative workers who cheer on lawsuits by the big studios and labels need to remember the first rule of class warfare: the things that are good for your boss are rarely good for you.

The day Disney and Universal filed suit against Midjourney, I got a press release from the RIAA, which represents Disney and Universal through their recording arms. Universal is the largest label in the world. Together with Sony and Warner, they control 70% of all music recordings in copyright today.

It starts: "There is a clear path forward through partnerships that both further AI innovation and foster human artistry."

It ends: "This action by Disney and Universal represents a critical stand for human creativity and responsible innovation."

And it's signed by Mitch Glazier, CEO of the RIAA.

It's very likely that name doesn't mean anything to you. But let me tell you who Mitch Glazier is. Today, Mitch Glazier is the CEO of the RIAA, with an annual salary of $1.3m. But until 1999, Mitch Glazier was a key Congressional staffer, and in 1999, Glazier snuck an amendment into an unrelated bill, the Satellite Home Viewer Improvement Act, that killed musicians' right to take their recordings back from their labels.

This is a practice that had been especially important to "heritage acts" (which is a record industry euphemism for "old music recorded by Black people"), for whom this right represented the difference between making rent and ending up on the street.

When it became clear that Glazier had pulled this musician-impoverishing scam, there was so much public outcry that Congress actually came back for a special session, just to vote again to cancel Glazier's amendment. And then Glazier was kicked out of his cushy Congressional job, whereupon the RIAA started paying him more than $1m/year to "represent the music industry."

This is the guy who signed that press release in my inbox. And his message was: The problem isn't that Midjourney wants to train a Gen AI model on copyrighted works, and then use that model to put artists on the breadline. The problem is that Midjourney didn't pay RIAA members Universal and Disney for permission to train a model. Because if only Midjourney had given Disney and Universal several million dollars for training rights to their catalogs, the companies would have happily allowed them to train to their heart's content, and they would have bought the resulting models, and fired as many creative professionals as they could.

I mean, have we already forgotten the Hollywood strikes? I sure haven't. I live in Burbank, home to Disney, Universal and Warner, and I was out on the line with my comrades from the Writers Guild, offering solidarity on behalf of my union, IATSE Local 839, The Animation Guild, where I'm a member of the writers' unit.

And I'll never forget when one writer turned to me and said, "You know, you prompt an LLM exactly the same way an exec gives shitty notes to a writers' room. You know: 'Make me ET, except it's about a dog, and put a love interest in there, and a car chase in the second act.' The difference is, you say that to a writers' room and they all make fun of you and call you a fucking idiot suit. But you say it to an LLM and it will cheerfully shit out a terrible script that conforms exactly to that spec (you know, Air Bud)."

These companies are desperate to use AI to displace workers. When Getty Images sues AI companies, it's not representing the interests of photographers. Getty hates paying photographers! Getty just wants to get paid for the training run, and they want the resulting AI model to have guardrails, so it will refuse to create images that compete with Getty's images for anyone except Getty. But Getty will absolutely use its models to bankrupt as many photographers as it possibly can.

A new copyright to train models won't get us a world where models aren't used to destroy artists, it'll just get us a world where the standard contracts of the handful of companies that control all creative labor markets are updated to require us to hand over those new training rights to those companies. Demanding a new copyright just makes you a useful idiot for your boss, a human shield they can brandish in policy fights, a tissue-thin pretense of "won't someone think of the hungry artists?"

What they're really demanding is a world where 30% of the AI companies' investment capital goes into their shareholders' pockets. When an artist is being devoured by rapacious monopolies, does it matter how they divvy up the meal?

We need to protect artists from AI predation, not just create a new way for artists to be mad about their impoverishment.

And incredibly enough, there's a really simple way to do that. After 20+ years of being consistently wrong and terrible for artists' rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That's why the "monkey selfie" is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.

And not only has the Copyright Office taken this position, they've defended it vigorously in court, repeatedly winning judgments to uphold this principle.

The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works, then anyone else can take those works, copy them, sell them, or give them away for free. And the only thing those companies hate more than paying creative workers is having other people take their stuff without permission.

The US Copyright Office's position means that the only way these companies can get a copyright is to pay humans to do creative work. This is a recipe for centaurhood. If you're a visual artist or writer who uses prompts to come up with ideas or variations, that's no problem, because the ultimate work comes from you. And if you're a video editor who uses deepfakes to change the eyelines of 200 extras in a crowd-scene, then sure, those eyeballs are in the public domain, but the movie stays copyrighted.

But creative workers don't have to rely on the US government to rescue us from AI predators. We can do it ourselves, the way the writers did in their historic writers' strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also because they're allowed to do something that virtually no other workers are allowed to do: engage in "sectoral bargaining," whereby all the workers in a sector negotiate a contract with every employer in the sector.

That's been illegal for most workers since the late 1940s, when the Taft-Hartley Act outlawed it. If we are gonna campaign to get a new law passed in hopes of making more money and having more control over our labor, we should campaign to restore sectoral bargaining, not to expand copyright.

Our allies in a campaign to expand copyright are our bosses, who have never had our best interests at heart, while our allies in the fight for sectoral bargaining are every worker in the country. As the song goes, "Which side are you on?"

OK, I need to bring this talk in for a landing now, because I'm out of time, so I'm going to close out with this: AI is a bubble and bubbles are terrible.

Bubbles transfer the life's savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it.

But not every bubble is created equal. Some bubbles leave behind something productive. Worldcom stole billions from everyday people by defrauding them about orders for fiber optic cables. The CEO went to prison and died shortly after his release. But the fiber outlived him. It's still in the ground. At my home, I've got 2Gb symmetrical fiber, because AT&T lit up some of that old Worldcom dark fiber.

All things being equal, it would have been better if Worldcom hadn't ever existed, but the only thing worse than Worldcom committing all that ghastly fraud would be if there was nothing to salvage from the wreckage.

I don't think we'll salvage much from cryptocurrency, for example. Sure, there'll be a few coders who've learned something about secure programming in Rust. But when crypto dies, what it will leave behind is bad Austrian economics and worse monkey JPEGs.

AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?

We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.

If there had never been an AI bubble, if all this stuff arose merely because computer scientists and product managers noodled around for a few years coming up with cool new apps for back-propagation, machine learning and generative adversarial networks, most people would have been pleasantly surprised with these interesting new things their computers could do. We'd call them "plugins."

It's the bubble that sucks, not these applications. The bubble doesn't want cheap useful things. It wants expensive, "disruptive" things: Big foundation models that lose billions of dollars every year.

When the AI investment mania halts, most of those models are going to disappear, because it just won't be economical to keep the data-centers running. As Stein's Law has it: "Anything that can't go on forever eventually stops."

The collapse of the AI bubble is going to be ugly. Seven AI companies currently account for more than a third of the stock market, and they endlessly pass around the same $100b IOU.

Bosses are mass-firing productive workers and replacing them with janky AI, and when the janky AI is gone, no one will be able to find and re-hire most of those workers. We're going to go from dysfunctional AI systems to nothing.

AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.

So we need to get rid of this bubble. Pop it, as quickly as we can. To do that, we have to focus on the material factors driving the bubble. The bubble isn't being driven by deepfake porn, or election disinformation, or AI image-gen, or slop advertising.

All that stuff is terrible and harmful, but it's not driving investment. The total dollar figure represented by these apps doesn't come close to making a dent in the capital expenditures and operating costs of AI. They are peripheral, residual uses: flashy, but unimportant to the bubble.

Get rid of all those uses and you reduce the expected income of AI companies by a sum so small it rounds to zero.

Same goes for all that "AI Safety" nonsense, that purports to concern itself with preventing an AI from attaining sentience and turning us all into paperclips. First of all, this is facially absurd. Throwing more words and GPUs into the word-guessing program won't make it sentient. That's like saying, "Well, we keep breeding these horses to run faster and faster, so it's only a matter of time until one of our mares gives birth to a locomotive." A human mind is not a word-guessing program with a lot of extra words.

I'm here for science fiction thought experiments, don't get me wrong. But also, don't mistake SF for prophecy. SF stories about superintelligence are futuristic parables, not business plans, roadmaps, or predictions.

The AI Safety people say they are worried that AI is going to end the world, but AI bosses love these weirdos. Because on the one hand, if AI is powerful enough to destroy the world, think of how much money it can make! And on the other hand, no AI business plan has a line on its revenue projections spreadsheet labeled "Income from turning the human race into paperclips." So even if we ban AI companies from doing this, we won't cost them a dime in investment capital.

To pop the bubble, we have to hammer on the forces that created the bubble: the myth that AI can do your job, especially if you get high wages that your boss can claw back; the understanding that growth companies need a succession of ever-more-outlandish bubbles to stay alive; the fact that workers and the public they serve are on one side of this fight, and bosses and their investors are on the other side.

Because the AI bubble really is very bad news, it's worth fighting seriously, and a serious fight against AI strikes at its roots: the material factors fueling the hundreds of billions in wasted capital that are being spent to put us all on the breadline and fill all our walls with high-tech asbestos.

(Image: Cryteria, CC BY 3.0, modified)


Object permanence (permalink)

#20yrsago Haunted Mansion papercraft model adds crypts and gates https://www.haunteddimensions.raykeim.com/index313.html

#20yrsago Print your own Monopoly money https://web.archive.org/web/20051202030047/http://www.hasbro.com/monopoly/pl/page.treasurechest/dn/default.cfm

#15yrsago Bunnie explains the technical intricacies and legalities of Xbox hacking https://www.bunniestudios.com/blog/2010/usa-v-crippen-a-retrospective/

#15yrsago How Pac Man’s ghosts decide what to do: elegant complexity https://web.archive.org/web/20101205044323/https://gameinternals.com/post/2072558330/understanding-pac-man-ghost-behavior

#15yrsago Glorious, elaborate, profane insults of the world https://www.reddit.com/r/AskReddit/comments/efee7/what_are_your_favorite_culturally_untranslateable/?sort=confidence

#15yrsago Walt Disney World castmembers speak about their search for a living wage https://www.youtube.com/watch?v=f5BMQ3xQc7o

#15yrsago Wikileaks cables reveal that the US wrote Spain’s proposed copyright law https://web.archive.org/web/20140723230745/https://elpais.com/elpais/2010/12/03/actualidad/1291367868_850215.html

#15yrsago Cities made of broken technology https://web.archive.org/web/20101203132915/https://agora-gallery.com/artistpage/Franco_Recchia.aspx

#10yrsago The TPP’s ban on source-code disclosure requirements: bad news for information security https://www.eff.org/deeplinks/2015/12/tpp-threatens-security-and-safety-locking-down-us-policy-source-code-audit

#10yrsago Fossil fuel divestment sit-in at MIT President’s office hits 10,000,000,000-hour mark https://twitter.com/FossilFreeMIT/status/672526210581274624

#10yrsago Hacker dumps United Arab Emirates Invest Bank’s customer data https://www.dailydot.com/news/invest-bank-hacker-buba/

#10yrsago Illinois prisons spy on prisoners, sue them for rent on their cells if they have any money https://www.chicagotribune.com/2015/11/30/state-sues-prisoners-to-pay-for-their-room-board/

#10yrsago Free usability help for privacy toolmakers https://superbloom.design/learning/blog/apply-for-help/

#10yrsago In the first 334 days of 2015, America has seen 351 mass shootings (and counting) https://web.archive.org/web/20151209004329/https://www.washingtonpost.com/news/wonk/wp/2015/11/30/there-have-been-334-days-and-351-mass-shootings-so-far-this-year/

#10yrsago Not even the scapegoats will go to jail for BP’s murder of the Gulf Coast https://arstechnica.com/tech-policy/2015/12/manslaughter-charges-dropped-in-bp-spill-case-nobody-from-bp-will-go-to-prison/

#10yrsago Urban Transport Without the Hot Air: confusing the issue with relevant facts! https://memex.craphound.com/2015/12/03/urban-transport-without-the-hot-air-confusing-the-issue-with-relevant-facts/

#5yrsago Breathtaking iPhone hack https://pluralistic.net/2020/12/03/ministry-for-the-future/#awdl

#5yrsago Graffitists hit dozens of NYC subway cars https://pluralistic.net/2020/12/03/ministry-for-the-future/#getting-up

#5yrsago The Ministry For the Future https://pluralistic.net/2020/12/03/ministry-for-the-future/#ksr

#5yrsago Monopolies made America vulnerable to covid https://pluralistic.net/2020/12/03/ministry-for-the-future/#big-health

#5yrsago Section 230 is Good, Actually https://pluralistic.net/2020/12/04/kawaski-trawick/#230

#5yrsago Postmortem of the NYPD's murder of a Black man https://pluralistic.net/2020/12/04/kawaski-trawick/#Kawaski-Trawick

#5yrsago Student debt trap https://pluralistic.net/2020/12/04/kawaski-trawick/#strike-debt

#1yrago "That Makes Me Smart" https://pluralistic.net/2024/12/04/its-not-a-lie/#its-a-premature-truth

#1yrago Canada sues Google https://pluralistic.net/2024/12/03/clementsy/#can-tech


Upcoming books (permalink)

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Memex Method," Farrar, Straus, Giroux, 2026

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026



Colophon (permalink)

Currently writing:

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING.


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


“A Waste of Time”


A little boy spent his first day at school. ‘What did you learn?’ was his aunt’s question. ‘Didn’t learn nothing.’ ‘Well, what did you do?’ ‘Didn’t do nothing. There was a woman wanting to know how to spell “cat,” and I told her.’

— John Scott, The Puzzle King, 1899

The 12-year-old Winston Churchill found examinations “a great trial”: “I would have liked to have been examined in history, poetry and writing essays. … I should have liked to be asked to say what I knew. They always tried to ask what I did not know.”
