
The AI Novelty Cycle


Each additional subscriber puts a song in my heart. Paid subscribers add money to my Stripe account.

OpenAI demonstrated their Sora video generator on February 15th, 2024. People freaked out.

Then, on March 26th, 2026, from the New York Times:

Oops. The video revolution will apparently not be AI generated.

To illustrate what is happening here I want to talk about this 8-track from 1977, Dumb Ditties.

In 1977 I was seven years old, and for a period of time Dumb Ditties was my favorite album. Produced by the K-Tel corporation, Dumb Ditties is a collection of novelty songs like “Monster Mash,” or one of my favorites, “On Top of Spaghetti,” a song that I’m pretty confident I could sing every lyric to:

On top of spaghetti,
All covered with cheese,
I lost my poor meatball,
When somebody sneezed.

…and so on.

But my favorite song was the one by a non-novelty artist, Chuck Berry’s “My Ding-a-ling,” taken from a live performance with a sing-along/call-and-response section in which the entire crowd shouts the chorus:

My ding-a-ling, my ding-a-ling
I want you to play with my ding-a-ling
My ding-a-ling, my ding-a-ling
I want you to play with my ding-a-ling

If I have to explain why a seven-year-old boy would experience a little thrill over being able to march around the house singing a one-and-a-half entendre joke about his privates, you’ve never met a seven-year-old boy.

“My Ding-a-ling” is someone else’s song, and the recording on Dumb Ditties is from the early ’70s, well after Chuck Berry’s heyday as one of the chief progenitors of rock and roll, when he apparently had to resort to novelty song sing-alongs to get the audience going.

I don’t recall listening to “My Ding-a-ling” in the years between 1977 and today, when I started writing this newsletter. Pretty quickly my taste in novelty songs moved on to AC/DC’s “Big Balls” from the album Dirty Deeds Done Dirt Cheap, which has an opening verse and chorus that goes like this:

Well, I'm upper, upper-class, high-society
God's gift to ballroom notoriety
And I always fill my ballroom, the event is never small
The social pages say I've got the biggest balls of all

[Chorus]
I've got big balls, I've got big balls
They're such big balls, and they're dirty big balls
And he's got big balls and she's got big balls
But we've got the biggest balls of them all


Ha! Ha! Ha! Ha! Classic! He’s talking about like, you know, dancing in a ballroom, but it’s really about his balls! You know…his balls! (His testicles.) What a legend!

In fifth grade, I thought this ruled, but it is not the part of the AC/DC catalog I return to today.

It is not hard to identify a novelty song: the only reason you listen to one is the novelty of the joke, and once you hear the joke a few times, the novelty wears off. Unless, that is, the song is “On Top of Spaghetti” and you sing it just to annoy the people around you, which, come to think of it, wears off too. These songs are not good in any sense of the word. Within a few years of Dumb Ditties, “Weird Al” Yankovic would hit on a superior formula by yoking the novelty to good pop songs you actually want to hear, but even then the enduring appeal is limited.

I think, pretty much since ChatGPT became widely available (November 30, 2022), we’ve been in a repeating novelty cycle when it comes to generative AI applications, where we are initially captured and intrigued, only to have some measure of the novelty wear off, leaving disillusionment in its wake.

The saga of Sora is a perfect example, an application that was going to upend all of filmmaking, turning into a dead commercial enterprise in just over two years. But this is far from the only example. In fact, I think you can look at almost the entirety of the public discourse around the capacities of this technology as a repeating novelty cycle.

For something to be a novelty, it must simultaneously surprise, and either entertain or upset a status quo. “My Ding-a-ling” is compelling because it authorizes young children to express something nominally dirty without getting in trouble. (At least in my household. Kudos to Mom and Dad for knowing which battles to fight, as banning “My Ding-a-ling” would have only prolonged my interest.)

Sora threatened to upend all of the television and movie industries, but in reality, making short videos from prompts is, at best, a novelty. OpenAI tried to squeeze some additional juice by striking a $1 billion deal to license Disney characters to the platform, but I think the writer who likens Sora to “MadLibs” nails it: a fun game to play with your friends like once a year, at best.

What is going on that we so readily and serially mistake novelty for something that’s meaningful, transformative, and enduring?

This very famous early response to the appearance of ChatGPT falls into the novelty trap:

Personally, I was not all that astounded by what ChatGPT could produce (and said so), because the thing the author above is reacting to - near instantaneous surface-level competent five-paragraph essays on academic subjects - is something that I, personally, put no stock in. They did not have value when done by students and so they also don’t have value when they are done by large language models.

Sora appeared to have value when demonstrated in short clips because people were willing, perhaps even eager to extrapolate from 30 seconds of video to extended video narratives on TV and film. But, as it turns out, to make a satisfactory - never mind good - full-length narrative video you need all kinds of capacities that are beyond an AI video generator.

I cannot emphasize enough that all of this was known at the time of Sora’s arrival, and yet the coverage was nonetheless breathless. That students should be doing work other than cranking out formulaic five-paragraph essays was also a known known, and yet there we were, hyperventilating over something that wasn’t worth a human’s time anyway.

Am I getting a little frustrated here? Maybe. Three and a half years later, we still can’t manage to hunker down and treat this technology seriously, rather than lurching after novelty. Ethan Mollick, one of our leading experts on generative AI, a guy who will cost your organization six figures to come opine about this stuff, essentially tests these models with a deliberate novelty, asking them to create a video of “an otter using a laptop on an airplane.” You can see how much better models have gotten at rendering this stuff over the last couple of years, and it’s impressive.

But so what! Who gives a shit? For much less than six figures I will gladly come to your organization to discuss how, if we’re going to manage ourselves in a world with AI technology, we cannot take these wild swings based on responses to novelty.

How many of these benchmarks are actually tied to something meaningful in the world? I don’t think the otter on a plane makes the cut.

I have started to collect examples of people falling out of fascination with the novelty, and I’ve particularly enjoyed the journey of the writer John Ganz over the last six weeks or so. Ganz is the author of When the Clock Broke: Con Men, Conspiracists, and How America Cracked Up in the Early 1990s, as well as a newsletter. Ganz is a great example of the virtue of “unique intelligences” that I mulled over a couple of weeks ago.

It is a pleasure to see Ganz’s mind at work because it is a mind that is undeniably alive, and it is Ganz’s willingness to speak and write his mind that is (in my view) the chief source of his success. So, I was a little surprised when he used Claude Code to “vibe code” something he called “Polybius,” an automated program meant to measure an “Authoritarian Consolidation Index,” a gauge of how close a particular society is to authoritarian rule.

I probably shouldn’t have been surprised because Ganz is obviously a curious person who - like me - had been hearing a lot about Claude Code and autonomous agents, but unlike me had a potentially interesting idea about what he might code.

To be honest, I never bothered looking at Polybius because it had the whiff of novelty to me - as any vibe coded app should - so I was interested to hear Ganz’s remarks in his most recent weekly conversation with Max Read:

“I’m back to bearish on AI…I gotta tell you. I’ve been using Claude and I did build something out of it…I don’t think what I built, and maybe this says more about me than it, but I don’t think what I built is working anymore. It’s kind of a piece of garbage, to be honest with you.”

Ganz goes on to say that it requires a lot of work to make it work, time he hasn’t had, and if you listen to his tone in the conversation with Max Read you can sense some understandable exasperation. This thing was supposed to be a kind of miracle, but it just isn’t.

Ganz literally says, “I honestly have gotten back to the point where I’d gotten from ‘this is a miracle machine’ to ‘this is stupid.’”

Fortunately, Ganz has all the pre-existing capacities and knowledge (what I call a practice) to understand the limits of what he’s built and the technology in general. He knows the work he wants to do and that the tool is not necessarily well-suited to it. This is the kind of mind we should be inculcating in students, a mind that is resistant to novelty passing for meaning.

I have had people who are enthusiastic about these tools reach out to me and show me things they’ve done that seem amazing, but which, from my perspective, are clearly novelties. I previously linked to a neat piece in which the writer vibe coded a program to create a “power ranking” of writers since 1965.

It’s neat! But also, so what? What do we do with this? What, if anything, can be extended from this new knowledge? Perhaps something, but the discovery of that something requires us to move past novelty. Maybe Claude Code or autonomous agents are going to change the digital humanities forever, though I maintain that the chief skill of a good digital humanist will remain being able to formulate an interesting question from which an answer makes us want to ask and answer more questions.

I think the most significant, self-inflicted problem institutions and organizations are facing - particularly educational ones - is being seduced by novelty and mistaking novelty for something enduringly meaningful.

On Bluesky I worked through a bit of a thread responding to the announcement that the Canvas learning management system would incorporate AI as a “teaching agent.”

That screenshot gives you the nut of where I stand, but further down I note the use that Canvas envisions, using the AI to make grading rubrics for you. The problem is that student-facing rubrics are themselves a kind of novelty - a persistent one, but a novelty nonetheless - that is poorly aligned with the kinds of experiences and assessments that help students become competent and confident writers.

The agentic AI is adding novelty on top of novelty.

The challenge here is to know novelty when you see it. When it comes to a novelty song, all you need is your inherent humanity, which rather quickly grows bored of the merely novel. A lot of the music I listened to as a kid proved to be more novelty than enduring. I once could not get enough of Ted Nugent screaming “Anyone who wants to get mellow can get the fuck out of here!” before kicking into “Wang Tang Sweet Poontang” on his Double Live Gonzo album, though truth be told, I did not even understand the reference at the time; I would’ve professed myself a fan of “the Nuge.” I don’t listen to Ted Nugent anymore because his music just kind of sucks.

There is other music from that 8-track era that I do still listen to, such as Marvin Gaye Live at the London Palladium, an album my dad played relentlessly in the Oldsmobile Vista Cruiser wagon, one I might’ve even complained about at the time, but which is for sure no novelty.

Chuck Berry is appropriately remembered for his contributions to rock and roll, not his novelty song. Ted Nugent is remembered as a guy who shoots things with crossbows and soon enough he will not be remembered at all. There are precisely zero Sora videos that have entered the public consciousness, making it less successful than Dumb Ditties, which at least has “On Top of Spaghetti.” I’m confident Polybius is the least interesting thing John Ganz will make.

Whenever the next amazing AI thing shows up, remember to ask yourself whether it’s a novelty. The answer is probably yes.

(Where do you see novelties in the AI world?)


Links

This week at the Chicago Tribune I offered my take on Tana French’s conclusion to the Ardnakelty trilogy, The Keeper.

At Inside Higher Ed I published a very excellent guest post from Julia Morgan McKenzie “In Defense of Long Writing.”

At I published a Q&A with Tim Cain about a new report that collects the specific language on academic freedom in collective bargaining agreements. A great resource for labor work in higher education.

cannot shut up about these books. Nor should she!

Nobody understands or utilizes this platform to better effect than who taught me a new genre of writing, “effort posting,” while also confirming that I will never effort post.

I notice that South Carolina indie publisher Hub City got a very nice shoutout in this profile of the writer Nancy Lemann who is having her earlier work republished in a big way.

Via my friends and in honor of baseball’s opening day, a classic from the archives, “Casey ‘At the Bat’ Responds to that Mean Poem about Him,” by Jeremiah Budin.


Recommendations

1. Creation Lake by Rachel Kushner
2. The Night Circus by Erin Morgenstern
3. A Manual For Cleaning Women by Lucia Berlin
4. James by Percival Everett
5. Hamnet by Maggie O’Farrell

Lesley S. - Marina, CA

For Lesley I’m going with a book that’s a little shaggy in terms of structure and execution, but which I thought gets so much right in its close attention to its characters that it completely won me over, Wayward by Dana Spiotta.

Request a bespoke book recommendation

As I was working on this edition, UPS delivered a custom package containing Marlon James’ forthcoming novel, The Disappearers, with the cover image embossed into the cardboard.

It was so nice that I wanted to disassemble and frame the cardboard. I felt sort of excited that a press can still go an extra mile or two for a book like this. It doesn’t publish until September, so I won’t be reading it until this summer, but it looks great.

I’ll see you all again next week.

JW
The Biblioracle



Antienshittification


Cory Doctorow has a word for what happens to internet platforms: enshittification. First it's good for users. Then it squeezes users to serve advertisers. Then it squeezes everyone to serve shareholders. Finally it dies, taking everything you built on it with it.

The farther they get from their founding moment, the more extractive platforms become. We've all lived this cycle. The pattern feels like a law of physics. 

We keep building our identities, audiences, and creative histories on platforms we don't control, hoping this time will be different. It won't be. Because the problem isn't the people running the platforms. It's the architecture.

The dark forest

In 2019 I wrote an essay that introduced the dark forest theory of the internet — the idea that surveillance, ads, and trolls had made the public web a hostile environment. People were retreating into private groupchats and invite-only communities to avoid the fray.

That movement has only grown since. It's become clear that the public, ad-supported spaces are not here to serve us. They’re adversarial to our interests and goals. We go online for connection and fun. Their algorithms trap us in ad-supported doomscrolls carefully designed to be hard to escape.

People move into groupchats, Discords, and private spaces — quieter corners, but still rented ones. Tools built for an earlier internet. They can't evolve with the communities inside them, and the data you build there still belongs to someone else.

Last year, Metalabel started working on a new platform that imagines a different world. Called Dark Forest Operating System, or DFOS, it’s a private internet of protected, member-governed spaces where people can be safe and real together in worlds of their own. Each DFOS space comes preloaded with a groupchat, a private posts feed, a shared treasury, DMs, and subgroups, plus the ability to customize and expand as each community sees fit.

[Image: The homescreen of the New Creative Era DFOS space]

Crucially, this platform — which launches next month — isn’t ad-supported and doesn’t use algorithmic feeds to dictate what people see. There’s no advertising layer or rage-based business model to feed. It’s member-supported, which puts members in full control of their own experience. Our incentives are aligned with yours. We grow when you grow. If we fail to serve you, we fail, full stop.

Aligned business models are critical, but DFOS goes further. Underneath DFOS is a protocol that makes it structurally difficult to extract your data without your participation, in a way that a terms-of-service change cannot undo.

We built DFOS to be antienshittified.

The DFOS protocol

Existing platforms own and control your data. You create an account on their servers according to their rules and exist at their whim.

DFOS is different. The architecture that runs underneath our service — the DFOS protocol — presents an entirely different approach to data, ownership, and privacy.

[Image: Screenshot of protocol.dfos.com]

Instead of your data living within a username on someone’s server, the DFOS protocol derives your identity from cryptographic keys you control. Your identity is a signed record of your actions — a tamper-proof log that any device can verify, offline, without asking anyone's permission.
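
The post doesn’t publish the protocol’s actual signature scheme, so as an illustration of the tamper-proof-log idea only, here is a minimal hash-chained action log in Python. An HMAC with a holder-only key stands in for a real public-key signature, and every name and action string is hypothetical:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"holder-only-signing-key"  # stand-in for a real private key


def append_action(log, action):
    """Append an action; each entry commits to the hash of the one before it."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode()).hexdigest()
    sig = hmac.new(SECRET_KEY, entry_hash.encode(), hashlib.sha256).hexdigest()
    log.append({"action": action, "prev": prev_hash,
                "entry_hash": entry_hash, "sig": sig})
    return log


def verify_log(log):
    """Any device can replay the chain and check every link and signature."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"action": entry["action"], "prev": entry["prev"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["entry_hash"]:
            return False
        expected = hmac.new(SECRET_KEY, entry["entry_hash"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["sig"], expected):
            return False
        prev_hash = entry["entry_hash"]
    return True


log = []
append_action(log, "joined space: clear.txt")
append_action(log, "posted: hello, forest")
assert verify_log(log)

log[0]["action"] = "tampered"  # any edit anywhere breaks the whole chain
assert not verify_log(log)
```

The chaining is what makes the log tamper-proof: editing one entry invalidates its own hash and every entry after it.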

This happens using an open data architecture standard called a DID, or Decentralized Identifier. These are publicly resolvable addresses, referenced by websites, that contain cryptographically protected information about a person and the actions they’ve taken. With DIDs, you can connect with sites and services while your data stays yours.

Your DID is a portable, consistent identity online, without the need for corporate overlords. The AT protocol, which powers Bluesky, also uses DIDs — more on it in a moment. None of this involves blockchains.
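
For concreteness, here is what a minimal DID document looks like under the W3C DID Core data model. The identifier and key value below are invented, and the post doesn’t specify which DID method DFOS itself uses:

```json
{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:example:alice123",
  "verificationMethod": [{
    "id": "did:example:alice123#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:example:alice123",
    "publicKeyMultibase": "z6Mk...example..."
  }],
  "authentication": ["did:example:alice123#key-1"]
}
```

Any service that resolves this document can check signatures against the listed public key, which is what lets your identity travel with you rather than living on one company’s server.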

Content works the same way. When you publish a post through the DFOS protocol, what gets recorded publicly is a cryptographic proof that you authored it — not the content itself. The proof is public, but the content it holds stays private.

This solves another problem of the dark forest era: if everything is private and hidden, how do you establish trust, authorship, and reputation without dragging everything back into the extractive public layer? The DFOS protocol lets you prove you made something without revealing what it is.
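
The exact record format isn’t published in this post, but the standard primitive for proving you made something without revealing it is a salted hash commitment. A minimal sketch, with made-up content and salt:

```python
import hashlib


def commit(content: bytes, salt: bytes) -> str:
    """Return a digest that commits to content without revealing it."""
    return hashlib.sha256(salt + content).hexdigest()


# At posting time, the author publishes only the commitment.
post = b"meeting notes: the forest is lovely, dark and deep"
salt = b"random-per-post-salt"  # prevents guessing attacks on short posts
public_proof = commit(post, salt)

# Later, the author can reveal post + salt and anyone can check the match.
assert commit(post, salt) == public_proof
# A different text cannot produce the same proof.
assert commit(b"forged notes", salt) != public_proof
```

The salt matters: without it, anyone could brute-force short or predictable posts by hashing guesses until one matched the public proof.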

DFOSBOX

As an example of what you can practically do, I made a simple app on top of the DFOS protocol called DFOSBOX. It's a Dropbox clone focused only on blog entries and comments I’ve made across the 16 DFOS spaces I’m in.


After logging in with my DFOS credentials, DFOSBOX visits a relay server that holds my data, checks my DID, verifies it’s me asking, and then copies and syncs everything I’ve made to a folder on my machine. Every post, piece of content, and action gets mirrored to a location I control.

[Image: Screenshot of my DFOSBOX sync]

This means when a platform enshittifies — when it changes the rules, locks you out, or just goes dark — you’re protected. You already have your data, saved somewhere else, that's verifiably yours.

Imagine if other services did this. Every YouTube video, IG post, tweet, and Substack you posted would be automatically saved and preserved in a consistent structure fully under your control. That's what the DFOS protocol allows you to do.

DFOSBOX is a project I made, but I’m not the only one building here. Three other dev projects have already emerged in our private alpha, using the protocol to mirror their data and power complementary products beyond DFOS.

[Image: When those relays fire]

A changing web

We think this is just the beginning.

Arguments for adopting more advanced technical systems, or more do-gooder tech, often require some sacrifice in the end experience: it’s harder, more complicated to use. That is not the case with DIDs. We’re able to create what we believe is a more empowering, richer, and more intuitive product experience, while the powerful capabilities under the hood open up something much bigger.


When we started using DIDs on Metalabel three years ago — prompted by my brilliant Metalabel/DFOS cofounder Brandon Valosek — we were inspired by Bluesky and the AT protocol, which laid the path. What’s amazing is that even though the AT and DFOS protocols use DIDs for opposite purposes — AT creating a standard for decentralized public data, DFOS creating a standard for decentralized private data — because DIDs are simple, flexible, and open source, our two worlds can become compatible over time if developers wish, and extend into new worlds as well.

Read the protocol spec at protocol.dfos.com. If you're a developer, hop into clear.txt, a DFOS space for devs, for more. If you’d like to explore DFOS, join the waitlist.

The internet belongs to everyone, not the few. Its technical foundations should too. Welcome to our forest.


Working on products people hate


I’ve worked on a lot of unpopular products.

At Zendesk I built large parts of an app marketplace that was too useful to get rid of but never polished enough to be loved. Now I work on GitHub Copilot, which many people think is crap[1]. In between, I had some brief periods where I worked on products that were well-loved. For instance, I fixed a bug where popular Gists would time out once they got more than thirty comments, and I had a hand in making it possible to write LaTeX mathematics directly in GitHub markdown[2]. But I’ve spent years working on products people hate[3].

If I were a better developer, would I have worked on more products people love? No. Even granting that good software always makes a well-loved product, big-company software is made by teams, and teams are shaped by incentives. A very strong engineer can slightly improve the quality of software in their local area. But they must still write code that interacts with the rest of the company’s systems, and their code will be edited and extended by other engineers, and so on until that single engineer’s heroics are lost in the general mass of code commits. I wrote about this at length in How good engineers write bad code at big companies.

Looking back, I’m glad that people have strongly disliked some of the software I’ve built, for the same reason that I’m glad I wasn’t born into oil money. If I’d happened to work on popular applications for my whole career, I’d probably believe that that was because of my sheer talent. But in fact, you would not be able to predict the beloved and disliked products I worked on from the quality of their engineering. Some beloved features have very shaky engineering indeed, and many features that failed miserably were built like cathedrals on the inside[4]. Working on products people hate forces you to accept how little control individual engineers have over whether people like what they build.

In fact, a reliable engineer ought to be comfortable working on products people hate, because engineers work for the company, not for users. Of course, companies want to delight their users, since delighted users will pay them lots of money, and at least some of the time we’re lucky enough to get to do that. But sometimes they can’t: for instance, they might have to tighten previously-generous usage limits, or shut down a beloved product that can’t be funded anymore. Sometimes a product is funded just well enough to exist, but not well enough to be loved (like many enterprise-grade box-ticking features) and there’s nothing the engineers involved can do about it.

It can be emotionally difficult working on products that people hate. Reading negative feedback about things you built feels like a personal attack, even if the decisions they’re complaining about weren’t your decisions. To avoid this emotional pain, it’s tempting to make the mistake of ignoring feedback entirely, or of convincing yourself that you’re much smarter than the stupid users anyway. Another tempting mistake is to go too far in the other direction: to put yourself entirely “on the user’s side” and start pushing your boss to do the things they want, even if it’s technically (or politically) impossible. Both of these are mistakes because they abdicate your key responsibility as an engineer, which is to try and find some kind of balance between what’s sustainable for the company and what users want. That can be really hard!

There’s also a silver lining to working on disliked products, which is that people only care because they’re using them. The worst products are not hated, they are simply ignored (and if you think working on a hated product is bad, working on an ignored product is much worse). A product people hate is usually providing a fair amount of value to its users (or at least to its purchasers, in the case of enterprise software). If you’re thick-skinned enough to take the heat, you can do a lot of good in this position. Making a widely-used but annoying product slightly better is pretty high-impact, even if you’re not in a position to fix the major structural problems.

Almost every engineer will work on a product people hate. That’s just the law of averages: user sentiment waxes and wanes over time, and if your product doesn’t die a hero it will live long enough to become the villain. Given that, it’s sensible to avoid blaming the engineers who work on unpopular products. Otherwise you’ll end up blaming yourself, when it’s your turn, and miss the best chances in your career to have a real positive impact on users.


  1. We used to be broadly liked, then disliked when Cursor and Claude Code came out, and now I’m fairly sure the Copilot CLI tool is changing people’s minds again. So it goes.

  2. Although even that got some heated criticism at the time.

  3. Of course, I don’t mean “every single person hates the software”, or even “more than half of its users hate it”. I just mean that there are enough haters out there that most of what you read on the internet is complaints rather than praise.

  4. This is reason number five thousand why you can’t judge the quality of tech companies from the outside, no matter how much you might want to (see my post on “insider amnesia”).


Endgame for the Open Web


You must imagine Sam Altman holding a knife to Tim Berners-Lee's throat.

It's not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But all the signs point to our being in the endgame for "open" as we've known it on the Internet over the last few decades.

The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.

Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.

Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count, while — not incidentally — also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that's not good enough.

Now, the centibillionaires have begun their final assault on the last, best parts of what's still open, and likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether or not they succeed is going to be decided by decisions that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.

Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don't say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is.

What does the attack look like?

Calling this threat "existential" is a strong statement, so we should back that up with evidence. The point I want to make here is that this is a lot broader than just one or two isolated examples of trying to win in one market. What we are seeing is the application of the same market-crushing techniques that were used to displace entire industries with the rise of social media and the gig economy, now being deployed across the very open internet infrastructure that made the modern internet possible.

The big tech financiers and venture capitalists who are enabling these attacks are intimately familiar with these platforms, so they know the power and influence that they have — and are deeply experienced at dismantling any systems that have cultural or political power that they can't control. And since they have virtually infinite resources, they're able to carry out these campaigns simultaneously on as many fronts as they need to. The result is an overwhelming wave of threats. It's not a coordinated conspiracy, because it doesn't need to be; they just all have the same end goals in mind.

Some examples:

  • Publishers who still share their content openly, either completely free for their audience, as advertising-supported content, or with a limited amount of content available until they ask for some form of payment, are being absolutely hammered by ill-behaved AI bots. These bots are scouring their sites for every available bit of content, scraping all of it up to feed their LLMs, and then making summaries of that content available to users — typically without consent or compensation. The deal was always simple: search engines had permission to crawl sites because they were going to be sending users to those sites. If they're hitting your site half a million times for every one user they send to your site, all they're giving you is higher costs.
  • LLM-based AI platforms that have trained their AI models on this content gathered without consent typically have almost no links back to the original source content, and either bury or omit credits to the original site; as a result, publishers in categories like tech media have seen their traffic crater by over 50%, with some publishers seeing drops of over 90%.
  • As publishers see the danger from AI bots expand, they retreat to putting more and more content behind password protection, payment walls, or both, leaving the only publicly-accessible content to be AI-generated slop. Open resources like research work, scientific analysis, and fair use of content all suffer as people respond to the bad actors, since legitimate uses of open content are no longer possible. We're seeing this already as publishers block archival sites like the Internet Archive, even though the Internet Archive has at times been the only accurate record of content that was disappeared by authoritarians in the current administration.
  • Open APIs, a building block of how developers create new experiences for users and of how researchers understand people's behavior online, are rapidly being locked down, both because of abuse from LLMs and because extremist CEOs don't want anyone to understand what's happening on their platforms. The clampdown doesn't just affect coders — the people who were best poised to help monitor and translate what's been happening on platforms like Twitter have seen their work under siege, with over 60% of research projects on the platform stalled or abandoned just since Musk shut down their open API access.
  • Independent media based on open formats, like podcasts, are also under siege. Platforms like Apple Podcasts are moving to closed infrastructure, which means content creators are now required to work with Apple's approved partners. Meanwhile, others like Spotify and Netflix leverage their dominant market positions to coerce creators into abandoning open podcasts entirely, in favor of proprietary formats that require listeners to be on those platforms — locking in both creators and their audiences so they're stuck as the enshittification process begins. The net result will be podcasts moving from an open format that isn't controlled by any one company or manipulative algorithm to just another closed social platform monetized by surveillance-based advertising.
  • Open source software projects, which power the vast majority of the internet's infrastructure, are now beleaguered by constant slop code submissions being made by automated AI code agents. These submissions attempt to look like legitimate open source code contributions, and end up overwhelming the largely-underpaid, mostly-volunteer maintainers of open source projects. Dozens of the most popular open source projects have either greatly limited, or even entirely closed their projects to community-based submissions from new contributors as a result. In addition to slowing down and disrupting the open source ecosystem's collaboration model, there's also collateral damage with the destruction of one of the best paths for new coders to establish their credentials, build relationships, and learn to be part of the coding community.
  • The most vital open content platforms, like Wikipedia, are under direct attack from bad-faith campaigns. Elon Musk has created Grokipedia to directly undermine Wikipedia with extremist hate content and conspiracist nonsense, by siphoning off traffic, revenues, and contributors from the site. All of this happens while launching spurious attacks on the credibility of the content on Wikipedia, which have led to such radical rhetoric around the site that gatherings of Wikipedia editors now face interruptions from armed attackers. Meanwhile, Wikipedia's human traffic has dropped significantly as AI platforms trained on its content answer users' questions without ever sending them to the site — a pattern that threatens the volunteer contributions and donations that keep it alive.
  • The open standards and specifications that underpin the Internet as we know it have always succeeded solely on the basis of there being a shared set of norms and values that make them work. In this way, they're like laws — only as strong as the society that agrees they ought to be enforced. A simple text file called robots.txt functioned for decades to describe the way that tools like search engines ought to behave when accessing content on websites, but now it is effectively dead as Big AI companies unilaterally decided to ignore more than a generation of precedent, and do whatever they want with the entirety of the web, completely without consent. Similarly, long-running efforts like Creative Commons and other community-driven attempts at creating shared declarations or definitions for content use are increasingly just ignored.
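The robots.txt convention described above is simple enough to demonstrate in a few lines. Here's a minimal sketch using Python's standard-library parser, with a hypothetical crawler name (`ExampleAIBot`) standing in for real bots; the whole protocol only works because crawlers voluntarily honor answers like these:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that welcomes a search crawler but
# disallows an AI training bot -- exactly the kind of request the
# protocol was designed to express.
robots_txt = """\
User-agent: Googlebot
Allow: /

User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The parser answers the only question the protocol asks:
# may this user-agent fetch this URL?
print(parser.can_fetch("Googlebot", "https://example.com/articles/1"))
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))
```

Nothing technically prevents a bot from ignoring the answer, which is the essay's point: the file is a social contract, not an access control mechanism.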
  • Open source software licenses, which used to be a bedrock of the software community because they provide a consistent way of encoding a set of principles in the form of a legal contract, are now treated as a minor obstacle which can be trivially overcome using LLMs. This means that it's possible to clone code and turn community-driven projects into commercial products without even having to credit the people who invented the original work, let alone compensating them or asking for consent. Many of these efforts are especially egregious because the reason the tools are able to perform this task is because they were trained on this open source code in the first place.

The human cost

The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They're hardly getting rich — that's thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there's no fortune or fame in it.

Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it's the right way to connect with an audience. Publishers who've survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they're trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.

So, we're in the endgame now. They see their chance to run the playbook again: to do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps the way they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, see that we're all in one fight, and push back with the same ferocity with which we're being attacked, then we do have a shot at stopping them.

At one time, it was considered impossibly unlikely that anybody would create open technologies that would succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don't think it's any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.

Taking action

Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight — either those that are at risk, or those that are protecting those at risk. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues, could use your support, and provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member. That’s because I’m trying to make sure my deeds match my words!) These are the people whom I've seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web's defenders.

Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. Few platforms in history have helped more people achieve economic mobility: an enormous number of people got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll extractive LLMs took when they trained models on the generosity of that site's members without any consideration for the impact, and without reciprocating in kind.

The good of the web only exists because of the openness of the web. They can't just keep taking and taking without expecting people to finally draw a line and say "enough". And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick's recent piece where he argued that one of the things that might enable a resurgence of the open web might be... AI. It would seem counterintuitive to anyone who's read everything I've shared here to imagine that anything good could come of the same technologies that have caused so much harm.

But ultimately what matters is power. It is precisely because technologies like LLMs have powers that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don't think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use their same innovative spirit to build what could be, for lack of a better term, called "good AI". It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.

Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time.


An appreciation for (technical) architecture


Once upon a time I kept on meeting architects who had ended up working with the web.

I asked why. Some good answers:

  • Architects think about how people move between spaces (pages) and what that means for user experience - this was at a time when web designers often came from graphic design and drew more on single page layout
  • Architects think about negative space, and how what you put in a space shapes social behaviour – this was at a time before the social web
  • Architects have to work with a lot of different disciplines to make something, and all of those people believe they’re the most important person in the room, and that’s what product teams are like too - lol

I’m not an architect but some of my favourite books are about architecture.

Here are three:

Two things that architecture does have been on my mind recently: how it shapes understanding and how it shapes its own evolution.


Information architecture

It’s a rare designer who operates at both the macro of strategy and culture and organisations, and the micro of craft and taste and interactions.

Jeff Veen is one. I remember him saying to me once: "Design is about creating the right mental model for the user."

(Now clearly design is not only about that, but for the particular problem I took to Veen, he said precisely what I needed to hear to get un-stuck.)

So I love thinking about the primitives of functionality and content for the user and how they relate, such that the user can reason intuitively about what they can do with the system, and how.

And this is an interactive process: how does a first-time user encounter a system, and how do they way-find and learn over time?

And this is a cognitive process: mental models are abstract; what we perceive is real. So how does understanding happen?

(AI agents are using my software. Prioritise clarity over feels.)


Don Norman wrote The Design of Everyday Things (1988), much loved by web designers, and popularised “user-centred design.”

Norman also brought into design the term affordance from cognitive psychology. As coined by J J Gibson: "to perceive something is also to perceive how to approach it and what to do about it" (as previously discussed).

The best way to notice affordances is to notice where they go wrong! Norman doors:

Some doors require printed instructions to operate, while others are so poorly designed that they lead people to do the exact opposite of what they need to in order to open them. Their shapes or details may suggest that pushing should work, when in fact pulling is required (or the other way around).

Whenever you see a PUSH label stuck on as an extra, it’s papering over a Norman door.

I was delighted to encounter a Norman door irl this week.

So I’m stretching the definition of architecture here, to include this, but roll with it pls. Architecture is how things are understood.


Architecture is how things evolve – how they’re allowed to evolve.

There’s a beautiful housing estate on the top of a hill in south London.

Dawson’s Heights (1964) is shaped like an offset double wave, and looks different on the horizon from every angle and with every change of the light. Yet up-close it’s human-scale too, despite its 10 storeys.

Lead architect Kate Macintosh wanted residents to have balconies, but this was regarded as "wasting public money on unnecessary luxuries".

Knowing that they would be removed from her designs for cost-saving, she made them essential:

all the balconies on Dawson’s Heights are fire escape balconies, but they are also private balconies because the escape door is a “break glass to enter” type lock so you can securely use your balcony for whatever you like.


Technical architecture

So software architecture is also team structure – who needs to talk to whom – but it's also how to make sure that doing the quick and dirty thing is also doing the right thing.

Half of software architecture is making sure that somebody can fix a bug in a hurry, add features without breaking it, and be lazy without doing the wrong thing.

I said in 2004.

I think this goes for internal software architecture and for libraries that you import.


The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon.

Like, where’s the bottom? Why not take a plain English spec and grind it out in pure assembly every time? It would run quicker.

But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.

So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!
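As a toy illustration of "the right way is the easy way" (entirely hypothetical, not any particular library), here's an interface where the resource can only be obtained inside a context manager, so forgetting cleanup isn't even expressible:

```python
from contextlib import contextmanager

# A hypothetical session object: usable while open, inert afterwards.
class _Session:
    def __init__(self):
        self.closed = False

    def query(self, text):
        if self.closed:
            raise RuntimeError("session already closed")
        return f"result of {text!r}"

# The library exposes ONLY this entry point, so every caller gets
# guaranteed cleanup without having to remember it.
@contextmanager
def open_session():
    session = _Session()
    try:
        yield session          # the caller works inside the block...
    finally:
        session.closed = True  # ...and teardown always runs

with open_session() as s:
    print(s.query("SELECT 1"))
# Outside the block the session is closed; misuse fails loudly.
```

The design choice is the point: by not offering a bare `Session()` constructor, the library makes the lazy path and the correct path the same path.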

While I’m vibing (I call it vibing now, not coding and not vibe coding), I am looking at lines of code less than ever before, and thinking about architecture more than ever before.

I am sweating developer experience even though human developers are unlikely to ever be my audience.

How do we make libraries that agents love?




Come at the king, you best not miss


Column view cut its teeth on NeXT computers…

…and blossomed on early versions of Mac OS X…

…but where I thought it really shone was on the first iPods:

This was perhaps the most fun you could ever have navigating a hierarchy of things; it made sense what left/​right/up/down meant in this universe, to a point you could easily build a mental model of what goes where, even if your viewport was smaller than ever.
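That mental model is simple enough to sketch. This is my own reconstruction, not Apple's code: the entire navigation state of a column view is just a path into the hierarchy plus a cursor, which is why the four directions feel so unambiguous:

```python
# A toy content tree (nested dicts) standing in for an iPod's library.
TREE = {
    "Music": {"Artists": {"Chuck Berry": {}, "AC/DC": {}}, "Playlists": {}},
    "Settings": {},
}

class ColumnNav:
    def __init__(self, tree):
        self.tree = tree
        self.path = []    # where we've drilled in so far
        self.cursor = 0   # highlighted row at the current level

    def _items(self):
        node = self.tree
        for name in self.path:
            node = node[name]
        return list(node)

    def down(self):       # move highlight within the current column
        if self._items():
            self.cursor = min(self.cursor + 1, len(self._items()) - 1)

    def up(self):
        self.cursor = max(self.cursor - 1, 0)

    def right(self):      # drill into the highlighted item
        items = self._items()
        if items:
            self.path.append(items[self.cursor])
            self.cursor = 0

    def left(self):       # back out one level
        if self.path:
            self.path.pop()
            self.cursor = 0

nav = ColumnNav(TREE)
nav.right()               # into "Music"
nav.right()               # into "Artists"
print(" > ".join(nav.path))
```

Up/down select within a list, right goes deeper, left goes back: four buttons, one unambiguous meaning each, and the user's mental model of "where am I" is literally the `path`.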

It was also a close-to-ideal union of software and hardware, admirable in its simplicity and attention to detail. This is where Apple practiced momentum curves, haptics (via a tiny speaker, doing haptic-like clicks), and handling touch programmatically (only the first iPod had a physically rotating wheel, later replaced by stationary touch-sensitive surfaces) – all necessary to make iPhone’s eventual multi-touch so successful. And, iPhone embraced column views wholesale, for everything from the Music app (obvi), through Notes, to Settings.

Well, sometimes you don’t appreciate something until it’s taken away. Here are settings in the iOS version of Google Maps:

I am not sure why the designers chose to deviate from the standard, replacing a clear Y/X relationship with a more confusing Y/Z-that-looks-very-much-like-Y. They kept the chevrons hinting at the original orientation – and they probably had to, as vertical chevrons have a different connotation, but perhaps this was the warning sign right here not to change things.

I think the principle is, in general: if you’re reinventing something well-established, both your reasoning and your execution have to be really, really solid. I don’t think that happened here. (Other Google apps seem to use the standard column view model.)
