
Being Fed Content


From an interview (gift link) with Don Hertzfeldt, creator of World of Tomorrow:

Not to sound like a curmudgeon, but when I was a teenager, I took the train to go to the record store to find rare stuff. Spotify is way more convenient, but that wasn’t the point. The point was to get out and to feel like you’re hunting, to feel like you’re living your life. I’m going to the movies, I’m going to this show. What streaming has done—it’s very convenient, but it’s taken the feeling of going hunting and turned it into we’re all just being fed. We’re all farm animals that are just being fed, and we’re being fed content. You can just stay home. Just stay home. We’ll just feed it to you. No wonder everyone’s depressed.

I feel like Xochitl Gonzalez’s piece on robotaxis, People Who Don’t Like People Are Making All of Our Decisions, rhymes with Hertzfeldt’s comments:

For two decades, I have watched us blindly fall for one sales pitch after another. Every app and advancement comes shrouded in promises of “progress” and “connectivity” and “convenience.” And in many early cases — such as the invention of ride-sharing apps — Silicon Valley truly did deliver a better mousetrap. But we’re getting diminishing returns. We are living in Silicon Valley’s future now, and we are lonelier, more anxious, and more polarized than ever before.

Tags: Don Hertzfeldt · interviews · Xochitl Gonzalez

Read the whole story
mrmarchant
48 minutes ago

That Damned LMS Dependency


For the second time in less than a year, a major disruption to the learning management system Canvas has prompted cries (and headlines) that “we can’t do school!”

In the fall of 2025, an Amazon Web Services outage rendered the LMS inaccessible. At the time, Wired spoke to dozens of affected students who complained that “the Canvas outage threw off their schedules, preventing them from not just submitting and viewing assignments but also from participating in class activities, contacting professors, and accessing the textbooks and other materials they need to study.”

This week’s news was even worse, in terms of timing (finals!) and impact – “‘The Biggest Student Data Privacy Disaster in History’” read the 404 Media headline. The LMS was taken offline by the hacking group ShinyHunters, which demanded the company pay a ransom or face a data leak – data, including names, email addresses, and messages, belonging to some 275 million people across more than 9,000 educational institutions (both colleges and K-12 schools).

“Higher education has long been a target of ransomware gangs and data extortion attacks,” Wired’s Lily Hay Newman and Andy Greenberg write. “But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States.”

That is shocking. And no, I don’t mean the size and implications of this massive data breach – although yes, sure, of course that’s bad. I mean that this is the story: “the LMS is down. We can’t do school.”

What does it say about education that this particular piece of software – one that was once utterly reviled by students and teachers alike – is now not only ubiquitous across all grades and all levels, but is viewed as essential to its operation?!

Indeed, for all the handwringing about the cognitive dependencies that “AI” might elicit, it seems as though education has already acquiesced to technologies that have cultivated precisely that: the productivity software way of thinking that has convinced people that their work must be orchestrated through some awful user interface. That there is no other way to access information, to complete assignments, to assess work, to talk to one another, without it.

The LMS isn’t necessary. It never has been. But it has sold itself to schools with the promise of convenience and – it’s right there in the name – “management.” In the decades since its introduction, the LMS has reshaped education to suit its design. Assignments, assessments, discussion – these have all been bent to fit the software. Teachers’ work and students’ work – and their ideas about what academic work looks like – have been bent to fit the software.


No matter the course – Intro to Biochemistry or Beowulf – the interface for a course is the same. No matter the course, no matter the school. Everything has been standardized – that's the goal at least; everything, everywhere is interchangeable. It's all just "content." Everything – except the LMS, I suppose – is replaceable. Everything "cognitive" soon to be automatable. Such was the promise of that recent “AI” agent that boasted it could cheat its way through any course on Canvas. And such is the story peddled in a recent (terribly stupid) New Yorker piece that argued that thanks to “AI”, college will only exist in the future as “a website or app.” Let’s just hope someone can keep that website online.

If college only exists in the LMS, if it cannot exist without the LMS, this means we have abandoned education as anything other than education technology. It means we have abandoned the physical spaces of learning: the campus, the classroom, the professor’s office, the student center, and so importantly, the library. It means we have abandoned the place; it means we have abandoned the people.


Although some of the features of the LMS predate its development – see Brian Dear's book on PLATO, The Friendly Orange Glow – the product appeared in the 1990s, and it has actually changed very little since then.

The LMS was – and remains – an online portal to a closed system. And much like other portals of the late 1990s, most famously AOL, the LMS was designed to keep schools and students “safe” from the open Web – that is, to keep unauthorized people from accessing the content and to keep authorized people (tuition-paying students, namely) inside its walls. The LMS served as students’ and faculty’s interface to the data contained in the student information system, that massive database that housed the university academic bureaucracy – registration, course enrollment, schedules, grades, room assignments, transcripts, and so on. If the SIS was the mechanism for controlling the instructional bureaucracy, the LMS was how student and faculty labor could be managed.

And perhaps because being managed in this way is quite antithetical to the culture of academia, the LMS was long eschewed by faculty. It was disliked by students too – neither group much appreciated being told to “go online” to do their work. Everyone had to be compelled to use it. It didn’t help that, for decades, the interface seemed largely unchanged, reinforcing the stereotype that education technology sucks, that it is outmoded in both its design and engineering.

When Instructure, the maker of the Canvas LMS, launched in 2011, it promised something different. Notably, news of the company first ran in TechCrunch, signaling that this was a software company aligned with Silicon Valley and its “software-as-a-service” trend – something quite different from the older, legacy companies like Blackboard in look and feel (for the user) and control (for campus IT). Of course, the functionality was mostly the same – such was the pitch that Instructure had to make to schools: although it was hosted in the cloud (that was new), it promised it could replace Blackboard or Desire2Learn or Moodle and do all the things that professors (begrudgingly) and students (also begrudgingly) could do in the old LMS.

The LMS has centralized the bureaucracy of teaching and learning, and importantly, schools have outsourced this task -- one of their key functions, arguably -- and as such, they have created their own dependency on companies, on technologies, on "expertise" beyond their control.

Who is the LMS for? One can answer that by looking at the phrase itself: learning management system. The LMS is not for teachers; it is not for students – although both groups have been framed, in a way, as “managers,” encouraged to see their own work as a series of tasks that need to be governed, monitored, controlled, optimized.

The LMS has foreclosed learning, in no small part by wrapping it in what I've called elsewhere the "software way of thinking," in which the product is prioritized, and the process of learning dismissed or ignored. But it's also a foreclosure of growth and possibility, as the LMS has become a central piece of the larger educational infrastructure that has embraced surveillance: everything that a student and a teacher does is monitored, measured.

You must log in. You must post. (You needn't download. You could but why. Everything's here.) You must sit in front of the screen. You must click. You must hit "submit." This is learning. This is how learning is managed.

The LMS strips agency from students and teachers and staff alike, forcing them into the template of what the software engineers have coded, into the template of what the LMS companies have decided pedagogy looks like. It's all akin to walking into a classroom where the chairs and desks are bolted to the floor – students and teachers alike know this experience: you are stuck with the architecture of someone else's design, and importantly stuck with a mode of teaching that places the instructor quite literally at the front and center and, as such, that lends itself to certain kinds of instruction.

A classroom should be a space for shared world-building, not just knowledge-building, one in which a participatory culture is encouraged. But the LMS is a “management system.” There is no participation except in the ways the software allows. There is no social contract to be negotiated. Contracts are something between the LMS company and the school’s IT department, and this tells you much about who (or what), in fact, the LMS is for.

The learning management system is the platform upon which other software services have been integrated: learning analytics, assessment surveillance, plagiarism detection, autograders, and now "AI." As Nick Srnicek argues in Platform Capitalism, "the platform has emerged as a new business model, capable of extracting and controlling immense amounts of data, and with this shift we have seen the rise of large monopolistic firms." That massive store of data makes platforms like the LMS an obvious target for hackers. But that massive shift of power has served to destabilize educational institutions as well, bending them to suit the extractive processes that govern (and benefit) software companies rather than the generative practices that benefit teaching, learning, research, and importantly, care for community, academic or otherwise.

Who is the LMS for? At the end of the day, like all platforms, the LMS exists for itself.

What a failure of imagination to act as though an LMS outage means the end of education. Perhaps it means the beginning. Perhaps the outage signals the need to reboot, not reboot the machine, not reboot with a new or different machine. Good grief. Perhaps it can mean a restart, a revision of education in which joy and desire and curiosity – all those utterly unquantifiable elements of life and love and learning that the LMS can never capture – are actually made the priority, and not merely those elements that fit neatly in the LMS story of convenience, expedience, and analytics.


Scaly-breasted honeyguide (Image credits)

Today’s bird is the honeyguide, yet another bird species that’s a brood parasite – they lay their eggs in the nests of other kinds of birds, which brood and raise the young. These honeyguide babies then eject the host chicks from the nest, killing them. Yes, I’m making an analogy here. The honeyguide, unlike the similarly named honey badger, actually does lead humans to beehives. To the honeypot. Another analogy. You’re welcome.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as your financial support makes this work possible.

Read the whole story
mrmarchant
21 hours ago

Taken: this is a web page that shows how much data your...


Taken: this is a web page that shows how much data your browser can collect that websites can use to “fingerprint” your device, even without cookies. “It identified your device with enough specificity to distinguish it from most others on the internet.”

Read the whole story
mrmarchant
21 hours ago

How Unknowable Math Can Help Hide Secrets


Mathematicians spend most of their time thinking about what’s knowable. But the unknowable can be just as compelling. Perhaps the most famous example comes from a theorem by the logician Kurt Gödel. Gödel’s celebrated result — one of two “incompleteness theorems” he published in 1931 — established that for any reasonable set of basic mathematical assumptions, called axioms, it’s impossible to…




Read the whole story
mrmarchant
22 hours ago

You Need AI That Reduces Maintenance Costs


I’ll get straight to the point: your AI coding agent, the one you use to write code, needs to reduce your maintenance costs. Not by a little bit, either. You write code twice as fast now? Better hope you’ve halved your maintenance costs. Three times as productive? One-third the maintenance costs. Otherwise, you’re screwed. You’re trading a temporary speed boost for permanent indenture.

Oh, you want to know why? Sure. Let’s go for a drive. On a dark desert highway...

Productivity is Determined by Maintenance Costs

Every line of code you write has to be maintained: bug fixes, cleanup, dependency upgrades, and so forth. I’m not talking about new features or enhancements. Just maintenance. For every month you spend writing code, you’ll spend some amount of time in the following year maintaining that code, and some in each year after that, forever, as long as that code exists.

Let’s say you asked a crowd of 50 developers what those maintenance costs were. Using a technique called Wisdom of the Crowd, you could get a reasonably accurate response.1

1. You’re welcome to conduct your own wisdom-of-the-crowd survey! But it turns out that the specific numbers don’t matter for the overall point I’m making here.

Your crowd might tell you that, for each month you spend writing code, you’ll spend...

  • 10 days on maintenance in the first year; and

  • 5 days on maintenance each year after that.

If you were a particularly obsessive individual, you could spend hours making a spreadsheet modeling how those estimates affect productivity over time. A spreadsheet like this.
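The spreadsheet’s logic can be sketched in a few lines of code. This is my own minimal reconstruction of the model, not the author’s actual spreadsheet: it assumes roughly 20 working days per month and applies the crowd’s estimates (10 maintenance days spread over a shipped month’s first year, 5 days per year forever after), so the exact crossing month won’t match the graph precisely.

```python
DAYS_PER_MONTH = 20          # assumed working days in a month
FIRST_YEAR_RATE = 10 / 12    # maintenance days/month during code's first year
AFTER_RATE = 5 / 12          # maintenance days/month every year after that

def simulate(months, prod_mult=1.0, maint_mult=1.0):
    """Fraction of each month spent on value-add work, month by month."""
    code = []       # month-equivalents of code shipped in each past month
    fractions = []
    for _ in range(months):
        # Sum the maintenance owed this month on everything shipped so far.
        maint_days = 0.0
        for age, amount in enumerate(reversed(code), start=1):
            rate = FIRST_YEAR_RATE if age <= 12 else AFTER_RATE
            maint_days += amount * rate * maint_mult
        frac = max(0.0, 1.0 - maint_days / DAYS_PER_MONTH)
        fractions.append(frac)
        code.append(frac * prod_mult)   # whatever time is left ships new code
    return fractions

fractions = simulate(120)
first_below_half = next(i for i, f in enumerate(fractions, start=1) if f < 0.5)
print(f"month 12: {fractions[11]:.0%}; first month below 50%: {first_below_half}")
```

Re-running `simulate` with `prod_mult=2, maint_mult=2` gives the “Rock Lobster” scenario discussed later, and `maint_mult=0.5` the halved-maintenance line.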

A graph showing the effects of maintenance costs on a project over time. The horizontal axis shows months, from zero to 120, and the vertical axis shows the percent of time spent on value-add work, from zero to 100. A thick blue line on the graph, labelled “normal,” starts at 100% and quickly drops down to about 65% in the first 12 months, then gradually drops to about 12.5% over the remaining 11 years. Two other lines follow a similar trajectory: a dashed yellow line, labelled “half maint,” ends at about 35%. A dashed red line, labelled “double maint,” ends at about 5%. Each line is marked at the point where it crosses 50% with a note that says “Time to <50% productivity.” For the “normal” line, it occurs at 31 months. For “half maint,” it occurs at 68 months. For “double maint,” it occurs at 10 months.

The first month of a new project is glorious. You spend all your time building fancy new features.

The next month is slightly less glorious. A fraction of your time—not much, but a smidge—goes to fixing bugs and cleaning up design mistakes from the first month. In the third month, a smidge more. And the fourth month, the fifth, the sixth...

Eventually, it’s not glorious at all. According to our crowd’s maintenance estimates, you’ll spend more than half your time on maintenance after 2½ years. After ten years, you can hardly do anything else.

Halving the crowd’s maintenance estimates gives you three more years before you hit the 50% mark. Doubling them sees you below 50% in less than a year.

The lesson is clear. If you want a productive team, you have to focus on their maintenance costs.

All Models Are Wrong

Do these numbers ring true to you? They do to me. In my career as a consultant, I specialized in late-stage startups, and they all had the exact problem shown in the graph above. About 5-9 years in, they’d notice their teams were no longer getting shit done, and then they’d call me.

Their teams weren’t quite as bad as the graph shows. Maybe their maintenance costs were lower. Or maybe... and this feels more likely to me... their maintenance costs were exactly that bad, and they papered over the problem instead. Maybe they:

  • Decided not to fix every bug, or upgrade every dependency

  • Added people when the team got slow... and then kept adding more, because it was never enough

  • Scrapped it all and started over with a rewrite

There’s room to debate the precise maintenance numbers, but overall, the model feels right. If you’ve been around the block, you know this graph is true. You’ve seen how productivity melts away over time. You have the scars.

What Does This Have to Do With AI?

Only everything.

Let’s say your team just started using Rock Lobster, the latest and greatest agentic coding framework, and it Doubles!! your code output! Woohoo! The code’s a bit harder to understand, though, and your team is drowning in pull requests, and you maybe kinda sorta teensy weensy don’t actually read the code before smashing the approve button. Like, at all. I mean, you skimmed it, during boring meetings, sometimes, and that’s gotta be good enough, right? LGTM, let’s get this shit done!

So now you’re producing two months of work in a month, and let’s say you’ve doubled how much each “month” of output costs to maintain. Next month’s maintenance costs quadruple.

The same graph as before, but only showing the thick blue “normal” line. Overlaid on that line is a thin red line labelled “AI Doubles Prod and Maint.” At the 36 month mark, it rockets up to about 85% productivity, to a peak labelled “AI provides massive short term benefit.” Then it rapidly falls below the pre-AI productivity level, with a label that says “Gains erased after 5 months.” Over the next 12 months, it drops to about 10% lower than the blue “normal” line and stays there. A label says “Permanent long-term penalty.”

Oh.

About five months after you start using Rock Lobster, your productivity is back down to where you started, and a few months after that, it’s worse than it would have been had you never touched Rock Lobster in the first place.

I’m not saying your AI doubles maintenance costs. Or productivity. This is an extreme example. But even if your AI produces code that’s just as easy to maintain as your human-written code, the productivity gains don’t last.

A new version of the previous graph, with the same thick blue “normal” line. This time, the thin red line is labelled “AI Doubles Prod, Normal Maint.” At 36 months, it rockets up to about 85% like before, but this time it falls more slowly. It falls below the pre-AI productivity level at month 55, with a label that says “Gains erased after 19 months.” It continues to fall a bit more rapidly than the blue line, crossing over at month 86 with a label that says “Net negative after 40 months.” It ends a few percentage points below the blue line.

You Can Check Out Any Time You Like2

2. But you can never leave.

Agents are expensive, and they’re only getting more so. Once your agent’s juice is no longer worth the squeeze, you might decide to save your pennies and go back to coding the old way. Like a caveman. With your fingers.

Ha! Joke’s on you! When you stop using the agent, all the productivity benefit goes away... but the added maintenance costs don’t! As long as that code’s still around, you’re stuck with lower productivity than if you had never touched the agent at all.

A repeat of the graph that showed AI doubling productivity and maintenance costs. The thin red line from the previous graph, labelled “AI Doubles Prod and Maint,” is now a dotted red line. A new yellow line is labelled “AI Doubles Prod and Maint, Removed.” The thick blue line is still present and labelled “Normal.” The yellow line follows the trajectory of the red line, with the 36-month jump in productivity labelled “AI introduced.” As before, the line falls rapidly over the next six months. But at month 60, the yellow line diverges from the red line. It falls even more rapidly, losing about 10% more than the red line. This point is labelled “AI removed.” The yellow line recovers a bit, then loses ground more slowly than the red and blue lines, ending up about 5% better than the red line and 5% worse than the blue line.

The Passage Back

The math only works if the LLM decreases your maintenance costs, and by exactly the inverse of the rate it adds code. If you double your output and your cost of maintaining that output, two times two means you’ve quadrupled your maintenance costs. If you double your output and hold your maintenance costs steady, two times one means you’ve still doubled your maintenance costs.

Instead, you have to invert your productivity. If you’re producing twice as much code, you need code that costs half as much to maintain. Three times as much code, one third the maintenance.
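The inversion rule above is just multiplication: your ongoing maintenance load scales with the product of how much code you ship and its per-unit maintenance cost. A trivial check, using hypothetical multiplier values:

```python
# Ongoing maintenance load scales with
# (code output multiplier) x (per-unit maintenance cost multiplier).
def maintenance_load(output_mult, cost_mult):
    return output_mult * cost_mult

assert maintenance_load(2, 2) == 4     # double output, double cost: 4x load
assert maintenance_load(2, 1) == 2     # double output, same cost: still 2x
assert maintenance_load(2, 0.5) == 1   # double output, half cost: break even
```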

This is the secret to success. All the benefits, none of the lock-in.

This graph shows the same thick blue “normal” line as the others. This time, though, the thin red line doesn’t fall below the blue line. The red line is labelled “AI Doubles Prod, Halves Maint, Removed.” At the 36 month mark, it jumps up to about 85% productivity, as in the other graphs, at a point that’s labelled “AI introduced.” Then it stays well above the blue line, falling on a similar, but slightly steeper curve. At the 84 month mark, it falls back down to exactly track the blue line at a point that’s labelled “AI removed.”

Can We Kill the Beast?

I dunno. All my reading of the finest news sources says that coding agents increase maintenance costs. Some people do say they help them understand large systems better. But big decreases in costs, of the size we need to see? No. Just the opposite.

That’s a problem. The model isn’t a perfect representation of reality, but the overall message is right. You need AI that reduces your maintenance costs, and in proportion to the speed boost you get from new code. Without it, you’re screwed. You’re trading a temporary speed boost for permanent indenture.

So, yeah, go ahead, chase improvements to your coding speed. But spend just as much time chasing improvements to your maintenance costs. Or you, too, will be trapped in Hotel California.

Such a lovely place.

Such a lovely face.

As much as it might seem like it, this isn’t meant to be an anti-AI rant. There are other levers to pull, such as AI that makes maintenance itself more productive, even if it doesn’t make the code more maintainable. I encourage you to copy the spreadsheet and play with all the levers in the model. See what happens when you change the assumptions to match your real-world situation.

Read the whole story
mrmarchant
22 hours ago

Your AI Use Is Breaking My Brain


A few years ago, while I was covering the rise of AI slop on Facebook, I asked my friends and family if they were getting AI spam fed into their timelines and if they could send me examples. A handful of them responded, sending me obviously AI-generated science fiction scenescapes, shrimp Jesus, and forlorn, starving children begging for sympathy. But a few of my friends sent me images that they thought were AI but were not. Their mental guard was up to the point where they were looking at human-made art and photos and thought it safer to dismiss them as AI rather than be fooled by them.

To browse the internet today, to consume any sort of content at all, is to be bombarded with AI of all sorts. People think things that are fake are real, things that are real are fake. Much has been written about “AI psychosis,” the nonspecific, nonscientific diagnosis given to people who have lost themselves to AI. Less has been said about the cognitive load of what other people’s AI use is doing to the rest of us, and the insidious nature of having to navigate an internet and a world where lazy AI has infiltrated everything. Our brains are now performing untold numbers of calculations per day: Is this AI? Do I care if it’s AI? Why does this sound or look or read so weird? Does this person just write like this? Is this a person at all? 

I see AI content where I’m conditioned to expect and ignore it: in Google’s “AI Overviews” that famously told us to put glue on pizza, in engagement-bait LinkedIn posts, and throughout our Facebook and Instagram feeds. But increasingly I have the feeling that it’s everywhere, coming from all directions, completely unavoidable. It’s not exactly that I have a revulsion to AI-assisted content or don’t want to get fooled by it. It’s that something is happening where my brain has become the AI police because everything feels incredibly uncanny. I will be going about my day reading, watching, or listening to something and, suddenly, I notice that something is wildly off. Quite simply, I feel like I’m going nuts.

An example: Last week, in a desperate attempt to avoid yet another take on the White House Correspondents Dinner shooting, I was listening to an episode of Everyone’s Talkin’ Money, a podcast about taxes (yikes) that I’ve been listening to off and on for years. It has a human host named Shari Rash and hundreds of episodes. Rash started reading the intro script: “The shift I want you to make today—and this is the shift that changes everything—is starting to see your tax return as information—not a bill, not a badge of shame, but information.” The script went on and on and on like this, AI writing trope after AI writing trope. My brain shut down, stopped paying attention to the script, and started wondering: Was Rash using AI just for the intro script? What about the research? Did she edit the script at all? I turned the podcast off.

Later that day, I was scrolling the Orioles Hangout forums, a small community of diehards obsessed with the Baltimore Orioles that I have been lurking on for decades. Until recently, it had been one of the few places on the internet that I could safely assume was not full of AI. Except now, it is. The site’s administrator has started using AI to analyze player performance and to help him write some of his posts. To his credit, he explains how he’s using AI and prefaces these posts by noting they are AI-assisted analysis. Some of them are interesting. But now, most days I’m browsing the forums, I will see arguments between posters who have been there for years that seem overly generic or don’t really make sense. One recent post arguing about the timetable for an injured player’s return suggested a ludicrously long recovery. One poster pointed this out: “You said 10-18 months and I said it won’t take that long for a position player.” The poster responded: “You’re right I did. The 10-18 months was an AI generated answer … consider it a small cautionary tale about trusting AI and another on the benefits of seeking out actual medical research on questions like this.” Every day I now scroll the forum and see people noting that they plugged something into ChatGPT or Gemini and have copy-pasted the answers for other people to see. In this 30-year-old community of human beings discussing sports, AI is unavoidable.

It is, of course, not just me. Friends send me screenshots of texts they’ve gotten from people they’ve started dating, wondering if they’re using ChatGPT to flirt. I’ve gotten obviously AI-generated apologies or excuses from people trying to bail on a social engagement. I’ve been to weddings where the speeches felt—and were—partially AI-generated. 

A recent Pew poll showed that people believe it is important to be able to tell whether an image, video, or piece of writing was AI-generated, AI-assisted, or written by a human. And it showed that a majority of people do not believe that they are able to tell the difference between AI-generated works and human-made works. Studies have repeatedly shown that humans judge AI-generated art and writing more harshly than human works, and a study published in the Journal of Experimental Psychology found that when people know or perceive a piece of writing to be AI-generated, the negative judgment is “stubbornly difficult to mitigate” and “remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and different types of written content.” Put simply, it is not just me who hates AI writing or finds it annoying. Even if AI writing can be “fine,” it very often feels bland, weird, formulaic. The writer Eve Fairbanks wrote a thread the other day that I thought more or less nailed it: “The tell for AI isn’t rhythm, wording, or fact errors. It’s that problems with *all these elements* exist equally & at once.”

Read the whole story
mrmarchant
22 hours ago