
That Damned LMS Dependency


For the second time in less than a year, a major disruption to the learning management system Canvas has prompted cries (and headlines) that “we can’t do school!”

In the fall of 2025, an Amazon Web Services outage rendered the LMS inaccessible. At the time, Wired spoke to dozens of affected students who complained that “the Canvas outage threw off their schedules, preventing them from not just submitting and viewing assignments but also from participating in class activities, contacting professors, and accessing the textbooks and other materials they need to study.”

This week’s news was even worse, in terms of timing (finals!) and impact – “‘The Biggest Student Data Privacy Disaster in History,’” read the 404 Media headline. The LMS was taken offline by the hacking group ShinyHunters, which demanded the company pay a ransom or face a data leak -- data, including names, email addresses, and messages, belonging to some 275 million people across more than 9,000 educational institutions (both colleges and K-12 schools).

“Higher education has long been a target of ransomware gangs and data extortion attacks,” Wired’s Lily Hay Newman and Andy Greenberg write. “But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States.”

That is shocking. And no, I don’t mean the size and implications of this massive data breach – although yes, sure, of course that’s bad. I mean that this is the story: “the LMS is down. We can’t do school.”

What does it say about education that this particular piece of software – one that was once utterly reviled by students and teachers alike – is now not only ubiquitous across all grades and all levels, but is viewed as essential to its operation?!

Indeed, for all the handwringing about the cognitive dependencies that “AI” might elicit, it seems as though education has already acquiesced to technologies that have cultivated precisely that: the productivity software way of thinking that has convinced people that their work must be orchestrated through some awful user interface. That there is no other way to access information, to complete assignments, to assess work, to talk to one another, without it.

The LMS isn’t necessary. It never has been. But it has sold itself to schools with the promise of convenience and – it’s right there in the name – “management.” In the decades since its introduction, the LMS has reshaped education to suit its design. Assignments, assessments, discussion – these have all been bent to fit the software. Teachers' work and students' work – and their ideas about what academic work looks like – have been bent to fit the software.


No matter the course – Intro to Biochemistry or Beowulf – the interface for a course is the same. No matter the course, no matter the school. Everything has been standardized – that's the goal at least; everything, everywhere is interchangeable. It's all just "content." Everything – except the LMS, I suppose – is replaceable. Everything "cognitive" soon to be automatable. Such was the promise of that recent “AI” agent that boasted it could cheat its way through any course on Canvas. And such is the story peddled in a recent (terribly stupid) New Yorker piece that argued that thanks to “AI”, college will only exist in the future as “a website or app.” Let’s just hope someone can keep that website online.

If college only exists in the LMS, if it cannot exist without the LMS, this means we have abandoned education as anything other than education technology. It means we have abandoned the physical spaces of learning: the campus, the classroom, the professor’s office, the student center, and so importantly, the library. It means we have abandoned the place; it means we have abandoned the people.


Although some of the features of the LMS predate its development – see Brian Dear's book on PLATO, The Friendly Orange Glow – the product appeared in the 1990s, and it has actually changed very little since then.

The LMS was -- and remains -- an online portal to a closed system. And much like other portals of the late 1990s, most famously AOL, the LMS was designed to keep schools and students “safe” from the open Web -- that is, to keep unauthorized people from accessing the content and to keep authorized people (tuition-paying students, namely) inside its walls. The LMS served as students' and faculty's interface to the data contained in the student information system, that massive database that housed the university academic bureaucracy -- registration, course enrollment, schedules, grades, room assignments, transcripts, and so on. If the SIS was the mechanism for controlling the instructional bureaucracy, the LMS was how student and faculty labor could be managed.

And perhaps because being managed in this way is quite antithetical to the culture of academia, the LMS was long eschewed by faculty. It was disliked by students too -- neither group much appreciated being told to "go online" to do their work. Everyone had to be compelled to use it. It didn't help that, for decades, the interface seemed largely unchanged, reinforcing the stereotype that education technology sucks, that it is outmoded in both its design and engineering.

When Instructure, the maker of the Canvas LMS, launched in 2011, it promised something different. Notably, news of the company first ran in TechCrunch, signaling this was a software company aligned with Silicon Valley and its "software-as-a-service" trend -- something quite different from the older, legacy companies like Blackboard in look and feel (for the user) and control (for campus IT). Of course, the functionality was mostly the same -- such was the pitch that Instructure had to make to schools: although it was hosted in the cloud (that was new), it promised it could replace Blackboard or Desire2Learn or Moodle and do all the things that professors (begrudgingly) and students (also begrudgingly) could do in the old LMS.

The LMS has centralized the bureaucracy of teaching and learning, and importantly, schools have outsourced this task -- one of their key functions, arguably -- and as such, they have created their own dependency on companies, on technologies, on "expertise" beyond their control.

Who is the LMS for? One can answer that by looking at the phrase itself: learning management system. The LMS is not for teachers; it is not for students -- although both these groups have been framed in a way as “managers,” encouraged to see their own work as a series of tasks that need to be governed, monitored, controlled, optimized.

The LMS has foreclosed learning, in no small part by wrapping it in what I've called elsewhere the "software way of thinking," in which the product is prioritized, and the process of learning dismissed or ignored. But it's also a foreclosure of growth and possibility, as the LMS has become a central piece of the larger educational infrastructure that has embraced surveillance: everything that a student and a teacher does is monitored, measured.

You must log in. You must post. (You needn't download. You could but why. Everything's here.) You must sit in front of the screen. You must click. You must hit "submit." This is learning. This is how learning is managed.

The LMS strips agency from students and teachers and staff alike, forcing them into the template of what the software engineers have coded, into the template of what the LMS companies have decided pedagogy looks like. It's all akin to walking into a classroom where the chairs and desks are bolted to the floor – students and teachers alike know this experience: you are stuck with the architecture of someone else's design, and importantly stuck with a mode of teaching that places the instructor quite literally at the front and center and, as such, that lends itself to certain kinds of instruction.

A classroom should be a space for shared world-building, not just knowledge-building – one in which a participatory culture is encouraged. But the LMS is a "management system." There is no participation except in the ways the software allows. There is no social contract to be negotiated. Contracts are something between the LMS company and the school's IT department, and this tells you much about who (or what), in fact, the LMS is for.

The learning management system is the platform upon which other software services have been integrated: learning analytics, assessment surveillance, plagiarism detection, autograders, and now "AI." As Nick Srnicek argues in Platform Capitalism, "the platform has emerged as a new business model, capable of extracting and controlling immense amounts of data, and with this shift we have seen the rise of large monopolistic firms." That massive store of data makes platforms like the LMS an obvious target for hackers. But that massive shift of power has served to destabilize educational institutions as well, bending them to suit the extractive processes that govern (and benefit) software companies rather than the generative practices that benefit teaching, learning, research, and importantly, care for community, academic or otherwise.

Who is the LMS for? At the end of the day, like all platforms, the LMS exists for itself.

What a failure of imagination to act as though an LMS outage means the end of education. Perhaps it means the beginning. Perhaps the outage signals the need to reboot, not reboot the machine, not reboot with a new or different machine. Good grief. Perhaps it can mean a restart, a revision of education in which joy and desire and curiosity – all those utterly unquantifiable elements of life and love and learning that the LMS can never capture – are actually made the priority, and not merely those elements that fit neatly in the LMS story of convenience, expedience, and analytics.


Scaly-breasted honeyguide (Image credits)

Today's bird is the honeyguide, yet another bird species that's a brood parasite – they lay their eggs in the nests of other kinds of birds, which brood and raise the young. The honeyguide babies then eject the host chicks from the nest, killing them. Yes, I'm making an analogy here. The honeyguide, unlike the similarly named honey badger, actually does lead humans to beehives. To the honeypot. Another analogy. You're welcome.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as your financial support makes this work possible.


Taken: this is a web page that shows how much data your...


Taken: this is a web page that shows how much data your browser can collect that websites can use to “fingerprint” your device, even without cookies. “It identified your device with enough specificity to distinguish it from most others on the internet.”


How Unknowable Math Can Help Hide Secrets


Mathematicians spend most of their time thinking about what’s knowable. But the unknowable can be just as compelling. Perhaps the most famous example comes from a theorem by the logician Kurt Gödel. Gödel’s celebrated result — one of two “incompleteness theorems” he published in 1931 — established that for any reasonable set of basic mathematical assumptions, called axioms, it’s impossible to…


You Need AI That Reduces Maintenance Costs


I’ll get straight to the point: your AI coding agent, the one you use to write code, needs to reduce your maintenance costs. Not by a little bit, either. You write code twice as quick now? Better hope you’ve halved your maintenance costs. Three times as productive? One third the maintenance costs. Otherwise, you’re screwed. You’re trading a temporary speed boost for permanent indenture.

Oh, you want to know why? Sure. Let’s go for a drive. On a dark desert highway...

Productivity is Determined by Maintenance Costs

Every line of code you write has to be maintained: bug fixes, cleanup, dependency upgrades, and so forth. I’m not talking about new features or enhancements. Just maintenance. For every month you spend writing code, you’ll spend some amount of time in the following year maintaining that code, and some in each year after that, forever, as long as that code exists.

Let’s say you asked a crowd of 50 developers what those maintenance costs were. Using a technique called Wisdom of the Crowd, you could get a reasonably accurate response.[1]

[1] You’re welcome to conduct your own wisdom-of-the-crowd survey! But it turns out that the specific numbers don’t matter for the overall point I’m making here.

Your crowd might tell you that, for each month you spend writing code, you’ll spend...

  • 10 days on maintenance in the first year; and

  • 5 days on maintenance each year after that.

If you were a particularly obsessive individual, you could spend hours making a spreadsheet modeling how those estimates affect productivity over time. A spreadsheet like this.

A graph showing the effects of maintenance costs on a project over time. The horizontal axis shows months, from zero to 120, and the vertical axis shows the percent of time spent on value-add work, from zero to 100. A thick blue line on the graph, labelled “normal,” starts at 100% and quickly drops down to about 65% in the first 12 months, then gradually drops to about 12.5% over the remaining 11 years. Two other lines follow a similar trajectory: a dashed yellow line, labelled “half maint,” ends at about 35%. A dashed red line, labelled “double maint,” ends at about 5%. Each line is marked at the point where it crosses 50% with a note that says “Time to <50% productivity.” For the “normal” line, it occurs at 31 months. For “half maint,” it occurs at 68 months. For “double maint,” it occurs at 10 months.
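
If you’d rather poke at the model in code than in a spreadsheet, here’s a minimal Python sketch of it. Everything in it is an assumption pulled from the numbers above -- a 21-working-day month, 10 maintenance days during a code-month’s first year, 5 days per year thereafter -- so treat the outputs as shapes, not forecasts.

```python
WORK_DAYS = 21.0        # working days in a month (assumption)
FIRST_YEAR = 10.0 / 12  # maintenance days/month owed by code under a year old
AFTER_THAT = 5.0 / 12   # maintenance days/month owed by older code

def simulate(months=120, prod=lambda m: 1.0, maint=lambda m: 1.0):
    """Monthly output, in units of one clean, maintenance-free month of work.

    prod(m)  -- output multiplier in month m (e.g. 2.0 once an agent is adopted)
    maint(m) -- per-unit maintenance multiplier for code *written* in month m
    With prod == 1, output also equals the fraction of time on value-add work.
    """
    written = []  # (output produced that month, its maintenance multiplier)
    output = []
    for m in range(months):
        owed = 0.0  # maintenance days owed this month by all existing code
        for age, (amount, mult) in enumerate(reversed(written), start=1):
            rate = FIRST_YEAR if age <= 12 else AFTER_THAT
            owed += amount * mult * rate
        free = max(WORK_DAYS - owed, 0.0) / WORK_DAYS
        out = free * prod(m)
        output.append(out)
        written.append((out, maint(m)))
    return output

normal = simulate()
print(f"month 12: {normal[12]:.0%}   month 120: {normal[119]:.0%}")
```

The exact curve depends on those day counts, but the shape (an early cliff, then a long slide) falls out of any numbers in this neighborhood.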

The first month of a new project is glorious. You spend all your time building fancy new features.

The next month is slightly less glorious. A fraction of your time—not much, but a smidge—goes to fixing bugs and cleaning up design mistakes from the first month. In the third month, a smidge more. And the fourth month, the fifth, the sixth...

Eventually, it’s not glorious at all. According to our crowd’s maintenance estimates, you’ll spend more than half your time on maintenance after 2½ years. After ten years, you can hardly do anything else.

Halving the crowd’s maintenance estimates gives you three more years before you hit the 50% mark. Doubling them sees you below 50% in less than a year.
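
With the simulate sketch above, those 50% crossings are a few lines. months_to_half is a hypothetical helper, and the exact months it prints depend entirely on the day-count assumptions:

```python
def months_to_half(curve):
    # first month where output falls below half of a clean month (None if never)
    return next((m for m, v in enumerate(curve) if v < 0.5), None)

print(months_to_half(simulate()))                     # "normal"
print(months_to_half(simulate(maint=lambda m: 0.5)))  # "half maint"
print(months_to_half(simulate(maint=lambda m: 2.0)))  # "double maint"
```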

The lesson is clear. If you want a productive team, you have to focus on their maintenance costs.

All Models Are Wrong

Do these numbers ring true to you? They do to me. In my career as a consultant, I specialized in late-stage startups, and they all had the exact problem shown in the graph above. About 5-9 years in, they’d notice their teams were no longer getting shit done, and then they’d call me.

Their teams weren’t quite as bad as the graph shows. Maybe their maintenance costs were lower. Or maybe... and this feels more likely to me... their maintenance costs were exactly that bad, and they papered over the problem instead. Maybe they:

  • Decided not to fix every bug, or upgrade every dependency

  • Added people when the team got slow... and then kept adding more, because it was never enough

  • Scrapped it all and started over with a rewrite

There’s room to debate the precise maintenance numbers, but overall, the model feels right. If you’ve been around the block, you know this graph is true. You’ve seen how productivity melts away over time. You have the scars.

What Does This Have to Do With AI?

Only everything.

Let’s say your team just started using Rock Lobster, the latest and greatest agentic coding framework, and it Doubles!! your code output! Woohoo! The code’s a bit harder to understand, though, and your team is drowning in pull requests, and you maybe kinda sorta teensy weensy don’t actually read the code before smashing the approve button. Like, at all. I mean, you skimmed it, during boring meetings, sometimes, and that’s gotta be good enough, right? LGTM, let’s get this shit done!

So now you’re producing two months of work in a month, and let’s say you’ve doubled how much each “month” of output costs to maintain. Next month’s maintenance costs quadruple.

The same graph as before, but only showing the thick blue “normal” line. Overlayed on that line is a thin red line labelled “AI Doubles Prod and Maint.” At the 36 month mark, it rockets up to about 85% productivity, to a peak labelled “AI provides massive short term benefit.” Then it rapidly falls below the pre-AI productivity level, with a label that says “Gains erased after 5 months.” Over the next 12 months, it drops to about 10% lower than the blue “normal line” and stays there. A label says “Permanent long-term penalty.”
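
In the terms of the earlier sketch, this scenario is just a pair of time-varying multipliers. Month 36 as the adoption point is an assumption chosen to match the graph:

```python
ai = lambda m: 2.0 if m >= 36 else 1.0  # agent adopted at month 36 (assumption)
boosted = simulate(prod=ai, maint=ai)   # 2x the output, 2x the upkeep per unit
# output spikes at adoption, then the quadrupled maintenance bill comes due
print(f"month 36: {boosted[36]:.0%}   month 48: {boosted[48]:.0%}")
```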

Oh.

About five months after you start using Rock Lobster, your productivity is back down to where you started, and a few months after that, it’s worse than it would have been had you never touched Rock Lobster in the first place.

I’m not saying your AI doubles maintenance costs. Or productivity. This is an extreme example. But even if your AI produces code that’s just as easy to maintain as your human-written code, the productivity gains don’t last.

A new version of the previous graph, with the same thick blue “normal” line. This time, the thin red line is labelled “AI Doubles Prod, Normal Maint.” At 36 months, it rockets up to about 85% like before, but this time it falls more slowly. It falls below the pre-AI productivity level at month 55, with a label that says “Gains erased after 19 months.” It continues to fall a bit more rapidly than the blue line, crossing over at month 86 with a label that says “Net negative after 40 months.” It ends a few percentage points below the blue line.

You Can Check Out Any Time You Like[2]

[2] But you can never leave.

Agents are expensive, and they’re only getting more so. Once your agent’s juice is no longer worth the squeeze, you might decide to save your pennies and go back to coding the old way. Like a caveman. With your fingers.

Ha! Joke’s on you! When you stop using the agent, all the productivity benefit goes away... but the added maintenance costs don’t! As long as that code’s still around, you’re stuck with lower productivity than if you had never touched the agent at all.

A repeat of the graph that showed AI doubling productivity and maintenance costs. The thin red line from the previous graph, labelled “AI Doubles Prod and Maint,” is now a dotted red line. A new yellow line is labelled “AI Doubles Prod and Maint, Removed.” The thick blue line is still present and labelled “Normal.” The yellow line follows the trajectory of the red line, with the 36-month jump in productivity labelled “AI introduced.” As before, the line falls rapidly over the next six months. But at month 60, the yellow line diverges from the red line. It falls even more rapidly, losing about 10% more than the red line. This point is labelled “AI removed.” The yellow line recovers a bit, then loses ground more slowly than the red and blue lines, ending up about 5% better than the red line and 5% worse than the blue line.
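
The same sketch models this too: switch the multipliers off at some month (60 here, matching the graph), and note that code already written keeps its doubled maintenance cost:

```python
era = lambda m: 2.0 if 36 <= m < 60 else 1.0  # agent used only in months 36-59
quit_ai = simulate(prod=era, maint=era)
# the speed boost vanishes at month 60, but code written during the agent era
# keeps its doubled maintenance multiplier in `written` for good
print(f"month 60: {quit_ai[60]:.0%}   normal: {normal[60]:.0%}")
```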

The Passage Back

The math only works if the LLM decreases your maintenance costs, and by exactly the inverse of the rate it adds code. If you double your output and your cost of maintaining that output, two times two means you’ve quadrupled your maintenance costs. If you double your output and hold your maintenance costs steady, two times one means you’ve still doubled your maintenance costs.

Instead, you have to invert your productivity. If you’re producing twice as much code, you need code that costs half as much to maintain. Three times as much code, one third the maintenance.

This is the secret to success. All the benefits, none of the lock-in.

This graph shows the same thick blue “normal” line as the others. This time, though, the thin red line doesn’t fall below the blue line. The red line is labelled “AI Doubles Prod, Halves Maint, Removed.” At the 36 month mark, it jumps up to about 85% productivity, as in the other graphs, at a point that’s labelled “AI introduced.” Then it stays well above the blue line, falling on a similar, but slightly steeper curve. At the 84 month mark, it falls back down to exactly track the blue line at a point that’s labelled “AI removed.”
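
And the happy path, in the same sketch: double the output and halve the per-unit upkeep, so the total maintenance owed each month never budges:

```python
faster  = lambda m: 2.0 if 36 <= m < 84 else 1.0  # double the output...
cheaper = lambda m: 0.5 if 36 <= m < 84 else 1.0  # ...halve per-unit upkeep
ideal = simulate(prod=faster, maint=cheaper)
# 2 x 0.5 = 1: each month's total maintenance bill matches the normal line,
# so dropping the agent at month 84 lands you right back on it
print(f"month 90: {ideal[90]:.0%}   normal: {normal[90]:.0%}")
```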

Can We Kill the Beast?

I dunno. All my reading of the finest news sources says that coding agents increase maintenance costs. Some people do say they help them understand large systems better. But big decreases in costs, of the size we need to see? No. Just the opposite.

That’s a problem. The model isn’t a perfect representation of reality, but the overall message is right. You need AI that reduces your maintenance costs, and in proportion to the speed boost you get from new code. Without it, you’re screwed. You’re trading a temporary speed boost for permanent indenture.

So, yeah, go ahead, chase improvements to your coding speed. But spend just as much time chasing improvements to your maintenance costs. Or you, too, will be trapped in Hotel California.

Such a lovely place.

Such a lovely face.

As much as it might seem like it, this isn’t meant to be an anti-AI rant. There are other levers to pull, such as AI that makes maintenance itself more productive, even if it doesn’t make the code more maintainable. I encourage you to copy the spreadsheet and play with all the levers in the model. See what happens when you change the assumptions to match your real-world situation.


Your AI Use Is Breaking My Brain


A few years ago, while I was covering the rise of AI slop on Facebook, I asked my friends and family if they were getting AI spam fed into their timelines and if they could send me examples. A handful of them responded, sending me obviously AI-generated science fiction scenescapes, shrimp Jesus, and forlorn, starving children begging for sympathy. But a few of my friends sent me images that they thought were AI but were not. Their mental guard was up to the point where they were looking at human-made art and photos and thought it safer to dismiss them as AI rather than be fooled by it.

To browse the internet today, to consume any sort of content at all, is to be bombarded with AI of all sorts. People think things that are fake are real, things that are real are fake. Much has been written about “AI psychosis,” the nonspecific, nonscientific diagnosis given to people who have lost themselves to AI. Less has been said about the cognitive load of what other people’s AI use is doing to the rest of us, and the insidious nature of having to navigate an internet and a world where lazy AI has infiltrated everything. Our brains are now performing untold numbers of calculations per day: Is this AI? Do I care if it’s AI? Why does this sound or look or read so weird? Does this person just write like this? Is this a person at all? 

I see AI content where I’m conditioned to expect and ignore it: In Google’s “AI Overviews” that famously told us to eat glue pizza, in engagement-bait LinkedIn posts, and throughout our Facebook and Instagram feeds. But increasingly I have the feeling that it’s everywhere, coming from all directions, completely unavoidable. It’s not exactly that I have a revulsion to AI-assisted content or don’t want to get fooled by it. It’s that something is happening where my brain has become the AI police because everything feels incredibly uncanny. I will be going about my day reading, watching, or listening to something and, suddenly, I notice that something is wildly off. Quite simply, I feel like I’m going nuts. 

An example: Last week, in a desperate attempt to avoid yet another take on the White House Correspondents Dinner shooting, I was listening to an episode of Everyone’s Talkin’ Money, a podcast about taxes (yikes) that I’ve been listening to off and on for years. It has a human host named Shari Rash and hundreds of episodes. Rash started reading the intro script: “The shift I want you to make today—and this is the shift that changes everything—is starting to see your tax return as information—not a bill, not a badge of shame, but information.” The script went on and on and on like this, with AI writing trope after AI writing trope. My brain shut down, stopped paying attention to the script, and started wondering: Was Rash using AI just for the intro script? What about the research? Did she edit the script at all? I turned the podcast off.

Later that day, I was scrolling the Orioles Hangout forums, a small community of diehards obsessed with the Baltimore Orioles that I have been lurking on for decades. Until recently, it had been one of the few places on the internet that I could safely assume was not full of AI. Except now, it is. The site’s administrator has started using AI to analyze player performance and to help him write some of his posts. To his credit, he explains how he’s using AI and prefaces these posts by noting they are AI-assisted analysis. Some of them are interesting. But now, most days I’m browsing the forums, I will see arguments between posters who have been there for years that seem overly generic or don’t really make sense. One recent post arguing about the timetable for an injured player’s return suggested a ludicrously long recovery. One poster pointed this out: “You said 10-18 months and I said it won’t take that long for a position player.” The poster responded: “You’re right I did. The 10-18 months was an AI generated answer … consider it a small cautionary tale about trusting AI and another on the benefits of seeking out actual medical research on questions like this.” Every day I now scroll the forum and see people noting that they plugged something into ChatGPT or Gemini and copy-pasted the answers for other people to see. In this 30-year-old community of human beings discussing sports, AI is unavoidable.

It is, of course, not just me. Friends send me screenshots of texts they’ve gotten from people they’ve started dating, wondering if they’re using ChatGPT to flirt. I’ve gotten obviously AI-generated apologies or excuses from people trying to bail on a social engagement. I’ve been to weddings where the speeches felt—and were—partially AI-generated. 

A recent Pew poll showed that people believe it is important to be able to tell whether an image, video, or piece of writing was AI-generated, AI-assisted, or written by a human. It also showed that a majority of people do not believe they can tell the difference between AI-generated works and human-made works. Studies have repeatedly shown that humans judge AI-generated art and writing more harshly than human works, and a study published in the Journal of Experimental Psychology found that the bias against writing people know or perceive to be AI-generated is “stubbornly difficult to mitigate” and “remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and different types of written content.” Put simply, it is not just me who hates AI writing or finds it annoying. Even if AI writing can be “fine,” it very often feels bland, weird, formulaic. The writer Eve Fairbanks wrote a thread the other day that I thought more or less nailed it: “The tell for AI isn’t rhythm, wording, or fact errors. It’s that problems with *all these elements* exist equally & at once.”


EdTech, Security Theater, and the Canvas Breach

Thomas Jefferson High School for Science and Technology. Source.

The Schoolhouse is a series from Education Progress featuring articles for and from teachers, parents, education officials, and others working in the education system.


I have loved EdTech for a long time.

And for a long time everything about it made my job easier. My love began slowly, though, starting with simple communications to students and parents, or sharing problem sets that students would complete on paper from their math textbooks. As the technology evolved, so did my use of it: distributing PDFs, sharing lecture notes, accepting assignments, communicating even more with families, and giving students more ways to access materials.

I did not have to worry about printing allowances, or whether a student had a pencil, which are bigger concerns than many realize. As schools moved to 1:1 technology, computers became abundant. I could stop providing printed copies of my typed lectures. I could communicate with parents more easily. I could allow students to type instead of handwriting, which by the mid-2010s had become so atrocious that it could only be interpreted with the help of a cuneiform tablet.

Most of the time, tech was also something my students liked. With very, very few exceptions, most students preferred typing papers and essays to writing them out by hand. Students generally preferred typing notes, even though we now know that is often an inferior method of learning. Most students preferred being able to edit and revise their work. They preferred being able to turn something in at midnight if they had a basketball game that went until 10 PM.

But decreasing friction came at a cost. Today we are reckoning with a decade of having conflated technological progress with progress in education. It is easier to update our EdTech platforms than it is to improve learning outcomes, and technological development has become a metric all its own.

Technology Transfers

It’s hard to say how it could have gone differently, though. EdTech gave us — teachers and school admins — flexibility with deadlines. It let me personalize classroom and accommodation materials and made collection and distribution easier in almost every way. It was a lubricant in the wheels of education. For myself and for the hundreds of students I taught over the years, it made a lot of daily classroom work much less onerous.

Not only did I love it, but I was also an early adopter. I started with Weebly and classroom blogs, then began using Canvas/Instructure in 2012. I helped integrate EdTech into every aspect of classroom management and classroom practice, from using Microsoft Teams for video calling to early adoption of Google Classroom. I have done it all. And yet, even as I continue to use it for my classes this week, I also understand its limitations. I also understand what instructors give up when we move instruction online.

While I am the first to say I love EdTech, I am also the first to acknowledge that sometimes it really sucks. There are times when, over the past 15-plus years, it has failed. I have had to extend deadlines. I have had to rush to print things out because the Wi-Fi was down or a student system had crashed. There are times when tech does not do the job it promises to do, and times when using it makes you neither more knowledgeable nor more competent in your work.

As much as I am an early adopter of all things EdTech, I also — shockingly (to many) — paired that with a devoted use of notebooks and paper-based materials. I would have discussions online and still require students to take handwritten notes and submit problem sets, graphs, and illustrations drawn by hand, even when Word, Slides or Desmos might have provided me with something more visually interesting, technically polished, or easier to read. To this day I have not found a suitable technological alternative to a student taking handwritten lecture notes, or keeping notebooks of vocabulary, mathematics proofs, or market diagrams.

It is a paradox, then, that I land where I am today: increasingly convinced that we need to reconsider 1:1 EdTech in K-8 classrooms and return much of early technology use to a more intentional, lab-based model. I think there are real benefits to using technology in the classroom. But it has a major problem we need to consider more carefully moving forward: the problem of security theater.

Having managed EdTech admin accounts across Microsoft Teams, Instructure, GSuite, and other platforms, I know how much information we provide to EdTech companies: student names, ages, birthdays, ID numbers, email addresses, parent contacts, assignments, grades, accommodations, and messages. All of this goes into a black box system that only the technology advisor or administrator really sees and approves, often on behalf of every student opted into that system while every parent has little choice but to accept it. Public or private, across the country, this is now normal. But what happens when that data is breached?

We say it is unavoidable because we have to use EdTech. But is it really that unavoidable? I still enjoy EdTech, and I will continue to use it. But that does not mean that I believe we have to use it uncritically. Does it make certain parts of the job easier? Undoubtedly. Recent regulations for ADA accommodations for visual and audio media used in the classroom make EdTech one of the greatest tools for inclusion. But I would be a hypocrite if I did not also acknowledge that it has had a deleterious effect on student privacy. I do not think we can separate the benefits of EdTech from its harms. And one of its main harms is the commodification of student data and student profiles.

EdTech allows us to communicate with parents and stakeholders much more efficiently through a thin veneer of security. These platforms give the appearance of walls and barriers around student information. They tell us that communication is protected, that data are secure, that access is controlled, and that vendors are compliant. But this is a façade. The truth is actually far more complicated.


Taken from the Education Department’s Flickr.

Security Theater

Most EdTech companies claim to provide best-in-class security. But as administrators, we must rely on our technical advisors and on people more knowledgeable than us about how secure that information is. Most operate at a knowledge deficit. I devoted several years of my life to understanding cloud database structures, cloud data systems, multi-character key encryptions, and student information tagging, and while I have a much better understanding of what types of security EdTech platforms can provide, I also know just how much information most schools feel obliged to share with these platforms.

When I started using EdTech in my classroom there were no logins for students, no single sign-on, and no two-factor authentication. Student use of EdTech was often passive. Teachers used online blogs or personal pages to post materials for wide distribution. Students and parents could, if they wanted to, create their own accounts to access and participate in online discussions, coordinate material submission, or receive feedback, but accounts began anonymously in online systems and most of the coordination happened outside the digital world. Over time formalized systems were established to comply with FERPA as gradebooks and materials submissions moved online and became connected to unique student profiles.

Today, almost any tech-enabled tool requires the creation of a unique student avatar. That avatar is connected not only to individual students, but also to parents, teachers, materials, assignments, and of course grades. Increasingly, these online learning systems are connected to student information systems, which also archive academic records, attendance records, discipline records, and whatever else is connected across schools, districts, and state data collection systems. As schools have moved from being hubs for education to hubs for community services, our data systems additionally catalog vaccine opt-outs and immunization records, mental health appointments, social service referrals, health records, and other sensitive information housed in or coordinated through schools. While much of this is managed under HIPAA, anything shared from a HIPAA-secure system willingly, but unknowingly, by parents goes into an SIS or cloud platform that almost never maintains the same level of security.


I do not think most people understand what this landscape really looks like, or how it operates. Few grasp just how interconnected our school management systems have become, not just to each other but also to the numerous cloud platforms that schools and districts do not manage themselves. Most people, moreover, have no idea how much information schools are required to distribute across so many unique platforms including AWS, Azure, Box, Google Cloud, Dropbox, and so on. Families often have very little visibility into which privacy regime applies, which vendor holds the data, and what downstream integrations may touch it.

Clever was built to help manage the data coordination and login problem for districts, and it is used nationwide. But this is one example of a popular platform. There are many others managing, hosting, syncing, and transferring student data in the background. When we share and provide these student data profiles, we increase our exposure to bad actors. The issue is not just whether one database is encrypted. The issue is what happens when student identity, school records, assignments, messages, accommodations, grades, and third-party tools are all connected across systems.

What used to be securely stored in PDFs, PostgreSQL relational databases, and other encrypted storage systems is not suddenly insecure because of AI. Those systems can still be very secure when they are properly configured, access-controlled, and narrowly managed. The problem is that AI makes data exposure more consequential. Neural networks, transformers, and generative AI systems can now process images, scripts, text, tables, and scanned documents at speed. Their real power lies not simply in processing information, but in finding connections across information that once remained separate. This architecture is extraordinarily useful for science, research, and complex monitoring systems. But it also means that what we choose to collect, copy, sync, and retain is more vulnerable to reconstruction, linkage, and misuse than ever before.

We have built an education technology ecosystem where ease of access often depends on connecting more systems together. A student logs in once and reaches the LMS, email, documents, assignments, messages, third-party tools, assessment platforms, parent portals, and more. It all feels efficient, but every connection also expands the surface area of risk.

The Canvas Hack

Which brings us to last week: the ransomware hack of Canvas systems. Over 9,000 schools were impacted, and we still do not know the full scope of what was accessed or how exposure varied across institutions. But that uncertainty is itself part of the problem. The risk to any given school depends not only on whether it used Canvas, but on how deeply Canvas was integrated into its identity systems, messaging, student records, assignments, third-party tools, and student-facing communication systems. For some schools, the exposed information may have been relatively limited. For others, depending on how much information was shared with and routed through Canvas, the exposure may have been much broader. That is precisely the point. The danger is not just the platform. The danger is the degree of integration.

If this Canvas incident exposes anything, it is not that one login provider caused the breach, or that one type of school account was uniquely vulnerable. It exposes something broader: how deeply modern EdTech depends on linked identity systems, single sign-on, cloud platforms, browser-saved credentials, third-party integrations, and centralized student profiles. By allowing any company, system, or platform to manage student data at this scale, we give it more power than it deserves. And we engage in security theater.

We play-act that we are secure because the system looks professional, because the vendor has a compliance statement, because there is a login screen, because there is two-factor authentication, because the contract has been approved, or because someone in technology signed off on it. But vendor approval is not the same thing as safety. Single sign-on is not the same thing as safety. A privacy policy is not the same thing as safety. How did we let this become so ubiquitous in our education system?

It makes logging into everything easier. We can save passwords in our Chrome accounts and browsers. We can use Clever and other login systems to connect everything together. We can make a student’s digital school life seamless. But seamless is not always safe.


So where do we go from here?

Perhaps the problem of EdTech, laid bare for millions of students, teachers, administrators, and parents last week, is the wake-up call we needed. My suggestion today is simple: roll things back. Require EdTech only where and when it is actually needed. We need to look honestly at the value of EdTech in our schools, for students, and for the education ecosystem as a whole. Do endless EdTech subscriptions provide a real value-add for student learning? Do we need a 1:1 device program for every student at every level of instruction? Do we need every assignment, message, accommodation, assessment, and parent communication routed through third-party systems?

I am increasingly convinced that the answer, particularly at the K-8 level, is no. If we are serious about FERPA and COPPA, then we need to stop pretending that compliance is achieved because a vendor has a privacy policy, a district has approved a contract, or a parent clicked through a consent form they did not really understand. Compliance cannot be reduced to paperwork; it should mean minimizing unnecessary exposure in the first place.

For younger students especially, the default should not be full integration into an always-on EdTech ecosystem. The default should be to opt out. Let’s bring back lab-based technology, limited-purpose tools, local storage where possible, paper-based alternatives, and far fewer student accounts connected across platforms. This does not mean no technology, but it does mean technology with clear boundaries.

We need to care more about students’ realized learning, and we need to more intentionally ask which systems are essential in achieving that goal. Platforms that provide and extend access, accommodation, instruction, or communication — we can consider using those carefully. But many are not essential, and if one is not, we should cut ties with it.

In any case, what I have learned over the past five days is that the most meaningful technological revolution for schools may not be changes in security authentication, widespread use of AI, or the adoption of personalized learning platforms. It might just be refusing to collect, upload, connect, and retain data that schools did not need to hand over in the first place. Perhaps in doing so we can begin to reconstruct a model of education that works for the students in our classrooms, rather than one that quietly turns every child into a permanent digital profile.

Join the Center for Educational Progress and receive all our content — and thanks to all our amazing paid subscribers for their support.
