
The Broken Record


I was amused to read Dan Meyer’s account of the recent AI+Education Summit at Stanford, particularly the remarks made by the university’s former president, John Hennessy, who asked the audience if anyone remembered “the MOOC revolution” and could explain how, this time, things will be different. The panelists all seemed to assert that -- thanks to “AI” -- the revolution is definitely here. The revolution or, say, the tsunami -- the word Hennessy himself used back in 2012, when he predicted a sweeping technological transformation of education, a phrase echoed in so many stupid NYT op-eds and pitch decks. Dan recalled the utterance, but no one else seemed to -- at least, no one on stage or in the audience had the guts to turn to Hennessy (or to any of the attendees and speakers, many of whom were also high on the MOOC vapors) and call him on his predictive bullshit.

As Dan correctly notes,

Look — this is more or less how the same crowd talked about MOOCs ten years ago. Copy and paste. And AI tutors will fall short of the same bar for the same reason MOOCs did: it’s humans who help humans do hard things. Ever thus. And so many of these technologies — by accident or design — fit a bell jar around the student. They put the kid into an airtight container with the technology inside and every other human outside. That’s all you need to know about their odds of success.

The odds of success are non-existent. There will be no “AI” tutor revolution just as there was no MOOC revolution just as there was no personalized learning revolution just as there was no computer-assisted instruction revolution just as there was no teaching machine revolution. If there is a tsunami, it’s not technological as much as ideological, as the values of Silicon Valley -- techno-libertarianism, accelerationism -- are hard at work in undermining democratic institutions, including school.

The history of failed ed-tech startups and ed-tech schools is long, and yet we’re trapped in this awful cycle where investors and entrepreneurs keep repackaging the same bad ideas.

There was another story this week on Alpha School, this one by 404 Media’s Emanuel Maiberg: “‘Students Are Being Treated Like Guinea Pigs’: Inside an AI-Powered Private School.” Back in October, Wired documented the miserable experiences of students, forced into hours of repetitive clicking on drill-and-kill software under incessant surveillance. Maiberg’s reporting, in part, expands on this, as he writes about the school’s goal of building “bossware for kids” -- that is, “enhanced tracking and monitoring of kids beyond screentime data.”

But much of Maiberg’s story examines the use of technologies to build the “AI curriculum” touted by the school’s founders. Not only does Alpha School’s reliance on LLMs for creating curriculum, reading assignments, and exercises mean these materials are littered with garbled nonsense, but the company also seems to be scraping (i.e. stealing) other education companies’ materials, including those of IXL and Khan Academy, for use in building its own.

While I deeply appreciate Maiberg’s reporting here -- I am a huge fan of 404 Media and am a paid subscriber because I think investigative journalism is important and necessary -- this story is a huge disappointment because it does not push back at all on the underlying ideas of Alpha School. Indeed, this is precisely the problem that keeps us trapped in this “ed-tech déjà vu” -- the one that has, just in the last couple of decades, recycled this same idea over and over and over again (funded and promoted, it’s worth noting, by the very same people -- the Marc Andreessens and Reid Hoffmans and Mark Zuckerbergs of the world): Rocketship Education. Summit Learning. AltSchool. And now Alpha School.

Maiberg suggests in his story (and more explicitly on the podcast in which he and the publication’s other co-founders discuss the week’s articles) that Alpha School’s idea of “2 Hour Learning” is a good idea. But I think that claim -- the school’s key marketing claim, to be sure, before, like everyone else, it started to tout the whole “AI” thing -- really needs to be interrogated. Why are speed and efficiency the goal? These are the goals of the tech industry’s commitment to accelerationism, yes. These are the goals of a lot of video games, where you grind through repetitive tasks to accumulate enough points to level up. But why should these be something that schools embrace? Why should these be core values for education? Does learning -- deep, rich, transformative learning -- ever actually happen this way? (And what else are we learning, one might ask, when we adopt technological systems and worldviews that prioritize these?)

Let me quote math educator Michael Pershan at length here:

I keep coming around to this: the interesting innovation of Alpha School is not their apps or schedule or Timeback but their relationship to core academics. This is a school that believes that the “core” of schooling should be taken care of as quickly and painlessly as possible so that the rest of the day can be opened up to things that actually matter. Most schools don’t do this! We instead tell kids that history is a way of understanding ourselves and others. Math, we say, can be an absolute joy, full of logical surprises. We tell kids that a good story can open up your heart and mind.

Alpha doesn’t. They aim to streamline and focus on the essentials for skill mastery. Maybe they are showing you can learn to comprehend challenging texts without reading books. Maybe a math education composed of examples and (mostly) multiple choice questions is, in reality, all you need to ace the SAT.

If it turns out they’re succeeding at this, it’s because they’re trying.

And maybe, one day, Alpha or someone else will crack the code for good. It then will be possible to get all students to grind through the skills and move on. With all that extra time, schools will find better things for kids to do than academics. And maybe, at some point, we’ll ask, what’s the point of grinding through things we don’t care about? Do we really need to become great at mathematics when machines can do it? How important is it really to learn how to read novels or fiction? Maybe, one day, this is how books disappear from schools for good.

Schools like Alpha School, AltSchool, Summit, and Rocketship are all strikingly dystopian insofar as they compromise, if not reject, any sort of agency for students; they compromise, if not reject, any sort of democratic vision for the classroom. School is simply an exercise in engineering and optimization: command and control and test-prep and feedback loops. There is no space for community or cooperation, no time for play -- there is no openness, no curiosity, no contemplation, no pause. There is no possibility for anything other than what the algorithm predicts.

(Kids hate this shit, no surprise. They want to be human; they want to be with other humans, even if tech-bros try to build a world that’s forgotten how.)


Or rather, most kids hate this shit. There are a few who embrace it because, they reckon, if they play the game right, they too can join the tech elite. Case in point: yet another profile of Cluely founder Roy Lee, this one by Sam Kriss in Harper’s: “Child’s Play: Tech’s new generation and the end of thinking.”

The Broken Record

I find this insistence from certain quarters that “there is no evidence that social media harms children” to be pretty disingenuous. There’s a lot of evidence -- plenty of research that points to negative effects and, sure, plenty that points to positive effects of technology -- so it’s a little weird to see efforts to curb kids’ mobile phone and social media usage framed as just some big conspiracy for Jonathan Haidt to sell more books.

Mark Zuckerberg took the stand this week in a California court case contending that Meta (along with other tech companies such as TikTok and Google) knowingly created software that was addictive, leading to personal injury -- and, for the plaintiff in this particular case, to anxiety, depression, and poor body image.

That the judge in the case had to chastise Zuckerberg and his legal team for wearing their “AI” Ray-Bans in the courtroom just serves to underscore how very little these people care for the norms and values of democratic institutions.

We see this in the courtroom. We see this in the media. We see this in schools -- from Slate: “Meta’s A.I. Smart Glasses Are Wreaking Havoc in Schools Across the Country. It’s Only Going to Get Worse.”

We see this in the billions of dollars that the tech companies plan to funnel into elections this year to try to ensure there are no regulatory measures taken to curb their extractive practices -- $65 million from Meta alone.


What on earth would make you think that tech companies -- their investors, their executives, their sycophants in the media -- want to make education better?

Inside Higher Ed reported this week that the University of Texas Board asked faculty to “avoid ‘controversial’ topics in class.” There weren’t any details on what this meant -- what counts as “controversial” -- or how this might be enforced. (Meanwhile in Florida, college faculty were handed a state-created curriculum and told to teach from it.)

We are witnessing the destruction -- the targeted destruction -- of academic freedom across American universities. This trickles down into all aspects of education at every level.

And to be clear, again, my god, I’m a broken record too: this is all inextricable from the rise of “AI,” from its injection into every piece of educational and managerial software. The tech industry seeks the monopolization of knowledge; they seek the control of labor -- intellectual labor, and all labor, “white collar” and “blue collar” alike, is intellectual labor. They worship speed and efficiency, not because these values are democratic, but precisely because they believe they can make us bend our entire beings towards their profitability.


The Broken Record

Perhaps ed-tech is, in the end, simply “optimistic cruelty”; and these cycles that we keep going through are just repeated and failed attempts to replay and harness Ayn Rand’s bad ideas, her mean-spirited visions for a shiny, shitty technolibertarian future -- one in which children (other people’s children, of course) are the grist for the entrepreneurial mill.


More bad people doing bad things in ed-tech:


The Broken Record
(Image credits)

Today’s bird is the pigeon, because yeah, we are still living in B. F. Skinner’s world -- one where people will look you in the eye and say that being “agentic” means handing over all your decision-making to their system, that “freedom” and “dignity” don’t really matter because their brilliant engineering is going to make everything fine and dandy. This time. Really. It’s a revolution. It’s a tsunami. It’s a shit storm.

“Russian Startup Hacks Pigeon Brains to Turn Them into Living Drones.”


Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Your financial support is what enables me to do this work.


Books and screens


[Image: A modern library with tall bookshelves and people reading or using devices in a cosy, well-lit atmosphere.]

Your inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time

- by Carlo Iacono

Read on Aeon


AI vibe-generates the same ‘random’ passwords over and over


If you ask a chatbot for a random number from one to ten, it’ll usually pick seven: [arXiv, PDF]

GPT-4o-mini, Phi-4 and Gemini 2.0, in particular, seem much more restricted in this range, as they choose “7” in ~80% of total cases.

Seven has long been known to also be humans’ favourite number when they’re asked for something that sounds random. From 1976: [APA, 1976]

When asked to report the 1st digit that comes to mind, a predominant number (28.4%) of 558 persons on the Yale campus chose 7.

Computers are pretty good at random numbers. But chatbots don’t work in numbers — they work in word fragments. So if you ask a chatbot for a random number, it’ll pick words from its training.
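
The contrast is easy to demonstrate. Here’s a minimal sketch using Python’s standard secrets module, which draws from the operating system’s cryptographically secure generator: each digit comes up close to 10% of the time, nothing like the ~80% sevens above.

    import secrets
    from collections import Counter

    # Draw 1,000 "random numbers from one to ten" from the OS's
    # cryptographically secure source and tally the results.
    draws = Counter(secrets.randbelow(10) + 1 for _ in range(1000))
    for number, count in sorted(draws.items()):
        print(number, count)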

Guess what happens when people ask the chatbot for a password? Irregular, a chatbot testing company, tested chatbots on passwords: [Irregular]

LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens — the opposite of securely and uniformly sampling random characters.

Despite this, LLM-generated passwords appear in the real world — used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.

When you ask the chatbot for a strong password, it doesn’t generate a password — it picks example patterns of random passwords from its training.

Irregular asked Claude for 50 strong passwords. They found standard patterns in the passwords — most start with “G7”. The characters “L,” “9,” “m,” “2,” “$,” and “#” appeared in all the passwords.

And the bot kept repeating passwords. One password appeared 18 times in the 50 passwords!

ChatGPT and Gemini gave similar results. But the passwords sure looked random.
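
This kind of pattern collapse is easy to check for yourself. A minimal sketch, assuming you’ve pasted a model’s generated passwords into a list (the two shown here are made-up placeholders):

    from collections import Counter

    # Hypothetical placeholders: paste in real model output here.
    passwords = ["G7$mLx9#Tq2w", "G7#9Lm$2kWpz"]

    # Exact repeats: a uniform sampler essentially never produces
    # the same long password twice.
    repeats = {p: n for p, n in Counter(passwords).items() if n > 1}

    # Do most passwords share the same opening characters?
    prefixes = Counter(p[:2] for p in passwords)

    # Which characters appear in every single password?
    common = set.intersection(*(set(p) for p in passwords))

    print(repeats, prefixes.most_common(3), common)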

The other problem with predictable passwords is that they’re easily crackable. In cryptography jargon, they have low entropy: the fewer possibilities a password is really drawn from, the fewer guesses an attacker needs.
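
To put rough numbers on that (my arithmetic, not Irregular’s): a 16-character password sampled uniformly from the 94 printable ASCII characters carries about 105 bits of entropy, while a password that turns up 18 times in 50 samples is worth roughly one coin flip:

    import math

    # Uniform sampling: entropy = length * log2(alphabet size).
    uniform_bits = 16 * math.log2(94)    # ~105 bits

    # A password appearing 18 times in 50 samples has probability
    # ~0.36, i.e. about -log2(0.36) ~= 1.5 bits of surprise.
    repeated_bits = -math.log2(18 / 50)

    print(f"{uniform_bits:.0f} bits vs {repeated_bits:.1f} bits")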

The Register tried reproducing Irregular’s work, and they got results much like Irregular’s. Chatbots are just bad at this. [Register]

Why would you even ask a chatbot to generate a password for you? Because chatbot users treat the chatbot as their first stop for everything. It’s their universal answer machine!

You and I might know better. But so many people just don’t. They fell for the machine that was tuned really hard to make people fall for it. Even the vibe coders fall for the password one.

So what should you tell them to do to generate a strong password? If your web browser has a password generator, use that. The password manager apps, like 1Password or LastPass, all have password generators. They’ll be okay. But fundamentally, anything is better for the job than a chatbot.
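
And if you have Python handy, the standard library does this properly in a few lines. A minimal sketch, not a substitute for a password manager:

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Uniformly sample printable characters from the OS's
        cryptographically secure random source."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())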


Unsung heroes: Flickr’s URL scheme


Half of my education in URLs as user interface came from Flickr in the late 2000s. Its URLs looked like this:

flickr.com/photos/mwichary/favorites
flickr.com/photos/mwichary/sets
flickr.com/photos/mwichary/sets/72177720330077904
flickr.com/photos/mwichary/54896695834
flickr.com/photos/mwichary/54896695834/in/set-72177720330077904

This was incredible and a breath of fresh air. No redundant www. in front or awkward .php at the end. No parameters with their unpleasant ?&= syntax. No % signs partying with hex codes. When you shared these URLs with others, you didn’t have to retouch or delete anything. When Chrome’s address bar started autocompleting them, you knew exactly where you were going.

This might seem silly. The user interface of URLs? Who types in or edits URLs by hand? But keyboards are still the most efficient entry device. If you’re going somewhere you’ve already been, typing a few letters might get you there much faster than waiting for pages to load, clicking, and so on. It might get you there even faster than sifting through bookmarks. Or, if where you’re going is up the hierarchy, a well-designed URL will let you drag to select and then backspace a few things from the end.

Flickr allowed you to do all that, and all without a touch of the Shift key, too.

For a URL to be easily editable, it needed to be easily readable, too. Flickr’s were. The link names were so simple that seeing the menu…

…told you exactly what the URLs for each item were.

In the years since, the rich-text dreams didn’t materialize. We’ve continued to see and use naked URLs everywhere. And this is where we get to one other benefit of Flickr URLs: they were short. They could be placed in an email or in Markdown. Scratch that, they could be placed in a sentence. And they would never get truncated on Slack today with that frustrating middle ellipsis (which occasionally leads to someone copying the shortened and now-malformed URL and sharing it further!).

It was a beautiful and predictable scheme. Once you knew how it worked, you could guess other URLs. If I were typing an email or authoring a blog post and I happened to have a link to your photo in Flickr, I could also easily include a link to your Flickr homepage just by editing the URL, without having to jump back to the browser to verify.
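
The whole scheme fits in a handful of templates. Here’s how I’d reconstruct it from the examples above (the route and parameter names are mine, not Flickr’s):

    # Flickr's URL scheme as route templates (my reconstruction).
    ROUTES = {
        "user":         "flickr.com/photos/{user}",
        "favorites":    "flickr.com/photos/{user}/favorites",
        "sets":         "flickr.com/photos/{user}/sets",
        "set":          "flickr.com/photos/{user}/sets/{set_id}",
        "photo":        "flickr.com/photos/{user}/{photo_id}",
        "photo_in_set": "flickr.com/photos/{user}/{photo_id}/in/set-{set_id}",
    }

    def url(route: str, **params: str) -> str:
        """Build a URL by filling in a named route template."""
        return ROUTES[route].format(**params)

    print(url("photo", user="mwichary", photo_id="54896695834"))
    # flickr.com/photos/mwichary/54896695834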

Flickr is still around and most of the URLs above will work. In 2026, I can think of a few improvements. I would get rid of /photos, since Flickr is already about photos. I would also try to add a human-readable slug at the end, because…
flickr.com/mwichary/sets/72177720330077904-alishan-forest-railway
…feels easier to recall than…
flickr.com/photos/mwichary/sets/72177720330077904

(Alternatively, I would consider getting rid of numerical ids altogether and relying on name alone. Internet Archive does it at e.g. archive.org/details/leroy-lettering-sets, but that has some serious limitations that are not hard to imagine.)
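
Generating the slug itself is the easy part. A sketch of the kind of normalisation such a scheme would need:

    import re

    def slugify(title: str) -> str:
        """Reduce a title to lowercase ASCII words joined by hyphens."""
        return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

    set_id = "72177720330077904"
    print(f"flickr.com/mwichary/sets/{set_id}-{slugify('Alishan Forest Railway')}")
    # flickr.com/mwichary/sets/72177720330077904-alishan-forest-railway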

But this is the benefit of hindsight and the benefit of things I learned since. And I started learning and caring right here, with Flickr, in 2007. Back then, by default, URLs would look like this:

www.flickr.com/Photos.aspx?photo_id=54896695834&user_id=mwichary&type=gallery

Flickr’s didn’t, because someone gave a damn. The fact they did was inspiring; most of the URLs in things I’ve created since owe something to that person. (Please let me know who that was, if you know! My grapevine says it’s Cal Henderson, but I would love a confirmation.)


Plums

My icebox plum trap easily captured William Carlos Williams. It took much less work than the infinite looping network of diverging paths I had to build in that yellow wood to ensnare Robert Frost.
Public comments
tedder
2 days ago
I'm confused by this. What's the dilemma? Wanting to use the plum for dinner?
marcrichter
2 days ago
Me too, but there's always this: https://www.explainxkcd.com/wiki/index.php/3209:_Plums
ttencate
1 day ago
Not always; it's throwing 500 errors right now. Apparently you and I are not the only ones who are confused 😁
dlowe
1 day ago
There's a famous (infamous) poem about stealing plums from the icebox that has been remixed a thousand times in meme-culture.
fancycwabs
1 day ago
He's tempted to go all William Carlos Williams on the plums, mostly for the opportunity to apologize later.

Highlights from Stanford's AI+Education Summit


I attended the AI+Education Summit at Stanford last week, the fourth year for the event and the first year for me. Organizer Isabelle Hau invited researchers, philanthropists, and a large contingent of teachers and students, all of them participating in panels throughout the day. That mix—heavier on practitioners than edtech professionals—gave me lots to think about on my drive home. Here are several of my takeaways.

¶ The party is sobering up. The triumphalism of 2023 is out. The edtech rapture is no longer just one more model release away. Instead, starting with the Summit’s first slide, panelists frequently argued that any learning gains from AI will be contingent on local implementation and are just as likely to result in learning losses, such as those listed on that slide.

¶ Stanford’s Guilherme Lichand presented one of those learning losses with his team’s paper, “GenAI Can Harm Learning Despite Guardrails: Evidence from Middle-School Creativity.” His study replicated previous findings that kids do better on certain tasks with AI assistance in the near-term—creative tasks, in his case—and worse later when the tool is taken away. “Already pretty bad news,” Lichand said. But when he gave the students a transfer task, the students who had AI and had it taken away saw negative transfer. “Four-fold,” said Lichand. What’s happening here? Lichand:

It’s not just persistence. It’s a little bit about how you don’t have as much fun doing it, but most importantly, you start thinking that AI is more creative than you. And the negative effects are concentrated on those kids who really think that AI became more creative than them.

A paper I’ll be interested in reading. This was using a custom AI model, as well, one with guardrails to prevent the LLM from solving the tasks for students, the same kind of “tutor modes” we’ve seen from Google, Anthropic, OpenAI, Khan Academy, etc.

¶ Teacher Michael Taubman had the line that brought down the house.

In the last year or so, it’s really started to feel like we have 45 minutes together and the together part is what’s really mattering now. We can have screens involved. We can use AI. We should sometimes. But that is a human space. The classroom is taking on an almost sacred dimension for me now. It’s people gathering together to be young and human together, and grow up together, and learn to argue in a very complicated country together, and I think that is increasingly a space that education should be exploring in addition to pedagogy and content.

¶ Venture capitalist Miriam Rivera urged us to consider the nexus of technology and eugenics that originated in Silicon Valley:

I have a lot of optimism and a lot of fear of where AI can take us as a society. Silicon Valley has had a long history of really anti-social kinds of movements including in the earliest days of the semiconductor, a real belief that there are just different classes of humans and some of them are better than others. I can see that happening with some of the technology champions in AI.

Rivera kept bringing it, asking the crowd to consider whether or not they understand the world they are trying to change:

But my sense is there is such a bifurcation in our country about how people know each other. I used to say that church was the most segregated hour in America. I just think that we’ve just gotten more hours segregated in America. And that people often are only interacting with people in their same class, race, level of education. Sometimes I’ve had a party one time, and I thought, my God, everybody here has a master’s degree at least. That’s just not the real world.

And I am fortunate in that because of my life history, that’s not the only world that I inhabit. But I think for many of us and our students here, that is the world that they primarily inhabit, and they have very little exposure to the real world and to the real needs of a lot of Americans, the majority of whom are in financial situations that don’t allow them to have a $400 emergency, like their car breaks down. That can really push them over the edge.

Related: Michael Taubman’s comments above!

¶ Former Stanford President John Hennessy closed the day with a debate between various education and technology luminaries. His opening question was a good one:

How many people remember the MOOC revolution that was going to completely change K-12 education? Why is this time really different? What fundamentally about the technology could be transformative?

This was an important question, especially given the fact that many of the same people at the same university on the same stage had championed the MOOC movement ten years ago. Answers from the panelists:

Stanford professor Susanna Loeb:

I think the ability to generate is one thing. We didn’t have that before.

Rebecca Winthrop, author of The Disengaged Teen:

Schools did not invite this technology into their classroom like MOOCs. It showed up.

Neerav Kingsland, Strategic Initiatives at Anthropic:

This might be the most powerful technology humanity has ever created and so we should at least have some assumption and curiosity that that would have a big impact on education—both the opportunities and risks.

Shantanu Sinha, Google for Education, former COO of Khan Academy:

I’d actually disagree with the premise of the question that education technology hasn’t had a transformative impact over the last 10 years.

Sinha related an anecdote about a girl from Afghanistan who was able to further her schooling thanks to the availability of MOOC-style videos, which is an inspiring story, of course, but quite a different definition of “transformation” than “there will be only 10 universities in the world” or “a free, world‑class education for anyone, anywhere” or Hennessy’s own prediction (unmentioned by anyone) that “there is a tsunami coming” for higher education.

After Sinha described the creation of LearnLM at Google, a version of their Gemini LLM that won’t give students the answer even if asked, Rebecca Winthrop said, “What kid is gonna pick the learn one and not the give-me-the-answer one?”

Susanna Loeb responded to all this chatbot chatter by saying:

I do think we have to overcome the idea that education is just like feeding information at the right level to students. Because that is just one important part of what we do, but not the main thing.

Later, Kingsland gave a charge to edtech professionals:

The technology is, I think, about there, but we don’t yet have the product right. And so what would be amazing, I think, and transformative from AI is, if in a couple of years we had an AI tutor that worked with most kids most of the time, most subjects, that we had it well-researched, and that it didn’t degrade on mental health or disempowerment or all these issues we’ve talked on.

Look—this is more or less how the same crowd talked about MOOCs ten years ago. Copy and paste. And AI tutors will fall short of the same bar for the same reason MOOCs did: it’s humans who help humans do hard things. Ever thus. And so many of these technologies—by accident or design—fit a bell jar around the student. They put the kid into an airtight container with the technology inside and every other human outside. That’s all you need to know about their odds of success.

It’ll be another set of panelists in another ten years scratching their heads over the failure of chatbot tutors to transform K-12 education, each panelist now promising the audience that AR / VR / wearables / neural implants / et cetera will be different this time. It simply will.

Hey thanks for reading. I write about technology, learning, and math on special Wednesdays. Throw your email in the box if that sounds like your thing! -Dan
