
All Learning Happens in Small Steps


I have a little theory. All learning happens in small steps. All new learning is a small step, building on what students already know.

Small steps have become one of the central ways I look at learning. I'll get to the implications in a minute. First, a story.

Evolution

Charles Darwin's discovery of the theory of evolution is one of the marvels of the human mind. The idea that species change over time might seem obvious to us today, but it was a massive conceptual leap at the time.

One might think that Darwin's discovery contradicts my thesis, that all learning happens in small steps. The theory of natural selection feels like a radical breakthrough, the kind of idea that arrives fully formed in a single stroke of genius. Fortunately, Darwin kept incredibly detailed notebooks during the time he was formulating his theory, and those notebooks give us a window into how the discovery happened.

The truth is, there was no eureka moment. Darwin's insights emerged slowly, over years and hundreds of pages of writing. He built on contemporaries like Thomas Malthus, whose essay on population growth sparked Darwin’s thinking about the struggle for existence, and Jean-Baptiste Lamarck, who had earlier suggested that species might change over time. Darwin combined those influences with his own painstaking observations on the Galápagos Islands and elsewhere.

His journals show him wrestling with the pieces that became his grand theory, tiptoeing around an idea that seems obvious today. Reading them, the surprise is that he didn’t discover evolution sooner. The discovery of evolution by natural selection was not a single leap, but a staircase of small steps that added up to something remarkable.1

Classrooms

My theory is simple. There are no large conceptual leaps, no eureka moments, no huge realizations. There is the slow, steady accumulation of learning. It’s not as flashy as big a-ha moments, but there’s magic in getting all the pieces to fit together in the right way.

There are a lot of corollaries of the idea that all learning happens in small steps. Here are a few:

  • The most important variable in student learning is what they already know. Each small step builds on one or several steps that came before it. The best thing I can do to help students learn something is figure out what that learning builds off of and make sure the foundation is secure. If I'm teaching two-step equations, give students some practice with one-step equations. If I'm teaching angle theorems, make sure students have a good intuition for the size of angles.

  • Students often arrive in class with experiences or intuition that academic learning can build off of. Everyday experiences with numbers, proportions, unknowns, and more can act as extra building blocks, and using those building blocks smooths the learning process.

  • When students appear to pick something up quickly or make a big leap, it's typically because they already know a lot about a topic. Maybe one student comes in knowing a lot about tips and discounts, and knows what increase and decrease mean. To a teacher it seems like they pick up percent increase and decrease problems quickly. Really, they already knew a lot about the topic. They're still learning one small step at a time, they've just already taken a bunch of the steps. It might be tempting to watch that student and wish that every other student could make the same leaps. But another student who speaks English as a second language, doesn't know what increase and decrease mean, and isn't familiar with tips needs you to guide them through each of those little steps along the way. The best way to facilitate an a-ha moment is to give students all the pieces they need to make the leap a small, manageable step.

  • If I try to teach something that isn't connected to the step before it, students will struggle to remember and apply it in the future. If I try to bash that piece of knowledge into their brains without making any connections, it ends up as inert, rote knowledge. This is what people often mean when they talk about memorization. Avoid teaching like this. Get students thinking about the connections between ideas, and make sure each new idea is connected to something students already know.

  • If I push students to take steps that are too big, they will need much more practice for the learning to stick. When the steps are small and all fit together, students need less practice. Practice is still important. I’m a big believer in practice. But when I get small steps right, students need a few rounds of practice with a handful of questions each, and we can keep moving. I don’t want to spend time grinding through long rounds of practice and still finding that students struggle to remember.

  • Trying to teach too many things at once leads to fragile learning. From the teacher's perspective, it might seem obvious that if you know A, then B must be true, and B leads to C, and then D. But students need time to consolidate each step along the way, and rushing means the whole thing will fall apart. It might seem in the moment like students are following along, but trying to teach A → B → C → D in one go just means you'll show up to class the next day and realize you need to start at B again. Or, more accurately, the students who already knew A, B, and C remember everything, while the students who didn’t feel lost. Take one step at a time, practice and consolidate, then take the next step.

  • Learning builds on what students already know, so if students arrive with misconceptions they need to be addressed directly or learning will reinforce those misconceptions. Since learning happens in small steps, misconceptions that become ingrained are hard to change. Avoid rules that expire, build the foundation slowly, and address misconceptions directly before moving on.

An Architect

I don’t have any proof. This is just a theory. But it’s a theory that makes a lot of sense to me, fitting both my experience as a teacher and my knowledge of how learning works. Most importantly, the idea that all learning happens in small steps is actionable. If I see my students struggling, I have a few clear ways to respond. I can slow down and make sure each step is secure before moving on. I can step back and break the learning down into smaller steps. I can find ways for students to see how what they are learning is connected to something they already know.

I also think focusing on small steps changes how we look at teachers and teaching. The assumption that teaching is about creating a-ha moments creates an image of the ideal teacher as a passionate, inspiring storyteller who dazzles students with big ideas. Through the lens of small steps I think of a great teacher more like an architect. Someone working slowly and patiently behind the scenes, making sure the pieces fit together. Maybe no one will ever appreciate that work but it makes the whole structure strong and lasting. The architect’s work isn’t always flashy, but it endures. In the same way, great teaching often means carefully designing a sequence of small steps so that understanding holds up years later, even if students never remember exactly how it was built.

1. The book Darwin on Man is a great tour of Darwin’s discovery and all of the thinking that led to it.




How a Lafufu in a shot glass sent me down a dark path today


The U.S. Consumer Product Safety Commission (CPSC) on Monday issued an “urgent safety warning to Labubu collectors.”

If you don’t know what a Labubu is, oh, boy, do you have fun in store for you when you search around, but, for our purposes, just know that it’s a collectible that many people covet, leading to a number of fakes (commonly known as Lafufus).

I laughed, because it is objectively funny, especially when I saw the Lafufu in what looked like a shot glass — but then became confused because Labubus are bigger than that.

I saw that this was about “[m]ini versions,” though, and got it. OK, we’re dealing with tiny Lafufus.

Then, I got confused again as I continued reading the caption and saw that the shot glass is actually “the small parts cylinder.” More on that to come.

So, what is the warning?

Now it made sense. Thanks, CPSC.

But, the small parts cylinder got my brain spinning, so I decided to do a little research.

That’s when I learned about the “small parts ban,” care of the CPSC website:

Incredible. The shot glass, aka the small parts cylinder, is actually an attempt to prevent children from choking to death on toys. And there is, apparently, a figure!

At that point, I had no choice but to click through to the Code of Federal Regulations:

§ 1501.4 Size requirements and test procedure.

(a) No toy or other children's article subject to § 1500.18(a)(9) and to this part 1501 shall be small enough to fit entirely within a cylinder with the dimensions shown in Figure 1, when tested in accordance with the procedure in paragraph (b) of this section. In testing to ensure compliance with this regulation, the dimensions of the Commission's test cylinder will be no greater than those shown in Figure 1. (In addition, for compliance purposes, the English dimensions shall be used. The metric approximations are included only for convenience.)

(b)(1) Place the article, without compressing it, into the cylinder. If the article fits entirely within the cylinder, in any orientation, it fails to comply with the test procedure. (Test any detached components of the article the same way.)

(2) If the article does not fit entirely within the cylinder, subject it to the appropriate “use and abuse” tests of 16 CFR 1500.51 and 1500.52 (excluding the bite tests of §§ 1500.51(c) and 1500.52(c)). Any components or pieces (excluding paper, fabric, yarn, fuzz, elastic, and string) which have become detached from the article as a result of the use and abuse testing shall be placed into the cylinder, one at a time. If any such components or pieces fit entirely within the cylinder, in any orientation and without being compressed, the article fails to comply with the test procedure.

Then, there it was — the small parts cylinder:

Amazing that the government figured that out. Tell me more!

Luckily, in the CFR, there is more.

There it was. The Federal Register from June 15, 1979 …

… was the place where the requirements, the test procedure, and Figure 1, the small parts cylinder, were all laid out.

The provision was submitted by Sadye Dunn, then-secretary of the CPSC; the rule went into effect on January 1, 1980; and the CPSC has been enforcing it since.

Why am I telling you all of this on a Monday in August?

President Donald Trump purported to fire three of the five CPSC commissioners in May — despite a legal provision only allowing their firing “for neglect of duty or malfeasance in office but for no other cause.”

Although lower courts ruled that Trump had likely violated the law and an injunction was in place keeping the three — Mary Boyle, Alexander Hoehn-Saric, and Richard Trumka Jr. — in their roles during litigation, the U.S. Supreme Court’s majority stopped that in its tracks.

Under the Supreme Court’s unsigned order, the three are gone from the CPSC during litigation — and, likely, gone from the CPSC, period. It was a stark order that ignored longstanding precedent and that I criticized harshly at the time because the conservative justices’ action, ultimately, was nothing less than a partnering up with Trump in his lawlessness.

As such, the commission is left with only two appointees: Peter Feldman, who is the acting chair, and Douglas Dziak, whose term expired nearly a year ago (on October 27, 2024). Under federal law, not that Trump is otherwise following it, a CPSC commissioner can continue to serve for up to a year after their term expires, until a replacement is named and confirmed. So, Feldman could be alone come November.

In the midst of all of that, there’s more.

The CPSC is trying to eliminate itself.

Truly. In addition to asking that a big chunk of its budget be cut, its budget request stated:

The FY 2026 President’s Budget proposes to reorganize the functions of the Consumer Product Safety Commission and embed it in the HHS Office of the Secretary as the Assistant Secretary for Consumer Product Safety (ASCPS). Contingent upon enactment of authorizing legislation, CPSC accounts will be transferred to the U.S. Department of Health and Human Services.

The CPSC’s position — and Trump’s position — is that HHS Secretary Robert F. Kennedy Jr. should be given more authority for overseeing America’s efforts to keep kids from dying unnecessarily.

The Supreme Court helped that along.

Just think, we almost made it 50 years with an agency that was dedicated to enforcing a rule protecting kids from choking to death on toys.

But Trump fired the majority of the CPSC commissioners, his budget would gut the commission, and he wants to eliminate it altogether and send its “functions” to the deadly secretary of HHS.

Sorry, Sadye.



Tuesday at Law Dork

Heads up!

At 2:30 p.m. PT/5:30 p.m. ET Tuesday, I will be having a Substack Live chat with California’s attorney general! We’ll be discussing the attorney general’s extensive efforts to challenge the Trump administration’s actions, the successes and challenges his office has faced in doing so, and so much more.

Tune in!

And, don’t fret, for those who miss it live, I’ll be posting a video and transcript later.





We accidentally built the wrong internet


Imagine a simple tool for the internet, built from scratch for today's world. One app. One thing you own. It proves who you are and lets you pay for things all in one place. Something only you control. No middlemen. No passwords. No credit cards.

When a website wants to know it's really you, you don't type anything. You just tap "Yes", like unlocking your phone with your face. That tap is a silent, secure confirmation that you're you, but no one learns anything about you, and nothing gets stored or stolen.

When you want to buy something? Same thing. One tap. It's like handing over cash, but digital, no forms, no card numbers to fill out, no companies keeping your payment info forever.

One tool. One tap. Works everywhere. Secure by design. Built for people, not corporations.
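Under the hood, that "tap Yes" is essentially challenge-response signing: the site sends a random challenge, your device signs it with a key that never leaves the device, and the site checks the signature against a public key it saw once at registration. Here is a minimal sketch of the idea in Python, assuming the third-party cryptography package is installed; the names and flow are illustrative, not any particular product's protocol.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device generates a keypair; only the public half is shared.
device_key = Ed25519PrivateKey.generate()        # stays on the device
registered_public_key = device_key.public_key()  # stored by the website

# Login: the site issues a one-time random challenge...
challenge = os.urandom(32)

# ...and the "tap" is the device signing that challenge.
signature = device_key.sign(challenge)

# The site verifies the signature. No password, card number, or other reusable
# secret ever crosses the wire, and a breached server database reveals nothing.
try:
    registered_public_key.verify(signature, challenge)
    print("Logged in: the tap proved possession of the device key.")
except InvalidSignature:
    print("Rejected.")

A payment works the same way in this model: what gets signed is a transaction rather than a login challenge.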


Now, let's engage in a thought experiment. Imagine this elegant system doesn't exist and we're gathered in a conference room in the late 1990s, designing the identity layer for the web. A rogue engineer stands up and says: "I've got a better idea."

"Okay, instead of giving users a tool they control, we'll make them rely on a centralized third-party account: an email address, hosted by some company somewhere. Those companies could read your messages, shut you out anytime, and track where you go online. That email will be your username for everything because it's easy to remember and people get to choose how their email looks."

"For proving who you are, we'll use passwords, just some words or symbols people have to remember. But people suck at remembering dozens of strong ones, so they'll reuse the same password everywhere. To fix that, we'll need another app called a password manager, just to survive our own bad idea."

"Is it more secure?" someone asks.

"No. It's worse. Every website will now store millions of passwords or hashes of these in giant databases. Hackers will constantly break in and steal them. These leaks will happen weekly. We'll just accept it as normal and have websites to look up if you were hacked lol."

"Okay... Is it more private?"

"Absolutely not. It's a privacy catastrophe. Every login, every action, is logged in a bunch of disconnected data silos, all tied back to a single corporate owned identifier. The user leaves a trail of digital exhaust everywhere they go: an exhaust that is immensely profitable to collect and sell. Your whole digital life gets chopped up and sold. Your identity becomes a product."

"Surely it's simpler to build and use?"

"Not a chance. To patch the glaring security holes, we'll have to bolt on more components. We'll need a two-factor authentication system that sends codes to a completely separate device. We'll need annoying CAPTCHAs to prove users aren't robots. We'll need convoluted 'forgot password' email loops, which themselves become a prime vector for account takeovers. It's a Rube Goldberg machine of trust, with dozens of failure points: the email provider, the service's database, the password manager, the 2FA app, the SMS gateway…"

He pauses. "And here's the worst part: after all that hassle just to log in… you still can't pay for anything!"

The room is silent.

"That whole system & all that complexity does nothing for buying stuff. To pay, you'll still need to find a physical card, type in 16 numbers, an expiry date, a security code, all into a web form. But don't worry we'll ask everyone to use HTTPS so they feel secure. That info gets then passed through banks, payment processors, networks each taking a cut, each adding risk. And if one of them messes up? Your card gets cloned. Your money's at risk."

"So, what is the upside to this… system?" someone finally asks.

"Well", the engineer says, "it lets people sign up for free stuff they'll never use again without needing money upfront."

That's it. That's the grand benefit. For this single, narrow edge case, we created an insecure, privacy-invasive, and breathtakingly complex architecture that divorced identity from commerce and burdened the entire digital world with friction and risk.

Here's the truth though: the gravity of convenience is the most powerful, irrational force in the world. A better system doesn't win by being better, it wins by being lighter because people will trade a pound of their sovereignty for an ounce of convenience, every single time.

A better system doesn't win by being smarter. It wins by being simpler first. Then better. Then both.

So why are we still stuck with this mess?

The honest answer is that the user experience of the alternative has, until now, been a steep cliff rather than a gentle on-ramp. The ethos of "be your own bank" came with the terrifying corollary of "be your own high-stakes security expert". Managing seed phrases, understanding gas fees, and navigating the unforgiving finality of transactions created a barrier to entry that was simply too high for the mainstream user. The comforting safety net of a "forgot password" link remained preferable to the catastrophic potential of a lost hardware wallet or a misplaced 12-word phrase.

That's terrifying. No wonder most people chose the flawed but familiar: a password, a reset link, a bank that might help if things go wrong.

But that's changing.

The new tools aren't asking you to become a tech expert. They're building the power of ownership into things that feel normal, like unlocking your phone with your fingerprint or approving a payment with a tap. Features like social recovery (letting trusted friends help you regain access), smart wallets (that work like apps, not crypto dashboards), and passkeys (using your phone or face instead of passwords) are making secure, self-owned identity actually easy.
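To make one of those features concrete, here is a toy sketch of social recovery, again in Python with the assumed cryptography package: the owner chooses guardians in advance, and a threshold of their signatures can approve rotating the account to a new key if the original device is lost. Real smart wallets do this with on-chain logic or secure hardware; the code only illustrates the shape of the idea.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Setup: three trusted guardians, any two of whom can approve a recovery.
guardians = [Ed25519PrivateKey.generate() for _ in range(3)]
guardian_pubs = [g.public_key() for g in guardians]
THRESHOLD = 2

def recover(new_key_bytes, approvals):
    """approvals: list of (guardian_index, signature) over the proposed new key."""
    approved = set()
    for index, signature in approvals:
        try:
            guardian_pubs[index].verify(signature, new_key_bytes)
            approved.add(index)  # each guardian counts only once
        except InvalidSignature:
            pass  # ignore bad or forged approvals
    return len(approved) >= THRESHOLD

# The owner lost their device, generates a replacement key, and asks two
# guardians to sign off on switching the account to it.
new_key = Ed25519PrivateKey.generate()
new_key_bytes = new_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
approvals = [(0, guardians[0].sign(new_key_bytes)),
             (2, guardians[2].sign(new_key_bytes))]
print(recover(new_key_bytes, approvals))  # True: access restored, no seed-phrase heroics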

The goal is to make the right thing the easy thing, not make everyone a crypto expert.

In any rational design meeting, that engineer would've been laughed out of the room.

Yet here we are, living in the world he described.

The good news? We don't have to stay here.

Of course, many in the mainstream tech world still see this kind of technology as tainted by its association with hype, scams, or volatile cryptocurrencies. They dismiss it all as "blockchain stuff", a knee-jerk reaction that throws the baby out with the bathwater. But the core idea is about ownership, privacy, and finally giving users a secure digital "self" that works as seamlessly as the rest of the web.

The internet was built without a native way to prove who you are and move value securely. But now, for the first time, that missing piece (a cryptographic signer; for example, Polkadot Vault) finally starts working like the rest of the web: simple, fast, and yours.


Why AI and “helpful” tech are making us helpless


Three months to forget

A recent study in The Lancet sparked a rabbit hole of research about the longer-term effects of leveraging technology in our day-to-day lives. In this study, doctors used AI to spot precancerous growths during colonoscopies, seeing a marked improvement in detections (yay!). Then researchers took the tool away. After only three months of regular AI assistance, the doctors’ unassisted detection rates fell from 28% to 22%, a six-percentage-point decline, or roughly a 20% relative drop, in a life-and-death domain.

What bothered me most wasn’t the number. It was the mechanism. The doctors didn’t start using AI because they felt less capable; their skills declined because they were using AI. The tool slowly took over the mental work, and the muscle memory faded. Years of expertise dulled in twelve weeks.

If we zoom out, we know for a fact it’s not just medicine. We are running miniature versions of this experiment every day.


The GPS effect

I’ve driven to certain places in Atlanta dozens of times. (If you’ve had the pleasure of driving in Atlanta, you know how chaotic it can be and how important it is to have alternate routes.) On occasion, my GPS will go awry, and I suddenly can’t remember which street I was supposed to turn on or which exit to take from the roundabout. It isn’t charming. It’s a little scary (and a little embarrassing). I can eventually find my way out, but it can sometimes take a moment for me to regroup.

That isn’t just me being “bad with directions.” Habitual GPS use is linked to decline in hippocampal-dependent spatial memory. Neuroscientists say we become “passive passengers rather than active explorers” when we let the blue line on the screen do the deciding for us. The less our brains need to build mental maps, the less they bother.

Navigation is the obvious example because we all feel it. But this “I can’t do it without the tool” reflex is spreading everywhere.


The great forgetting

  • Engineering. I hear this from developers all the time: after a few months of leaning on AI coding assistants, debugging instincts get dull. One engineer summed this up well: “I’ve become a human clipboard, shuttling errors to the AI and pasting solutions back into code”. When you stop tracing the problem yourself, you stop seeing the patterns.

  • Writing. Ask most of us to handwrite more than a grocery list and what comes out looks like cave drawings. Years of typing will do that, but the cognitive cost is real. Handwriting isn’t just about neatness; it supports memory and idea formation.

  • Math. Calculators used to be for the hard stuff. Now they do the easy stuff too, and students miss obviously wrong outputs because they have lost the gut check.

Researchers even have a name for this pattern: AI-chatbots-induced cognitive atrophy (AICICA), the deterioration that comes from over-reliance on chatbots and similar tools. The label sounds dramatic until your battery dies and you realize you can’t navigate, can’t spell “definitely,” and aren’t totally sure how to approach the bug without autocomplete holding your hand.




Why this fuels burnout

We bring tools in to reduce cognitive load so we can “focus on higher-level work.” Good intention. Wrong model. Abilities do not sit in cold storage while a tool handles them. They weaken with disuse. The less you do a thing, the harder it becomes to do that thing, and the faster your confidence plummets.

That was another finding from the colonoscopy study: once the AI was gone, doctors felt less motivated, less focused, and less responsible for their own decisions. It wasn’t only a performance dip. It was a psychological one.

Now add modern work. Every outage, wrong answer, or broken integration turns into a crisis because we no longer trust our own skills to backstop the tool. What could have been a small inconvenience becomes a spike of panic. This is the problem: we don’t just lose competence. We start to feel incompetent. That feeling is gasoline on the burnout fire.


From helpful to helpless

There’s a name for what happens when that panic becomes a pattern: learned helplessness. If you’re repeatedly in situations where you don’t control the outcome, eventually you stop trying, even when you could succeed.

When the default becomes “let the tool do it,” two pillars of mental health crack:

  • Self-efficacy, the belief that you can figure things out.

  • Locus of control, the belief that your actions influence outcomes.

Undercut those, and you don’t get peace. You get anxiety. That’s what researchers call technostress—stress driven by the demands and failures of technology at work—and it correlates with higher burnout. Studies of remote workers during the pandemic linked techno-stressors directly to burnout, which then predicted depression and anxiety symptoms.

You can probably name your own triggers:

  • Your phone dies or you lose cell signal and you cannot navigate.

  • Spell-check breaks and basic words suddenly look wrong.

  • The AI insists on a fix you can’t validate, and you realize you don’t remember how to validate it.

Each one is a reminder that the tool holds more of your capability than you do.


What this looks like across jobs

Education research on preservice teachers found that AI dependency had significant negative effects on problem-solving, critical thinking, creative thinking, and self-confidence. These are the people preparing to teach the next generation.

In tech, where AI adoption is furthest along, nearly half of workers report depression or anxiety. It isn’t only the pace or the ambiguity. It is the cognitive dissonance of being told to be innovative while outsourcing the very skills innovation requires.

That is the double bind: use the tools or fall behind, but risk losing the capabilities that made you valuable in the first place.


What to do instead: make AI a collaborator, not a crutch

Throwing the phone in a lake and writing code on a typewriter is not the move. The goal is conscious competence: use tools to extend your reach while keeping your core skills alive.

A few practices that actually help:

Practice deliberate difficulty. Do things the harder way on purpose, regularly. Drive to a familiar place without GPS. Sketch your outline longhand before drafting. Try a debugging session without code suggestions. It is not a test of purity. It is maintenance for the neural circuits you want to keep.

Build tolerance for uncertainty. Resist the reflex to Google or ask the AI the second a question pops up. Sit with the not-knowing for a few minutes. Form a hypothesis. Then check yourself. Uncertainty tolerance is part of the job for any kind of creative or technical work.

Audit your dependencies. For a week, notice the moments you reach for a tool out of habit. Ask, “Could I do this without assistance?” If the answer is yes, try it. If the answer is no, put that skill on your practice list.

Protect the basics. Handwriting, mental math, and spatial navigation pay cognitive dividends well beyond the tasks themselves. These aren’t nostalgic hobbies. They are insurance policies for your brain. (Journaling is a great habit to have for your mental health!)

Create “manual mode” drills. Have a standing block where you operate without the usual scaffolding. For engineers, that might be reading logs and tracing data flow by hand before asking an assistant. For writers, drafting a page without autocomplete and then running edits. For managers, diagnosing a team issue without polling Slack or dropping it in an AI prompt.

Design for outages. If you lead a team, treat tool failure like a fire drill. How do we ship if the assistant is down? How do we make decisions if our analytics dashboard is broken? Write the checklist. Practice it. The point is confidence: “We can still do the work.”

Use the tool, keep the skill. Pair AI with explicit skill-keeping. If you accept an AI suggestion, explain to yourself why it works. If you take a generated outline, rewrite the structure in your own words before you draft. The tool accelerates the start; you keep the thinking.


Where this leaves us

Education researcher Amy Ko puts a fine point on it: unlike calculators, which extended math fluency, many AI tools “supplant thought itself,” short-circuiting the struggle where writing, synthesis, and reasoning live.

The fear is not that AI becomes sentient. The fear is that humans become inert.

The good news is atrophy is not permanent. Doctors can retrain their pattern recognition. We can rebuild navigation instincts, regain math fluency, and remember how to debug without autocomplete. It takes intention and a little discomfort. That’s it.

Real efficiency isn’t outsourcing every ounce of effort. Real efficiency is choosing where effort matters most and keeping our most human skills sharp.

Your GPS might know the fastest route. Do you still know the way home?


Prescriptive Practices


Math teacher Michael Pershan wrote an excellent newsletter this week, and I'd like to start there rather than with the ubiquitous stories about the underwhelming roll-out of OpenAI's latest GPT.

Michael's insights are interesting and important; OpenAI, not so much – unless you want to talk about the future imagined by techno-oligarchy, the petrochemical industry, and monopoly capitalism which we should, of course, but yuck. So let's talk math instruction and how, as Michael titles his piece, "practice software is struggling," "flailing around, complaining that people aren’t using it right. They’re trying to tackle one of the harder parts of teaching, and while I get what they’re going for, their solutions actually make it worse."

(Sort of like generative "AI" perhaps, and I sure do hear something similar from a lot of its supporters: you're doing it wrong, you're prompting it wrong. And to borrow again from Michael's tongue-in-cheek assessment of personalized learning software, about 5% of the time, it works every time. And it's only gonna get better – soon, maybe it'll always work 6 or 7% of the time.)

Michael makes a good case for focusing on making group instruction better instead of adopting the "individualization" promised by technology companies – many of which are now rebranding their software from "personalized learning" or "adaptive learning" to "AI," of course of course of course. And I love this, in no small part, because the evangelists for sticking children in front of computers all day to click their way alone to "mastery" – Sal Khan and Bill Gates, most famously – always decry group instruction as the worst possible thing that education can do. It's so inefficient, they shudder. It's so anti-individualistic and as such, (implied) deeply unamerican. It held them back, they whine, seething with resentment against teachers (women often). Instead of getting therapy, they get venture capital. But I digress...

Michael explains how he hands out individual whiteboards to students; he writes a practice problem on the big board for students to solve on their own. He asks them to lift up their boards and show him what they've done.

“That’s pretty good. You all seemed pretty confident. Everybody wipe your boards. Let’s try another one like that.” 
Why another one like that? Maybe because one kid got the previous question wrong — I want to give him another shot. Maybe I just want everyone to have a few wins before moving on. I get to decide.
...This is dynamic. Depending on how students answer, I’ll change the questions they’re served. Look at me—I’m the algorithm. And I’m getting an enormous amount of information from the kids, though thank god there’s no teacher dashboard. I can see the “data” directly and simply. It guides my instruction.

The tech industry's focus on personalized learning – their promises, their efforts decades and decades and decades old now to make it better – is misguided. "We shouldn’t be going all-in on kids learning on their own," Michael concludes. "We should be trying to figure out how to make whole-group learning even better."

Learning is social after all. And while Michael is specifically talking about students understanding algebraic equations, there are other lessons – crucial lessons – that are imparted in these group experiences.

Ursula Franklin spoke of something akin to this in her Massey lectures, delivered in 1989 and collected in The Real World of Technology, warning decades ago that the "personalization" and isolation of software was damaging – to education and to democracy.

Whenever a group of people is learning something together two separate facets of the process should be distinguished: the explicit learning of, say, how to multiply and divide or to conjugate French verbs, and the implicit learning, the social teaching, for which the activity of learning provides the setting. It is here that students acquire social understanding and coping skills, ranging from listening, tolerance, and cooperation to patience, trust, or anger management. In a traditional setting, most implicit learning occurred "by the way" as groups worked together. The achievement of implicit learning is usually taken for granted once the explicit task has been accomplished. This is no longer a valid assumption. When external devices are used to diminish the need for the drill of explicit learning, the occasion for implicit learning may also diminish.

Because of the ways in which new technologies encourage work to be done alone and asynchronously, Franklin argued, there would be fewer and fewer places to actually develop society. "...[H]ow and where, we ask again, is discernment, trust, and collaboration learned, experience and caution passed on, when people no longer work, build, create, and learn together or share sequence and consequence in the course of a common task?"

We are dismantling our shared future – quite literally, quite explicitly – by embracing the ideology and the practice of the technology industry, one that promises radically individualized optimization but that is predicated on prediction, prescription, and compliance.


Men explain GPT 5 to you: "GPT-5 is alive," says Casey Newton, who a few days later offers "Three big lessons from the GPT-5 backlash". "GPT-5 is a joke. Will it matter?" Brian Merchant asks. "The New ChatGPT Resets the AI Race," according to Matteo Wong. "GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it," Gary Marcus pronounces. "OpenAI Scrambles to Update GPT-5 After Users Revolt," Will Knight reports. Also Will Knight: "GPT-5 Doesn't Dislike You — It Might Just Need a Benchmark for Emotional Intelligence." (Standardized testing for machines – after all, it's given us such a rich history of ranking humans.)

Jay Peters reports from the GPT-5 launch event that "OpenAI gets caught vibe graphing." I've been trying to build up an argument (I daresay, a chapter) that the "productivity suite" of software has shaped how we think – the old "spreadsheet way of knowledge" thing. It has shaped how we demonstrate our thinking to others – the ubiquitous PowerPoint presentation. So what happens if generative "AI" takes over these specific tools then? Is it this "vibe graphing" nonsense?


The response – the emotional response – to the new OpenAI model is noteworthy, with some users expressing, as James O'Sullivan observes, not just disappointment but "genuine loss ... the kind typically reserved for relationships that actually matter."

People are not in "relationships" with their machines, although that is the delusion that is actively being sold to them – a way to re-present and heighten the behavioral nudges for incessant clicking and scrolling and staring. As Kelly Hayes writes, "Fostering dependence is a normal business practice in Silicon Valley. It’s an aim coded into the basic frameworks of social media — a technology that has socially deskilled millions of people and conditioned us to be alone together in the glow of our screens. Now, dependence is coded into a product that represents the endgame of late capitalist alienation: the chatbot. Rather than simply lacking the skills to bond with other human beings as we should, we can replace them with digital lovers, therapists, creative partners, friends, and mothers. As the resulting psychosis and social fallout amassed, OpenAI tried to pump the brakes a bit, and dependent users lashed out."

See also: "AI as normal technology (derogatory)" by Max Read. ""The AI boyfriend ticking time bomb" by Ryan Broderick. And one of the many many reports this week about chatbot-triggered delusions, hospitalizations, obsessions – at some point, we are going to have to admit that these aren't anomalies. "The purpose of a system is what it does," Stafford Beer famously said. Look what AI does, and try tell me that it's purpose is not to smash democracy, monopolize power, and create complete and total dependency among its users.


Just Zuck doing Zuck things: "Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info." "Meta just hired a far right influencer as an 'AI bias advisor'."

Well, at least he's not funding school stuff anymore, right?

Oh. This just in: "Zuckerberg's Compound Had Something that Violated City Code: A Private School."


Silicon Valley and Stanford University have long been at the center of the eugenics movement in the US. What we're seeing now is not some new or sudden lurch rightward. From The Wall Street Journal this week: "Inside Silicon Valley’s Growing Obsession With Having Smarter Babies."

You cannot separate the push for artificial general intelligence from the push to IQ test embryos (and the push to incarcerate and deport anyone not white).


“Go outside” has been quietly replaced with “Go online.” The internet is one of the only escape hatches from childhoods grown anxious, small, and sad. We certainly don’t blame parents for this. The social norms, communities, infrastructure, and institutions that once facilitated free play have eroded. Telling children to go outside doesn’t work so well when no one else’s kids are there.

-- Lenore Skenazy, Zach Rausch, and Jonathan Haidt, "What Kids Told Us About How to Get Them Off Their Phones"


The latest PDK poll, according to The 74, finds Americans' confidence in public education at an all-time low. Surprise sur-fucking-prise. I mean, I think Naomi Klein was right when she described the machinations of disaster capitalism back in 2007; I'm just not sure, after decades of austerity, that we can really call it "shock doctrine" as it's become so utterly commonplace.

The survey also found that two-thirds of Americans oppose closing the Department of Education.

People's opinions do not matter to an authoritarian regime – and that regime includes both the Trump Administration and the technology industry.

In the latest episode of the This Machine Kills podcast, host Edward Ongweso Jr. talks with Brian Merchant and Paris Marx about "Whose AI Bubble Is This Anyways" and among the points they make is that there is no big consumer demand for generative "AI." But as with the PDK poll, the folks in power just shrug.

There is, of course, a big push by the industry to insert "AI" into every piece of software that consumers use; and there is a growing push to chase Defense Department contracts as the military, contrary to the austerity that has schools struggling, is unencumbered by financial responsibility or restriction.

What's propping up "AI" is not "the people." It's the police. And it's the petroleum industry.

As such, when I hear educators insist that "AI" is the future that we need to be preparing students for, I wonder why they're so willing to build a world of prisons and climate collapse. I guess they identify with the oligarchs, or perhaps they believe that they're somehow going to live above the destruction.

"The AI Takeover of Education Is Just Getting Started," Lila Shroff writes in The Atlantic. My god, the whole "there's no turning back" rhetoric is just so embarrassingly acquiescent to these horrors.

I mean, if nothing else, look: there is turning back. Why, just this week, "South Korea pulls plug on AI textbooks."

“Your opponents would love you to believe that it's hopeless, that you have no power, that there's no reason to act, that you can't win. Hope is a gift you don't have to surrender, a power you don't have to throw away.” – Rebecca Solnit

There is always hope.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber as this is my full-time job and your support enables me to do this work. A little scheduling note: on Mondays, I send a more personal version of this newsletter. You can opt in to that by clicking on the Account button at the top of this page. I'll have another essay for paid subscribers on Wednesday – just a couple of paragraphs of it for free subscribers.


Man convinced of genius by chatbot


In what now seems like a tale as old as time, a man grew convinced that he had untapped mathematical genius, with the help of ChatGPT. But 90,000 words later, it seems that might not be the case. For the New York Times, Kashmir Hill and Dylan Freedman evaluated Allan Brooks’ very long chat.

This is going to keep happening, and it’s probably going to get worse until people realize that the chatbot is not thinking. It’s a product of statistical convergence. The “delusions” are computer errors. Please stop pretending the chatbots are people.

