
Secret Agent Man


Some weeks, the education technology news is incredibly grim, and sorry to say this was one of those weeks. (Warning: this is a long email.) Indeed, anytime education- and child-related tech stories fill a Garbage Day newsletter as they did on Monday -- Garbage Day describes itself as a publication that “doomscrolls so you don’t have to” -- it’s just not a good sign.

Several of those stories fell under the subheader “Roblox, OpenAI, The New Web, And Radicalization,” including The Wall Street Journal’s coverage of the internal debate at OpenAI on whether or not to alert Canadian authorities about what eventual school shooter Jesse Van Rootselaar had been typing to ChatGPT. (OpenAI banned Van Rootselaar from the platform but did not alert police.) And 404 Media wrote about how Van Rootselaar had created a shooting simulator inside Roblox, a video game very popular with young people.

Garbage Day’s Ryan Broderick argues that

...both ChatGPT and Roblox are not traditional social platforms. We are very used to the social media wild goose chases that happen after mass shootings, where users scour public platforms for content that might provide some kind of insight into why the attack happened. The unspoken hope being that if we had just caught it in time, things may have been different. To say nothing of all the would-be attackers that are reported to law enforcement in time because of their Facebook or X posts. But apps like ChatGPT and Roblox are not simple feed-based platforms. They are far more reactive and personalized and we are quickly discovering how hard they will be to moderate.

Later in the week, Vulture published a Q&A with the head of Parental Advocacy at Roblox, who (no surprise) says “we’re all responsible” for kids’ online safety.

No need to worry. No need to regulate. No need to hold the company -- this company, that company -- responsible. We just need better “digital literacy.”


“Literacy.” Honestly, that word is getting to be some real bullshit. Often, what’s framed as a “literacy” problem is actually the technology working by design, urging us to be compliant clickers.

But damn, “literacy” is such a friendly way to frame training and branding exercises. It sounds so progressive, so eminently philanthropically fundable: Digital literacy. Web literacy. Coding literacy. AI literacy. Gambling literacy.

[Record scratch.] Wait what? You haven’t heard of the latter?

Yeah, apparently some folks [cough] are trying to make it “a thing” – and what with the rise of sports gambling and prediction markets, I think we can see what the next ed-tech trend will be. At least Edsurge, bless their hearts, tried to make the case for gambling literacy this week: “The Math Skill Schools Should Teach — Gambling.”

When I texted a friend with a link to the article and my very savvy commentary “what. the. fuck,” I learned that the Alliance for Decision Education exists, its founder a former professional poker player. So that's something to look forward to once education inevitably pivots away from "AI" (as it did with MOOCs and adaptive learning and every other ed-tech trend ever).


Speaking of literacy-laundering, on Monday morning The 74 came out with “the exclusive”: “New Google Partnership a ‘Sizable Investment’ in AI for Teachers” – that is, a three-year deal between ISTE+ASCD and Google to “offer AI training to ‘all six million K-12 teachers and higher education faculty’” in the U.S. (Or as Ben Riley wryly put it, “Google and ISTE+ASCD announce new partnership to destroy US education.”)

This "sizable investment" (an undisclosed amount) will flow into ISTE+ASCD under the guise of "AI training" and “AI literacy,” the latter of which, as MIT’s Justin Reich told The 74, is a phrase that doesn’t really have an agreed-upon meaning, let alone being a thing with any substantive research supporting its application. (As Justin has argued elsewhere, we got “web literacy” really really wrong for a long time, and we miseducated a couple of generation of students as a result. So why exactly are we rushing into this whole “AI literacy” thing? I mean, other than the obvious grift, of course.)

Interestingly, the Pew Research Center released some survey data this week on teens’ use and views of “AI,” and somehow, without schools providing adequate (or any) “AI training," more than half of them are using “AI” to do their homework. Why, it’s almost as if chatbots are just another in a long line of consumer-facing technologies that, like posting on Facebook or watching YouTube, don't actually require any special courses or classes.

To be clear, when Google (or OpenAI or Anthropic or Microsoft or whoever) says they’re offering teachers and/or students “AI training” (let alone promoting “AI literacy”) what they’re really doing is brand marketing. This is simply an effort to get more users to outsource their thinking to their particular product.


Perhaps what we need is not technology training but technology un-training – the former is cognitive surrender; the latter will be the only way we can actually pursue learning. Of course, what anti-democratic billionaire technoauthoritarian would ever pay for that?


“What’s the Point of School When AI Can Do Your Homework?,” asks Matthew Gault in 404 Media. Arguably, one point might be to help people not ask such stupid fucking questions.

The story covers “a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions.”

And I know that we’re all supposed to freak out about this stuff and wring our hands and teachers are supposed to sign up for the “AI” training programs so that they can be “AI” ready and engineer “AI ready” classrooms and churn out “AI ready” students, but my god. This is bullshit. Einstein is bullshit. It’s a scam, a fraud, a con, a grift.

I mean, yes, it’s bad. The promise of a magic button that will do your work for you is bad. This is bad: the founder who says

agentic AIs are a method of freeing people from the labor of education. ‘I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us,’ he said. ‘We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?’

But also: agentic AI cannot do all that, most of the time not at all, and even if it can, certainly not consistently – and that’s despite all the ways in which universities have sought over the past few decades to instrumentalize and standardize everything, primarily through the terrible technology of the learning management system. (Einstein claims to complete coursework hosted on Canvas.) Agentic AI cannot do this because all classes are different and all instruction is messy and no two professors, even those in the same department, teach the same or grade the same or set up their course in an LMS the same way. I know that the “AI” hype-monsters think we’re on the cusp of the “cheat on everything,” “automate education” world. But we’re not. (One demo, hell, 20 demos of an agentic AI posting on a discussion forum or answering a quiz question does not an agentic anti-education revolution make.)

That’s not to say that this threat is meaningless or irrelevant. We have to confront the beliefs and practices that underlie Einstein’s promises, let alone its supposed adoption. We have to challenge them and examine them with students, with professors, with university administrators, with parents, etc etc etc.

For the past seventy years or so, everyone has been told that education is the silver bullet and that going to college will lead to financial stability, if not success. That was a lie, but not because learning is worthless or because school is a rip-off. Rather, it’s because capitalism never cared about your bachelor’s degree, and society has been purposefully structured and restructured in such a way that opportunity has declined and precarity increased.


Two more school-related stories from that Garbage Day newsletter on Monday:


Yik Yak is back, I learned this week. The pseudonymous social network launched over a decade ago, but quickly shut down after a number of high profile scandals and cyberbullying incidents (and after college students lost interest in its toxicity). It was apparently relaunched a couple of years ago, because Silicon Valley insists on shipping their shitty, exploitative, democracy-destroying ideas to the young.

Any space, any place where there is a potential for community and growth will be surveilled and poisoned.


A little pushback on a comment that Justin Reich makes in that article in The 74 in which he claims that we don’t understand how LLMs work, that even Google engineers don’t understand how LLMs work. We do.

As Rusty Foster insists in a glorious Today in Tabs missive, AI “isn’t a black box. It’s a statistical model of data connected to a mechanism for producing more data that resembles the data in the model.” Yes, sure, there’s a lot more math and a lot more code going on in there (and much of it is beyond my pay-grade), but it’s not actually a mystery, despite those heavily invested in the Great and Powerful Oz sort of rhetoric about the technology.

Sam Altman suggested this week (again) that humans are radically inferior to “AI” and, when challenged about the amount of electricity that it takes to manufacture Oz, made some dumb quip about the amount of food it takes to raise a human, an amount that results in a far inferior “intelligence.” What an utter misanthrope.

But Rusty writes something rather lovely/loving while comparing the ways in which “AI” ostensibly and children actually learn (the latter really is marvelous, miraculous, beautiful), as he offers a critique of a recent piece by Gideon Lewis-Kraus in The New Yorker:

Lewis-Kraus writes: “At the dawn of deep learning, a little more than a dozen years ago, machines picked up how to distinguish a cat from a dog… Once they had seen every available image of a cat, they could reliably sort cats from non-cats.” Later he asserts that “If a language model can bootstrap its way to linguistic mastery, we can no longer rule out the possibility that we’re doing the same thing.”

I’ve watched all three of my children learn what a cat is, and in each case the number of pictures of a cat they needed to see was not “all of them.” It was like, two or three? Half a dozen, tops. I helped them learn to speak and read fluently, and the number of Reddit posts required was not “every Reddit post.” I don’t need to know what mechanism underlies human intelligence to rule out the possibility that it’s the same as what a large language model does. The whole trick underlying the apparent magic of modern A.I. is simply giving it tons of data. Give it the whole internet. Give it every book ever written. This is required — it does not work with less training data....

Read all of Rusty's essay, particularly if you think that Anthropic are "the good guys."

And then let's ask this quite serious question: what is to be gained by arguing that humans have been surpassed by “AI”? Why push for an end to education? Why insist that your LLM is more intelligent than your child? What does this say about your belief in humanity, in your vision for the future? Why do "AI" advocates hate humans so much, why are they so committed to engineering away all the complexities and richness of the human mind, the human life, the human psyche, the human experience?


Still more links, mostly without commentary:


“They Built Stepford AI and Called It ‘Agentic’” by Abi Awomosu. "Women’s ‘ick’ for AI isn’t technophobia or a gap to close. It’s wisdom to act on.”

The industry narrative about AI automation tells a story about factories — robots replacing assembly workers, self-driving trucks replacing drivers. This is the visible, masculine-coded story about production.

But look at what’s actually being automated first: customer service (predominantly female), administrative assistants (94% female), data entry (predominantly female), scheduling and coordination (predominantly female), contact centers (70%+ female), emotional support (feminized).

The factory narrative is the cover story. The actual automation is happening in the reproductive economy—the care, attention, organization, and emotional labor that women have always performed.

The labor was always treated as mechanical. If a machine can do it, the implication is the work was never truly human. Essential but not skilled. Now it’s being replaced by software that doesn’t need to be paid.

Women don’t need “AI training.” Teachers don’t need “AI training.” They need their work -- all their productive and reproductive labor -- recognized and valued. Politically. Culturally. Materially.



Today’s bird is the Eurasian hoopoe. According to Wikipedia, “The call is typically a trisyllabic oop-oop-oop, which may give rise to its English and scientific names, although two and four syllables are also common. An alternative explanation of the English and scientific names is that they are derived from the French name for the bird, huppée, which means crested.”

And isn’t that just exemplary of Internet informatics: could be this, could be that, who knows, but let’s hit “publish” anyway. Wonder and curiosity once prompted more scientific investigation, but now we just have intellectual choose-your-own-adventures and chatbots that (wrongly) reassure their users that “that’s just how it is” and “there’s nothing more to know or do.”

There's plenty to do. There's plenty still to know.

Thanks for reading Second Breakfast. I'm exhausted.


The Man Who Stole Infinity


When Demian Goos followed Karin Richter into her office on March 12 of last year, the first thing he noticed was the bust. It sat atop a tall pedestal in the corner of the room, depicting a bald, elderly gentleman with a stoic countenance. Goos saw no trace of the anxious, lonely man who had obsessed him for over a year. Instead, this was Georg Cantor as history saw him. An intellectual giant…





Say Goodbye to the Undersea Cable That Made the Global Internet Possible


Jane Ruffino’s reported feature for Wired opens with what is, so far, my favorite piece of magazine art this year. (Nice one, Rob Vargas.) From there, you’re quickly aboard the Maasvliet, the diesel electric ship whose crew is tasked with hauling up thousands of kilometers of TAT-8—the first fiber-optic cable to span the seabed of the Atlantic Ocean, a feat that Ruffino describes as “practically tantamount to human galactic expansion.” An absolute showcase of explanatory writing, Ruffino’s story goes deep on the history and afterlife of the cables that enable digital communication, and grants the many people who manage such infrastructure some welcome visibility.

Fiber-optic transmission is a near-magical way of carrying information by pulses of light. Most people don’t even think about how quickly we’ve accepted instantaneous communication as normal, even those of us who can remember when an international phone call had to be booked in advance. The more people I meet in this industry, in this network of networks of people and things, the more insulting it sounds to hear that “we” only notice it when it breaks. (Who is this “we,” I always want to know?) Billions of people are able to walk around not noticing this infrastructure because of the daily work of a few thousand people, sometimes at sea, other times buried under piles of permits, surveys, and purchase orders for thousands of kilometers of cables that will join the millions of kilometers of cables on the seabed that ensure that our planet is continuously being hugged by light.


They’re Vibe-Coding Spam Now


The problem with making coding easier for more people is that it makes spam more conventionally attractive. Which is bad.


I have a problem: Unlike most people, I actually read my spam folder on a regular basis. (Often, they’re some of the most interesting emails I get.) I find spam intriguing, and it often highlights modern trends.

And sometimes, it surfaces something I actually care about that missed my other folders, like an upcoming interview I’m excited to share with all of you.

But one thing about spam that has been true across the board is that it’s ugly. Really, really ugly. Often, spammers get your email address through questionable means, say a leak of your information in an exploit, and flood your inbox with some of the worst crap you’ve ever seen.

But recently, some of these clearly trash emails have gotten a design upgrade:

A spam email informing me that my fake cloud storage platform is full.

That is a relatively attractive spam email, trying to sell me on a scam. It is obviously the work of one Claude A. Fakeguy.

It has that swing. Other, less attractive spam emails also have this swing, such as this one:

A less attractive email informing me of upcoming video game addiction litigation. How did they know!?!?

But I think the real tell is that these emails hang together when you have images turned off, which they did not in the past. This is a problem, because in your spam folder, images are automatically turned off.

Hence why this email warning me that my antivirus plus renewal failed now looks like this:

Oh no, what will I do on my Linux computer that doesn’t support your antivirus program?

This is a funny, if troubling element in the history of spam—and probably a spot of bad news for people who use vibe coding to actually make real things.


The strange thing about spam is that it tells you what the internet’s underbelly is into.

The slop looks more competent than ever

Put simply: Now that the baseline of what makes something well-designed, albeit spartan, has increased, many of the signs we once used to detect a spam message are getting thrown out the window.

Which means that we’re more likely to get hit by spam that tricks us into clicking. And that’s bad news as we attempt to protect ourselves from the crap hiding in our inbox. We’re likely to trust less and accidentally give away more. And untrustworthy figures who don’t know how to code are more likely to throw more crap our way.

This is a point Anthropic itself pointed out in one of its own reports from last summer, about “no-code” ransomware that can be built by people incapable of actually building ransomware without the help of an LLM.

Even so, these people can create commercial malware programs that they can sell for up to $1,200 a pop.

The security platform Guard.io makes clear that platforms like Lovable are going to enable a new class of criminal:

Just like with “Vibe-coding”, creating scamming schemes these days requires almost no prior technical skills. All a junior scammer needs is an idea and access to a free AI agent. Want to steal credit card details? No problem. Target a company’s employees and steal their Office365 credentials? Easy. A few prompts, and you’re off. The bar has never been lower, and the potential impact has never been more significant. That’s what we call VibeScamming.

And, for people who vibe code, the real problem is that, long-term, their stuff is going to look very untrustworthy because of the specific mix of chrome, color, and emojis that vibe-coded applications specialize in.

The thing that ultimately makes something look human is the addition of actual design and human flair. I encourage you to actually put a little humanness into what you build if you’re going to do it and share it with the world.

How to spot a vibe-coded faker

But for many, it is going to be harder than ever to tell what’s real and what’s fake. Which means you should probably go out of your way to use techniques like email obfuscation and email aliases to protect yourself. (It makes it easier to tell which bread-baking forum violated your trust, for one thing.)

On the plus side, there are still tells. A key one is if they refer to you by not your name, but the name of your email address. Another is the from address, which is often some highly obfuscated bit of junk designed to evade detection.

The one that made me laugh recently was when I got really crappy spam emails, for the first time, on an address that had never gotten them before, promoting traditional spam topics with a Claudecore flair. They seemed random, but were extremely easy to get rid of, because they were all emailed from a bare Firebase domain, meaning that I could remove them with the help of a single filter.

Just because spam emails are more attractive now doesn’t mean the people making them aren’t still extremely stupid.

Spam-Free Links

A quick shout-out to the only tool that makes my inbox bearable in 2026, Simplify Gmail.

Oh good, there’s a new web browser for PowerPC Macs in 2026, and per my pal Action Retro, it’s quite good!

Speaking of inboxes, this story of an AI safety exec letting an AI tool delete her inbox is so darkly funny that I’m surprised it’s real.



Five Friends Help a Math Teacher Get Out of a Jam


I have made some very interesting friends in my time working in math education. Many of them have their own platforms, but many of them don’t, and I have started to feel selfish keeping their wisdom between them and me. So periodically, I’m going to ask them a question that’s bothering me, that I think should bother you, and report their thoughts back to you.

This Week’s Question

Here is a 30-second video of a teacher from the 1999 TIMSS study.


Teacher: Okay what if I say, “six less than a number.” Six less than a number. Michelle?

Michelle: 6-n

Teacher: “6-n.” What do you think, Aubrey?

Aubrey: n-6

Teacher: Why do you think that?

Aubrey: Um because …

Teacher: You’re right. Tell me why.

Aubrey: Six less than a number.

Teacher: Right, do you see the difference?


In that clip, a teacher is trying to help students understand how to convert English sentences into algebraic expressions. It really seems to me the teacher has found herself in a miserable kind of jam—one that I think is recognizable to any math teacher with >0 days of teaching experience.

What My Friends Said

First, I asked my friends to describe that jam:

Shelley Carranza is a high school math teacher in Mountain View, CA, a former math teacher coach, and a former colleague of mine at Desmos and Amplify:

The dilemma is that we don’t know whether Aubrey and her classmates really understand why the answer is n-6 instead of 6-n. Aubrey looks doubtful at the end of the video, and now I’m curious to know how many students are in the same place as Aubrey, wondering whether they’ve got the order right.

Aubrey looking doubtful about her own answer.

Jenna Laib is a math coach in Brookline, MA, and has developed the idea of a “Slow Reveal Graph”:

The teacher seems to anticipate a potential misconception: that students may recognize “6 less than a number” as subtraction but follow word order when creating an expression, producing 6 - n instead of n - 6. The first student called on did exactly this. Rather than engaging with the response, the teacher seemed to invalidate it and ignore it, moving to another student who provided the correct expression.

Fawn Nguyen is another colleague at Amplify. She helps people imagine a transformative math program at their school and has also been a math teacher and teacher coach.

The dilemma: The teacher was listening for the correct answer in a way that’s “n minus six or death.” And I say this with full empathy—English is tricky, Dan. Tricky for Michelle, and hard for a perennial English learner like me too. I wasn’t even sure which expression was correct until the teacher confirmed Aubrey’s answer and I heard that it was simply the reverse order of what Michelle had said.

That’s it. Two common teacher imperatives are in tension here.

  1. The teacher wants to know what kids know.

  2. The teacher wants the thing kids know to be the right answer.

Those imperatives have created this jam where the teacher finds out that a kid knows a wrong answer and then moves on to another kid hoping to find the right answer, doing (I suspect) some damage to the first kid’s idea of math and of themselves as a mathematician. How can the teacher get out of this jam?

Shelley Carranza:

At this moment, I really want to write both expressions on the board, and celebrate what the students know about the problem. From there, I’d want to give students a chance to discuss how you could decide which was right, and make sure to elicit the strategy of testing specific numbers.

Marilyn Burns is a former teacher, an expert in K-12 math learning, and an author of (I’m estimating here) 1,000 books about learning math:

To write an expression that represents 6 less than a number, some students think it could be “6 – n” and others think it could be “n – 6.” Then, for both options: Turn and talk with your neighbor and then we’ll talk about it as a class.

Stephanie Blair has held every job there is in K-12 schools except (I think) cafeteria worker. She worked with me at Amplify and Desmos, and now supports schools as they adopt Snorkl:

Instead of asking what the answer is, give students 2–4 possible correct answers and then have them decide and defend which one is correct.

Jenna Laib:

Here are two ways this could have gone differently:

(1) Debate: elicit multiple responses from students. Accept them neutrally, and record them on the board to support discussion. The format encourages students to justify their thinking.

(2) Try it out: stick with the initial response of 6 - n, and test it with a number. What is 6 less than 10? Is 6 - 10 the same thing? Record everything on the board.

In both cases, the goal is to make student thinking visible and support justification of why an expression works.

As an editorial aside, this problem may have been avoided entirely if the students had been using mini-whiteboards, because then this particular teacher would not have called on the student with the incorrect response. However, I’d rather encourage rigorous engagement with all student ideas!

Fawn Nguyen:

Write both expressions on the board. Ask students to think quietly first: which one matches “six less than a number”? Then turn and talk to a neighbor. Then rate your confidence —100% or nah? Now convince me.

Your Turn

Exercise for you, the reader, who I also consider a friend:

What is common among all of my friends’ suggestions—both pedagogically and socially?

Each of my friends has identified a common pedagogical technique, but they also share a certain understanding of the social relationship between teachers and students, one with different imperatives than the two in tension above. Great stuff. Thanks, friends.

Featured Comment

Efrat Furst on my review last week of the Stanford AI+Education Summit:

I keep coming back to the MOOCs story, I just can’t figure out how people refuse to see how similar it is and learn the lessons. It was just 10 years ago, we were all here to witness the rise and fall.

Audrey Watters identifies the stakes here:

There will be no “AI” tutor revolution just as there was no MOOC revolution just as there was no personalized learning revolution just as there was no computer-assisted instruction revolution just as there was no teaching machine revolution. If there is a tsunami, it’s not technological as much as ideological, as the values of Silicon Valley -- techno-libertarianism, accelerationism -- are hard at work in undermining democratic institutions, including school.

Let’s take the next step. Get one (1) new email from me about teaching, technology, and math on special Wednesdays. -DM

Odds & Ends

Alpha School, the $65k private school that claims to have replaced teachers with AI, had a no good, very bad week. 404 Media interviewed former employees and reviewed documents and found that Alpha School:

  • used AI to develop some sloppy, hallucinatory instructional materials,

  • generated those materials, in part, by scraping content from other curriculum providers (including my own company FWIW),

  • created clones of other edtech platforms like Khan Academy,

  • exposed webcam videos of students at public URLs.

Check my post on LinkedIn for a bit more commentary but, putting it plainly: Alpha School is speedrunning some of the worst excesses of the move-fast-and-break-things era. Even still, I think we should separate a few questions:

  1. Is Alpha School pursuing their model of schooling in a sloppy, unethical way?

  2. Who does this model of schooling best serve?

  3. Is there anything the rest of us can learn from it?

#1 is, barring some kind of contrition from Alpha School, a settled question.

Michael Pershan wrote a piece about the second question that managed to get agreement from everyone—critics and proponents alike.

This is a school that believes that the “core” of schooling should be taken care of as quickly and painlessly as possible so that the rest of the day can be opened up to things that actually matter. Most schools don’t do this! We instead tell kids that history is a way of understanding ourselves and others. Math, we say, can be an absolute joy, full of logical surprises. We tell kids that a good story can open up your heart and mind. Alpha doesn’t.

Dylan Kane wrote a piece about the third question, arguing that, whatever we can learn from Alpha School, it isn’t anything about technology.

¶ Congrats to fellow Desmos and Amplify alum Christopher Danielson for winning his third Mathical Book Award. I have gifted How Did You Count and its beautiful photos of everyday mathematical collections to a bunch of my friends as they’ve become parents.

¶ Amplify colleague Shira Helft’s last statement of belief as a math teacher is cryptic and essential: “If you can, use a knife.” Read what she means.

¶ What happens when an AI bear hangs out with the AI bulls? Listen to my recent chat with Ben Kornell of the Edtech Insiders podcast.




Is Compton Unified the best school district in California?


The CDE recently released the first batch of growth scores for schools and districts across California. Growth scores measure the performance of schools and districts by calculating how much their students’ SBAC scores differ from what was expected given their prior SBAC scores. By design, the average growth score across all students should be zero (or very close to it). Growth scores are given in the same units as SBAC scores. A growth score of -5 doesn’t mean that students scored five points worse than they did the year before. It means that their SBAC score grew by 5 points less than expected. Perhaps they scored 2500 last year and were expected to score 2523 this year but only scored 2518.
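That worked example can be sketched in a few lines (a toy illustration of the definition only, not the CDE's actual code):

```python
# A growth score, in SBAC points: the student's actual score minus
# the score predicted from their prior-year results.
def growth_score(actual: float, predicted: float) -> float:
    return actual - predicted

# Scored 2500 last year, expected 2523 this year, actually scored 2518:
print(growth_score(2518, 2523))  # -5
```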

A previous post went into more detail about how growth scores are calculated and what the limitations are of the CDE’s chosen algorithm. The key points to remember come from these two charts, which I’ve lifted from that previous post:

Observe that students who stay in the 70th percentile gain about 150 SBAC points between 3rd grade and 8th grade while students who stay in the 20th percentile gain only about 130. In general, students in higher percentiles tend to increase their SBAC scores by more than students in lower percentiles. The gap between them gets wider over time.

Observe here that the pattern is even more extreme for Math: students in higher percentiles gain a lot more points than students in lower percentiles.

The growth model is based on a linear regression whose only independent variables are a student’s ELA and Math scores in the previous year. In particular, the student’s grade is not a variable in the model. A student who scores 2500 in 3rd grade will be predicted to grow as much as a student who scores 2500 in 7th grade even though the 3rd grade student will be in a much higher percentile. The charts showed that students in higher percentiles gained more SBAC points than students in lower percentiles. Since they both receive the same prediction, the higher percentile student will tend to exceed that prediction and thus get a growth score greater than zero while the lower percentile student will tend to get a growth score less than zero.

A district whose students start and finish the year in the 25th percentile has done just as good a job, no better and no worse, than a district whose students start and finish the year in the 75th percentile. But, due to the way the growth scores are calculated, a district whose students stay in the 25th percentile will tend to have a lower growth score than a district whose students stay in the 75th percentile.

For this reason, when we look at growth scores, we are always going to look at them in the context of the prior year’s SBAC scores, specifically the average Distance from Standard[1] (DFS) of the students. This will enable us to see if a district’s performance is truly outstanding. Note that the Distance from Standard and the growth score are in the same units.

ELA Growth Scores

The chart below shows the ELA growth scores for the 114 districts that had at least 4,000 students with growth scores.

The diagonal line is the best-fit line based on a linear regression against each district’s average distance from standard in 2024. The R-squared is 0.36, indicating that, while there’s clearly a relationship, there’s a lot of scope for districts with similar prior achievement scores to achieve very different growth scores. Hayward and West Contra Costa both had weak SBAC scores in 2024 (both were around 60 points below standard) but Hayward’s growth score of 0 was a lot better than West Contra Costa’s -11. Los Angeles and Compton both had 2024 SBAC scores 25-28 points below standard but Compton’s growth score of +12 was much better than Los Angeles’s still-creditable +1. In fact, Compton’s growth score was better than that of any other district in the sample. San Francisco, meanwhile, had a growth score of -1, which is meh. It’s a bit below what would be expected but not egregiously so.
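The comparison being made here is each district's residual: its distance above or below the fit line. A sketch using only the four districts named above (so the fitted coefficients are illustrative, not the ones behind the chart's R-squared of 0.36):

```python
# Regress growth score on prior-year DFS and inspect residuals.
# (DFS, growth) pairs are the four districts mentioned in the text;
# fitting on just four points is purely for illustration.
import numpy as np

districts = {
    "Hayward": (-60, 0),
    "West Contra Costa": (-60, -11),
    "Los Angeles": (-26, 1),
    "Compton": (-26, 12),
}
dfs = np.array([v[0] for v in districts.values()], dtype=float)
growth = np.array([v[1] for v in districts.values()], dtype=float)

slope, intercept = np.polyfit(dfs, growth, 1)
for name, (x, y) in districts.items():
    residual = y - (slope * x + intercept)
    print(f"{name}: residual {residual:+.1f}")
```

A positive residual means the district outperformed what its prior achievement predicted; on this toy fit, Hayward and Compton land well above the line and their neighbors well below it.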

Math Growth Scores

The analogous chart for Math growth scores is different in two ways.

  • The relationship between the prior-year SBAC scores and the current-year growth scores is much stronger (R-squared = 0.81).

  • The range of growth-score values is significantly wider: some districts reach +20 or higher.

Both are a consequence of the phenomenon we saw in the first charts, namely that the gap between the average score gain in higher and lower percentiles is much greater for Math than ELA.

Nevertheless, Compton still excels. Its growth score of +13 is less than that of districts like Cupertino and Irvine and San Ramon Valley (all of which are at +20 or higher) but, given that its students started the year 39 points below standard, it surpassed expectations by more than any of these other districts.

So, which is the best performing district?

There are nearly 700 districts with growth scores and only the 114 largest are shown on the charts above. Those 114 districts represent about 63% of all students but there are districts too small to show on the charts which had even higher growth scores than Compton in both ELA and Math. The largest of these districts was Orinda, in Contra Costa, which had growth scores of 13 (ELA) and 16 (Math). But Orinda had only 1,350 students, far fewer than Compton’s 5,900, and its prior achievement scores were 88 points above standard (ELA) and 72 above standard (Math), so its high growth scores are not as impressive. The highest growth scores of all belong to Scotia Union Elementary in Humboldt County (24 in ELA; 42 in Math) but Scotia Union had only 104 students with growth scores, fewer than many schools. Similarly, the absolute worst growth scores belong to Geyserville Unified in Sonoma (-17 in ELA; -44 in Math) but Geyserville had only 73 students. The worst performing district of any size was Barstow Unified in San Bernardino, whose 1,900 students had growth scores of -32 in ELA and -24 in Math.

Impressive as its scores are, it is far too early to blithely declare that Compton is the best school district in California. During the development of the growth scores model, the CDE published what the growth scores would have been using the last pre-pandemic SBAC data. At that time, Compton’s growth scores were the equivalent of -1 in ELA and +3 in Math. Has Compton improved significantly in the intervening five years or are its high scores just a statistical artifact?

SFUSD’s preferred benchmark has long been Long Beach Unified. Years ago, when I first started analyzing student achievement data, I identified Clovis Unified in Fresno and ABC Unified in Los Angeles as districts that seemed to do particularly well after adjusting for their demographics. How are these three rated by the growth scores method? Long Beach scored +2 in ELA and -3 in Math; ABC scored +5 in ELA and +3 in Math; Clovis scored +9 in ELA and +4 in Math. Good scores, but not as good as Compton. In the test data from the pre-pandemic era, those districts were all stronger than Compton. Long Beach was +5 and +2, ABC was +6 and +6, and Clovis was +9 and +3. Even San Francisco was +5 and +2.

It will take multiple years of data to know whether Compton’s high scores are an indicator of true excellence or just a blip.


[1] Example: the lowest score required to meet the standard for 5th grade ELA is 2502. If the average 5th grader in the district has a score of 2510, that’s 8 points above the standard. Calculate the distance from standard for each of the grades from 3-7 and average them to get the school or district’s DFS. Grades 3-7 are used because growth scores are calculated only for students in grades 4-8. Instead of DFS, I could have used the percentage who met or exceeded the standard because the two numbers have a 99% correlation, but it seemed better to use DFS because it’s in the same units as the growth score.
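The footnote's averaging can be sketched as follows. Only the 5th-grade ELA figures (standard 2502, district mean 2510) come from the text; the other grades' thresholds and means are made-up placeholders:

```python
# Per-grade DFS = district mean score minus the "met standard" threshold;
# district DFS = average over grades 3-7. Only grade 5 uses the text's
# real numbers; everything else is a hypothetical placeholder.
standard      = {3: 2432, 4: 2473, 5: 2502, 6: 2531, 7: 2552}
district_mean = {3: 2440, 4: 2480, 5: 2510, 6: 2535, 7: 2550}

per_grade_dfs = {g: district_mean[g] - standard[g] for g in standard}
district_dfs = sum(per_grade_dfs.values()) / len(per_grade_dfs)

print(per_grade_dfs[5])  # 8, matching the footnote's example
print(district_dfs)      # 5.0 for these placeholder numbers
```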


