Jane Ruffino’s reported feature for Wired opens with what is, so far, my favorite piece of magazine art this year. (Nice one, Rob Vargas.) From there, you’re quickly aboard the Maasvliet, the diesel-electric ship whose crew is tasked with hauling up thousands of kilometers of TAT-8—the first fiber-optic cable to span the seabed of the Atlantic Ocean, a feat that Ruffino describes as “practically tantamount to human galactic expansion.” An absolute showcase of explanatory writing, Ruffino’s story goes deep on the history and afterlife of the cables that enable digital communication, and grants the many people who manage such infrastructure some welcome visibility.
Fiber-optic transmission is a near-magical way of carrying information by pulses of light. Most people don’t even think about how quickly we’ve accepted instantaneous communication as normal, even those of us who can remember when an international phone call had to be booked in advance. The more people I meet in this industry, in this network of networks of people and things, the more insulting it sounds to hear that “we” only notice it when it breaks. (Who is this “we,” I always want to know?) Billions of people are able to walk around not noticing this infrastructure because of the daily work of a few thousand people, sometimes at sea, other times buried under piles of permits, surveys, and purchase orders for thousands of kilometers of cables that will join the millions of kilometers of cables on the seabed that ensure that our planet is continuously being hugged by light.
The problem with making coding easier for more people is that it makes spam more conventionally attractive. Which is bad.
I have a problem: Unlike most people, I actually read my spam folder on a regular basis. (Often, the messages there are some of the most interesting emails I get.) I find spam intriguing, and it often highlights modern trends.
And sometimes, it surfaces something I actually care about that missed my other folders, like an upcoming interview I’m excited to share with all of you.
But one thing about spam has been true across the board: it’s ugly. Really, really ugly. Typically, spammers get your email address through questionable means, say a leak of your information in a data breach, and flood your inbox with some of the worst crap you’ve ever seen.
But recently, some of these clearly trash emails have gotten a design upgrade:
A spam email informing me that my fake cloud storage platform is full.
That is a relatively attractive spam email, trying to sell me on a scam. It is obviously the work of one Claude A. Fakeguy.
It has that swing. Other, less attractive spam emails also have this swing, such as this one:
A less attractive email informing me of upcoming video game addiction litigation. How did they know!?!?
But the real tell, I think, is that these emails hang together even when you have images off, which they did not in the past. That matters because, in your spam folder, images are automatically turned off.
Hence why this email warning me that my antivirus plus renewal failed now looks like this:
Oh no, what will I do on my Linux computer that doesn’t support your antivirus program?
This is a funny, if troubling, element in the history of spam—and probably a spot of bad news for people who use vibe coding to actually make real things.
If you find weird or unusual topics like this super-fascinating, the best way to tell us is to give us a nod on Ko-Fi. It helps ensure that we can keep this machine moving, support outside writers, and bring on the tools to support our writing. (Also it’s heartening when someone chips in.)
The strange thing about spam is that it tells you what the internet’s underbelly is into.
The slop looks more competent than ever
Put simply: Now that the baseline for what counts as well-designed, albeit spartan, has risen, many of the signs we once used to detect a spam message are getting thrown out the window.
Which means that we’re more likely to get hit by spam that tricks us into clicking. And that’s bad news as we attempt to protect ourselves from the crap hiding in our inbox. We’re likely to trust less and accidentally give away more. And untrustworthy figures who don’t know how to code are more likely to throw more crap our way.
This is a point Anthropic itself pointed out in one of its own reports from last summer, about “no-code” ransomware that can be built by people incapable of actually building ransomware without the help of an LLM.
Despite this, these people can create commercial malware programs that they can sell for up to $1,200 a pop.
The security platform Guard.io makes clear that platforms like Lovable are going to enable a new class of criminal:
Just like with “Vibe-coding”, creating scamming schemes these days requires almost no prior technical skills. All a junior scammer needs is an idea and access to a free AI agent. Want to steal credit card details? No problem. Target a company’s employees and steal their Office365 credentials? Easy. A few prompts, and you’re off. The bar has never been lower, and the potential impact has never been more significant. That’s what we call VibeScamming.
And, for people who vibe code, the real problem is that, long-term, their stuff is going to look very untrustworthy because of the specific mix of chrome, color, and emojis that vibe-coded applications specialize in.
The thing that ultimately makes something look human is the addition of actual design and human flair. I encourage you to actually put a little humanness into what you build if you’re going to do it and share it with the world.
How to spot a vibe-coded faker
But for many, it is going to be harder than ever to tell what’s real and what’s fake. Which means you should probably go out of your way to use techniques like email obfuscation and email aliases to protect yourself. (It makes it easier to tell which bread-baking forum violated your trust, for one thing.)
On the plus side, there are still tells. A key one is if they refer to you by not your name, but the name of your email address. Another is the from address, which is often some highly obfuscated bit of junk designed to evade detection.
The one that made me laugh recently was when an address of mine that had never gotten spam suddenly started receiving really crappy emails, promoting traditional spam topics with a Claudecore flair. They seemed random, but they were extremely easy to get rid of, because they were all sent from a bare Firebase domain, meaning that I could remove them with a single filter.
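As a toy illustration of why one filter was enough (the addresses and the Firebase subdomain here are invented, not the actual spam I received), matching on the sender’s domain catches the whole batch at once:

```python
def drop_domain(messages, blocked_domain):
    """Keep only messages whose From address is NOT at the blocked domain."""
    def domain(addr):
        # Everything after the last "@", lowercased for comparison.
        return addr.rsplit("@", 1)[-1].lower()
    return [m for m in messages if domain(m["from"]) != blocked_domain.lower()]

# Hypothetical inbox: one real correspondent, two Claudecore spam blasts.
inbox = [
    {"from": "friend@example.com", "subject": "Lunch next week?"},
    {"from": "deals@spammy-app.firebaseapp.com", "subject": "Your storage is FULL"},
    {"from": "alerts@spammy-app.firebaseapp.com", "subject": "Antivirus renewal failed"},
]
clean = drop_domain(inbox, "spammy-app.firebaseapp.com")
print([m["from"] for m in clean])
```

The same idea expressed as a Gmail filter is just `from:(firebaseapp.com)` with a delete action; the point is that one domain-level rule beats three subject-line rules.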
Just because spam emails are more attractive now doesn’t mean the people making them aren’t still extremely stupid.
Spam-Free Links
A quick shout-out to the only tool that makes my inbox bearable in 2026, Simplify Gmail.
Oh good, there’s a new web browser for PowerPC Macs in 2026, and per my pal Action Retro, it’s quite good!
Want to actually learn how to code with minimal vibes? Check out our sponsor Scrimba, which mixes video lessons with interactive code windows—and makes it feel downright approachable. Sign up here for a 20% discount.
I have made some very interesting friends in my time working in math education. Many of them have their own platforms, but many of them don’t, and I have started to feel selfish keeping their wisdom between them and me. So periodically, I’m going to ask them a question that’s bothering me, that I think should bother you, and report their thoughts back to you.
Teacher: Okay what if I say, “six less than a number.” Six less than a number. Michelle?
Michelle: 6-n
Teacher: “6-n.” What do you think, Aubrey?
Aubrey: n-6
Teacher: Why do you think that?
Aubrey: Um because …
Teacher: You’re right. Tell me why.
Aubrey: Six less than a number.
Teacher: Right, do you see the difference?
In that clip, a teacher is trying to help students understand how to convert English sentences into algebraic expressions. It really seems to me the teacher has found herself in a miserable kind of jam—one that I think is recognizable to any math teacher with >0 days of teaching experience.
What My Friends Said
First, I asked my friends to describe that jam.
Shelley Carranza is a high school math teacher in Mountain View, CA, a former math teacher coach, and a former colleague of mine at Desmos and Amplify:
The dilemma is that we don’t know whether Aubrey and her classmates really understand why the answer is n-6 instead of 6-n. Aubrey looks doubtful at the end of the video, and now I’m curious to know how many students are in the same place as Aubrey, wondering whether they’ve got the order right.
Aubrey looking doubtful about her own answer.
Jenna Laib is a math coach in Brookline, MA, and has developed the idea of a “Slow Reveal Graph”:
The teacher seems to anticipate a potential misconception: that students may recognize “6 less than a number” as subtraction but follow word order when creating an expression, producing 6 - n instead of n - 6. The first student called on did exactly this. Rather than engaging with the response, the teacher seemed to invalidate it and ignore it, moving to another student who provided the correct expression.
Fawn Nguyen is another colleague at Amplify. She helps people imagine a transformative math program at their school and has also been a math teacher and teacher coach:
The dilemma: The teacher was listening for the correct answer in a way that’s “n minus six or death.” And I say this with full empathy—English is tricky, Dan. Tricky for Michelle, and hard for a perennial English learner like me too. I wasn’t even sure which expression was correct until the teacher confirmed Aubrey’s answer and I heard that it was simply the reverse order of what Michelle had said.
That’s it. Two common teacher imperatives are in tension here.
The teacher wants to know what kids know.
The teacher wants the thing kids know to be the right answer.
Those imperatives have created this jam where the teacher finds out that a kid knows a wrong answer and then moves on to another kid hoping to find the right answer, doing (I suspect) some damage to the first kid’s idea of math and of themselves as a mathematician. How can the teacher get out of this jam?
Shelley Carranza:
At this moment, I really want to write both expressions on the board, and celebrate what the students know about the problem. From there, I’d want to give students a chance to discuss how you could decide which was right, and make sure to elicit the strategy of testing specific numbers.
Marilyn Burns is a former teacher, an expert in K-12 math learning, and an author of (I’m estimating here) 1,000 books about learning math:
To write an expression that represents 6 less than a number, some students think it could be “6 – n” and others think it could be “n – 6.” Then, for both options: Turn and talk with your neighbor and then we’ll talk about it as a class.
Stephanie Blair has held every job there is in K-12 schools except (I think) cafeteria worker. She worked with me at Amplify and Desmos, and now supports schools as they adopt Snorkl:
Instead of asking what the answer is, give students 2–4 possible correct answers and then have them decide and defend which one is correct.
Jenna Laib:
Here are two ways this could have gone differently:
(1) Debate: elicit multiple responses from students. Accept them neutrally, and record them on the board to support discussion. The format encourages students to justify their thinking.
(2) Try it out: stick with the initial response of 6 - n, and test it with a number. What is 6 less than 10? Is 6 - 10 the same thing? Record everything on the board.
In both cases, the goal is to make student thinking visible and support justification of why an expression works.
As an editorial aside, this problem may have been avoided entirely if the students had been using mini-whiteboards, because then this particular teacher would not have called on the student with the incorrect response. However, I’d rather encourage rigorous engagement with all student ideas!
Fawn Nguyen:
Write both expressions on the board. Ask students to think quietly first: which one matches “six less than a number”? Then turn and talk to a neighbor. Then rate your confidence: 100% or nah? Now convince me.
Your Turn
Exercise for you, the reader, who I also consider a friend:
What is common among all of my friends’ suggestions—both pedagogically and socially?
Each of my friends has identified a common pedagogical technique, but they also share a certain understanding of the social relationship between teachers and students, one with different imperatives than the two in tension above. Great stuff. Thanks, friends.
Featured Comment
Efrat Furst on my review last week of the Stanford AI+Education Summit:
I keep coming back to the MOOCs story, I just can’t figure out how people refuse to see how similar it is and learn the lessons. It was just 10 years ago, we were all here to witness the rise and fall.
There will be no “AI” tutor revolution just as there was no MOOC revolution just as there was no personalized learning revolution just as there was no computer-assisted instruction revolution just as there was no teaching machine revolution. If there is a tsunami, it’s not technological as much as ideological, as the values of Silicon Valley (techno-libertarianism, accelerationism) are hard at work undermining democratic institutions, including school.
Let’s take the next step. Get one (1) new email from me about teaching, technology, and math on special Wednesdays. -DM
Odds & Ends
¶ Alpha School, the $65k private school that claims to have replaced teachers with AI, had a no good, very bad week. 404 Media interviewed former employees and reviewed documents and found that Alpha School:
used AI to develop some sloppy, hallucinatory instructional materials,
generated those materials, in part, by scraping content from other curriculum providers (including my own company FWIW),
created clones of other edtech platforms like Khan Academy,
exposed webcam videos of students at public URLs.
Check my post on LinkedIn for a bit more commentary but, putting it plainly: Alpha School is speedrunning some of the worst excesses of the move-fast-and-break-things era. Even still, I think we should separate a few questions:
Is Alpha School pursuing their model of schooling in a sloppy, unethical way?
Who does this model of schooling best serve?
Is there anything the rest of us can learn from it?
#1 is, barring some kind of contrition from Alpha School, a settled question.
This is a school that believes that the “core” of schooling should be taken care of as quickly and painlessly as possible so that the rest of the day can be opened up to things that actually matter. Most schools don’t do this! We instead tell kids that history is a way of understanding ourselves and others. Math, we say, can be an absolute joy, full of logical surprises. We tell kids that a good story can open up your heart and mind. Alpha doesn’t.
Dylan Kane wrote a piece about the third question, arguing that, whatever we can learn from Alpha School, it isn’t anything about technology.
¶ Congrats to fellow Desmos and Amplify alum Christopher Danielson for winning his third Mathical Book Award. I’ve gifted How Did You Count, with its beautiful photos of everyday mathematical collections, to a bunch of my friends as they’ve become parents.
¶ Amplify colleague Shira Helft’s last statement of belief as a math teacher is cryptic and essential: “If you can, use a knife.” Read what she means.
¶ What happens when an AI bear hangs out with the AI bulls? Listen to my recent chat with Ben Kornell of the Edtech Insiders podcast.
The CDE recently released the first batch of growth scores for schools and districts across California. Growth scores measure the performance of schools and districts by calculating how much their students’ SBAC scores differ from what was expected given their prior SBAC scores. By design, the average growth score across all students should be zero (or very close to it). Growth scores are given in the same units as SBAC scores. A growth score of -5 doesn’t mean that students scored five points worse than they did the year before. It means that their SBAC score grew by 5 points less than expected. Perhaps they scored 2500 last year and were expected to score 2523 this year but only scored 2518.
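To make the arithmetic concrete, here is the worked example from the paragraph above as a tiny sketch (the function name is mine, not the CDE’s):

```python
def growth_score(actual, expected):
    # Growth is measured against the model's expectation for this year,
    # not against last year's raw score.
    return actual - expected

# Scored 2500 last year; expected to score 2523 this year; actually scored 2518.
print(growth_score(actual=2518, expected=2523))  # -5: grew 5 points less than expected
```

Note that the student still gained 18 points year over year; the growth score is negative only because the gain fell short of the predicted 23.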
A previous post went into more detail about how growth scores are calculated and what the limitations are of the CDE’s chosen algorithm. The key points to remember come from these two charts, which I’ve lifted from that previous post:
Observe that students who stay in the 70th percentile gain about 150 SBAC points between 3rd grade and 8th grade while students who stay in the 20th percentile gain only about 130. In general, students in higher percentiles tend to increase their SBAC scores by more than students in lower percentiles. The gap between them gets wider over time.
Observe here that the pattern is even more extreme for Math: students in higher percentiles gain a lot more points than students in lower percentiles.
The growth model is based on a linear regression whose only independent variables are a student’s ELA and Math scores in the previous year. In particular, the student’s grade is not a variable in the model. A student who scores 2500 in 3rd grade will be predicted to grow as much as a student who scores 2500 in 7th grade even though the 3rd grade student will be in a much higher percentile. The charts showed that students in higher percentiles gained more SBAC points than students in lower percentiles. Since they both receive the same prediction, the higher percentile student will tend to exceed that prediction and thus get a growth score greater than zero while the lower percentile student will tend to get a growth score less than zero.
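That mechanic can be sketched in miniature (with made-up scores, not real CDE data): ordinary least squares predicting this year’s score from last year’s ELA and Math scores only, with each student’s growth score defined as the residual. Grade level never enters the model, which is exactly the omission described above.

```python
import numpy as np

def growth_scores(prior_ela, prior_math, current):
    # A CDE-style model: regress this year's score on last year's ELA and
    # Math scores (plus an intercept). The student's grade is NOT a predictor.
    X = np.column_stack([np.ones(len(prior_ela)), prior_ela, prior_math])
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    predicted = X @ beta
    # A student's growth score is the residual: actual minus expected.
    return current - predicted

# Made-up scores for six students (illustrative only).
prior_ela  = np.array([2380.0, 2420.0, 2460.0, 2500.0, 2540.0, 2580.0])
prior_math = np.array([2390.0, 2410.0, 2470.0, 2490.0, 2550.0, 2570.0])
current    = np.array([2410.0, 2445.0, 2500.0, 2525.0, 2590.0, 2600.0])

growth = growth_scores(prior_ela, prior_math, current)
# With an intercept in the regression, the residuals average to zero by
# construction -- the "average growth score should be zero" property.
print(growth.round(1))
```

Because higher-percentile students actually gain more points than this pooled prediction expects, their residuals skew positive and lower-percentile students’ skew negative, which is the bias the charts illustrate.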
A district whose students start and finish the year in the 25th percentile has done just as good a job, no better and no worse, than a district whose students start and finish the year in the 75th percentile. But, due to the way the growth scores are calculated, a district whose students stay in the 25th percentile will tend to have a lower growth score than a district whose students stay in the 75th percentile.
For this reason, when we look at growth scores, we are always going to look at them in the context of the prior year’s SBAC scores, specifically the average Distance from Standard (DFS) of the students. This will enable us to see if a district’s performance is truly outstanding. Note that the Distance from Standard and the growth score are in the same units.
ELA Growth Scores
The chart below shows the ELA growth scores for the 114 districts that had at least 4,000 students with growth scores.
The diagonal line is the best-fit line based on a linear regression against each district’s average distance from standard in 2024. The R-squared is 0.36 indicating that, while there’s clearly a relationship, there’s a lot of scope for districts with similar prior achievement scores to achieve very different growth scores. Hayward and West Contra Costa both had weak SBAC scores in 2024 (both were around 60 points below standard) but Hayward’s growth score of 0 was a lot better than West Contra Costa’s –11. Los Angeles and Compton both had 2024 SBAC scores 25-28 points below standard but Compton’s growth score of 12 was much better than Los Angeles’s still-creditable +1. In fact, Compton’s growth score was better than any other district in the sample. San Francisco, meanwhile, had a growth score of -1, which is meh. It’s a bit below what would be expected but not egregiously so.
Math Growth Scores
The analogous chart for Math growth scores is different in two ways.
the relationship between the prior year SBAC scores and the current year growth score is much stronger (R-squared = 0.81)
the range of growth score values is significantly wider. Growth score values of +20 or higher are found.
Both are a consequence of the phenomenon we saw in the first charts, namely that the gap between the average score gain in higher and lower percentiles is much greater for Math than ELA.
Nevertheless, Compton still excels. Its growth score of +13 is less than that of districts like Cupertino and Irvine and San Ramon Valley (all of which are at +20 or higher) but, given that its students started the year 39 points below standard, it surpassed expectations by more than any of these other districts.
So, which is the best performing district?
There are nearly 700 districts with growth scores and only the 114 largest are shown on the charts above. Those 114 districts represent about 63% of all students, but there are districts too small to show on the charts which had even higher growth scores than Compton in both ELA and Math. The largest of these districts was Orinda, in Contra Costa, which had growth scores of 13 (ELA) and 16 (Math). But Orinda had only 1,350 students, far fewer than Compton’s 5,900, and its prior achievement scores were 88 points above standard (ELA) and 72 above standard (Math), so its high growth scores are not as impressive. The highest growth scores of all belong to Scotia Union Elementary in Humboldt County (24 in ELA; 42 in Math), but Scotia Union had only 104 students with growth scores, fewer than many schools. Similarly, the absolute worst growth scores belong to Geyserville Unified in Sonoma (-17 in ELA; -44 in Math), but Geyserville had only 73 students. The worst performing district of any size was Barstow Unified in San Bernardino, whose 1,900 students had growth scores of -32 in ELA and -24 in Math.
Impressive as its scores are, it is far too early to blithely declare that Compton is the best school district in California. During the development of the growth scores model, the CDE published what the growth scores would have been using the last pre-pandemic SBAC data. At that time, Compton’s growth scores were the equivalent of -1 in ELA and +3 in Math. Has Compton improved significantly in the intervening five years, or are its high scores just a statistical artifact?
SFUSD’s preferred benchmark has long been Long Beach Unified. Years ago, when I first started analyzing student achievement data, I identified Clovis Unified in Fresno and ABC Unified in Los Angeles as districts that seemed to do particularly well after adjusting for their demographics. How are these three rated by the growth scores method? Long Beach scored +2 in ELA and -3 in Math; ABC scored +5 in ELA and +3 in Math; Clovis scored +9 in ELA and +4 in Math. Good scores, but not as good as Compton. In the test data from the pre-pandemic era, those districts were all stronger than Compton. Long Beach was +5 and +2, ABC was +6 and +6, and Clovis was +9 and +3. Even San Francisco was +5 and +2.
It will take multiple years of data to know whether Compton’s high scores are an indicator of true excellence or just a blip.
Thanks for reading SFEDup! Subscribe for free to receive new posts and support my work.
Example: the lowest score required to meet the standard for 5th grade ELA is 2502. If the average 5th grader in the district has a score of 2510, that’s 8 points above the standard. Calculate the distance from standard for each of the grades from 3-7 and average them to get the school or district’s DFS. Grades 3-7 are used because growth scores are calculated only for students in grades 4-8. Instead of DFS, I could have used the percentage who met or exceeded the standard, because the two numbers have a 99% correlation, but it seemed better to use DFS because it’s in the same units as the growth score.
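The averaging described in that footnote looks like this as a sketch. Only the grade 5 ELA cutoff of 2502 and the 2510 district average come from the text; the other grades’ cutoffs and averages are hypothetical numbers for illustration.

```python
def distance_from_standard(avg_scores, standards):
    # DFS: the per-grade gap between the district's average scale score and
    # the "met standard" cutoff, averaged across grades 3-7.
    gaps = [avg_scores[g] - standards[g] for g in avg_scores]
    return sum(gaps) / len(gaps)

# Grade 5 ELA: cutoff 2502, district average 2510 -> 8 points above standard.
# The other grades' figures here are hypothetical.
standards  = {3: 2432, 4: 2473, 5: 2502, 6: 2531, 7: 2552}
avg_scores = {3: 2440, 4: 2480, 5: 2510, 6: 2535, 7: 2560}
print(distance_from_standard(avg_scores, standards))
```

Since DFS is in scale-score points, it can sit on the same axis as the growth score, which is what makes the charts above possible.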
Physicist Sean Carroll leads off this video with this line:
I like to say that Einstein is, if anything, underrated as a physicist, which is hard to imagine given how highly he is rated.
And then leads us through a history of modern physics and quantum mechanics that, Einstein and Newton aside, is much more collaborative than you often hear about.
This idea that there are many people contributing and many different parts of the pieces need to be put together is actually much more characteristic of how physics is usually done than the single person inventing everything all by themselves.
How a single hack infected the world’s most important operating system.
Chapters:
0:00 The Free Software Foundation
5:03 Why is Linux so popular?
9:57 The XZ Weakness
12:07 End To End Encryption - SSH
18:40 How To Compress Data
23:47 How The .XZ Hack Worked
34:24 A Bug In Jia’s Code
38:27 Henry Hacks Derek
43:16 The Back Door Is Exposed
47:16 Who is Jia Tan?
50:33 Open Vs Closed Source