
Another death in the AI-in-education family


Well here we are, still reeling from the emotional devastation of Dan Meyer’s poignant eulogy for the recently deceased Khanmigo, only to have yet another tragic death in the AI-in-education family. Please join me, won’t you, in offering condolences on the recent passing of ChatGPT’s “Study Mode,” and may its memory be a blessing.

What’s that, you don’t remember Study Mode? Tragedy on top of tragedy. You see, one year ago almost to the day, a relentless wave of negative press hit OpenAI as journalists discovered that students were using chatbots en masse to avoid effortful thinking, aka “to cheat.” As such, the company quickly announced a new product feature within ChatGPT that would ensure it would be used solely for educational good rather than evil: Study Mode. And OpenAI promised us that Study Mode was designed to “be engaging and interactive, and to help students learn something—not just finish something.”

There were, of course, some obvious challenges with Study Mode, as people quickly observed. For one thing…

…and for another…

Still, flaws notwithstanding, over the last year we could at least say that Study Mode…existed…as something that students seeking cognitive repentance might activate to curb their wicked cheating ways. That is to say, if they clicked the little “+” in the prompt box, there they could find Study Mode, waiting for activation such that—and here again I quote from OpenAI’s press release—students would “think critically about their learning.”

Alas, critical thinking is dead, and so too is Study Mode for all intents and purposes. Two weeks ago, I debated James Donovan, OpenAI’s Head of Learning & Cognitive Outcomes Research, about the role of chatbots in education. As our conversation kicked off, moderator Alex Grodd expressed some befuddlement at his inability to find Study Mode, and Donovan confirmed it is no longer a tool that ChatGPT users can directly access in the “vanilla model” (that is, normal ChatGPT). Instead, ChatGPT will now allegedly “detect” when students are trying to study and then automatically refrain from providing answers. To which I say:

[GIF: Mike Myers as Dr. Evil in Austin Powers, looking disappointed with pouty lips, saying, “Right…”]

Donovan did suggest that for “B2B” users of ChatGPT in education—which apparently means countries such as Estonia that have partnered with OpenAI to embed ChatGPT in their education systems (shudder)—Study Mode will remain an option. Cool, that’s nice for Estonian kids, but not really relevant to the 400 million students worldwide who are using vanilla ChatGPT every day.1 Donovan, a smart and engaging guy, thus ended up arguing—based on secret OpenAI data—that students really are “cognitively augmenting” their education. You can judge for yourself how persuasive that is:

Study Mode was always a ruse, of course, a PR exercise masquerading as a pedagogical safeguard, but while it’s admittedly fun to dance upon its grave, something bigger is happening at OpenAI regarding its education posture. Remember last year when Leah Belsky, OpenAI’s VP of Education, promised us an “education moonshot”? I think she’s still employed there, but the entire “moonshot division” at OpenAI appears to have run up the curtain to join the choir invisible as the company desperately tries to figure out how to generate revenue in advance of its suddenly somewhat dicey IPO prospects.

To wit: a few weeks ago Denise Dresser, OpenAI’s Chief Revenue Officer and de facto COO at present, issued an org-wide memo that leaked—and boy, it’s hard to see how education fits into what they have planned going forward. Time for a brief fisking:

As we start Q2, I want to begin where we always should: with our customers. I have been spending time with leaders across our largest enterprises, most influential startups, and key venture firms. The message is clear. People are excited about what we are building, and they want a deeper view into our roadmap so they can plan with confidence and stay ahead of the market.

Enterprises, startups, and venture firms—but no mention of talking to students, who still comprise the majority of ChatGPT users (never forget this!). But students are not paying customers, and this memo is crystal clear about who really matters to OpenAI right now.

Enterprises buy business outcomes….They pay for higher revenue per employee, faster cycle times, lower support costs, and better execution.

Yep, this’ll map neatly to the culture of schools.

Our compute advantage sets us up to deliver continuous leaps in capability….Every step forward in compute lets us train stronger models, serve more demand, and lower the cost per unit of intelligence.

Oh yeah, teachers will love love love the idea of calculating the “cost per unit of intelligence.” Sign ‘em up.

The market has moved from prompts to agents. That shift is a massive opportunity for us.

Customers want systems that can reason, use tools, operate across workflows, and perform reliably inside real business environments.

Oh you don’t say! RIP, “prompt engineering,” long live AI agents, I guess. Does this mean we no longer need to provide professional development to teachers to improve their “AI literacy”?

I’ll remove the tongue from my cheek now, because I do believe this corporate reorientation “from prompts to agents” is a big deal, and I plan to write more about it soon. For now, though, I’ll just observe that while businesses may be lining up to have digital agentic hamsters scurrying about their databases, it’s not clear that normal humans share in that excitement. As teacher Stephen Fitzpatrick notes, “AI is being built for coders,” which is cool for them I suppose…but what about everyone else? Over to Elizabeth Lopatto in The Verge:

LLMs are, at best, an enterprise technology that may make certain kinds of data organization easier, or coding faster. This has almost nothing to do with most people’s lives. Dinking around with code is a hobby many tech people enjoy and one the rest of us simply don’t care about. Making it easier to write code doesn’t change that I don’t want to write code. I have other hobbies!

Me too, Liz, me too. And look, while I’m intellectually curious about AI, there is no part of me yearning to expose my private, personal data to AI agents. Not only do I not trust the technology, I’m not even sure what I’d want a coding agent to do. John Herrman with the Intelligencer recently ran into the same problem, with hilarious results:

Unfortunately, you must now confront the problem at the heart of every AI deployment, personal or corporate, fun or fatal, lark-driven or editorially minded: What is all this automation for?

This is a recurring theme when you try out new AI tools. You recognize that there’s a lot that might be done with them, but not much comes to you. You see this in the rise of AI coding tools, which you find extraordinarily impressive as you use them to … make yourself another … news reader? Notes app? Personal website, again?

You also dimly comprehend that in trying to understand your daily habits as a series of workflows with an eye on automation, you’re going through a similar set of motions as countless thousands of companies across the economy, some of whom see nothing but opportunity in AI — to cut costs and people, or to invest and grow — while others, fearing competition and obsolescence, rush to adopt AI without knowing what problems they need to solve, much less which ones the technology can handle. You identify on an emotional level with the doomed firms buying compute they don’t really know how to use.

Mercifully, humans are not firms, and students are not enterprises. And strangely enough, I’ll be (somewhat) relieved if OpenAI pivots to “B2B,” because it should mean less education malpractice from Belsky and team. As I note in the clip below, such harms are largely invisible in the moment, but eventually they will be made apparent. The good news is that concerned parents, students, educators, journalists, and others are observing this malpractice first-hand, and they do not like what they see.2

And they—we—are organizing to stop it.


1. To be completely fair—cue Miracle Max voice—Study Mode is only mostly dead. I say that because when I reached out to OpenAI’s press team to confirm its demise, the company informed me it can still be activated by typing “/” in the prompt box. Weirdly, that seems true for the web version of ChatGPT but not the mobile app. Of course it’s a moot point, because no one knows about this option in the first place, and OpenAI’s press person did not respond to my further inquiries about why Study Mode is confined to this sad digital purgatory.

2. My thanks to Jodi Carreon for supplying this video snippet—and she’s one of the organizers helping to stem the EdTech tide in schools. More details here.


Editing my LLM assisted Articles


Last year, I used AI to help me write articles. As I've mentioned before, it's convenient because it saves time. But the problem comes when you try to quote those articles back: whatever you think you wrote is not what's in there. I always cringe when I read them back.

As I've said before, I'm rewriting those articles so that they capture my voice, and so that I can actually quote the real thoughts I had in mind at the time of writing. I'll show you exactly what one article looked like before and after: from the prompt, to the final post, to the new edit.


Prompt: Feb 4th, 2025 on DeepSeek

I am writing a blog post, help me refine it . It should be a fun read that helps convince readers that building useless tools is part of the journey and career growth.

Programmers don't use calculators, they build them and abandon them 3 quarters completed.

The start of a project is always exciting. You have an brilliant idea, you have the skills to build it, and you have a blind spot. This blind spot propels you into a journey of excitement, setting up the project, deciding on frameworks, paying for AWS servers. You even get to test some of your ideas in a prototype version of your project.

As you keep building the project, and resolving the low hanging fruits, you realize that it takes more than the CRUD part to bring your project to life. This is where we start making excuses. The clearer the blind spot gets, the more we lose interest. Resistance is the secret to creating a collection of abandoned projects.

Early in my career, i found myself with a brand new life altering idea several times a week. My weekends were spent in front of a laptop, typing away on my IDE and testing an idea. I have to admit I often get distracted by creating meta code, like a ORM that should be ultra efficient, or a html parser that will allow me to use custom html tags that automatically render into components in real time. At the end, I have a robust framework with 3 quarters of an application that is getting less and less exciting.

But this is not a warning call to tell you your idea sucks and shouldn't work on it. It's a confession to tell you that those useless projects are exactly how developers hone their skills. We are tricked into wanting to change the world by our own mind. But the result is that we end up learning new marketable skills. I wrote about the importance of kids playing before, it helps them hone physical skills that you can't learn from a book. You have to climb the monkey bar at the park to develop the strength of your arms. In that same way, we need to build 3 quarters of a calculator to learn how to properly compartmentalize your util functions


DeepSeek "thought" for 38 seconds, and gave me a draft titled: "Why Building Useless Tools is the Secret Sauce of Developer Growth".

First Draft:

Title: Why Building Useless Tools is the Secret Sauce of Developer Growth

Hook:
Programmers don’t use calculators—they build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README.


1. The Siren Song of a New Project

Every developer knows the rush: you’re struck by a “life-altering” idea (your 14th this week), and suddenly, you’re setting up repositories, debating frameworks, and buying AWS servers at 2 AM. The blind spot? You’re convinced this time, you’ll finish. Spoiler: You won’t. But that’s okay.

“The first 90% of the project takes 10% of the time. The remaining 10% takes another lifetime.”
—Ancient Developer Proverb (probably).


2. The Blind Spot: A Fuel for Creative Chaos

That initial blind spot isn’t a flaw—it’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned:
- How to optimize database queries (to make LazyLlama slightly less slow).
- Why shadow DOM exists (after your custom <sparkle-button> tag broke 14 legacy sites).


3. Resistance is Inevitable (and That’s the Point)

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure—it’s a teacher. Every abandoned project is a masterclass in:
- Scope creep: “Wait, should the calculator also solve quantum physics?”
- Tooling: “I spent 3 days building a CI/CD pipeline… for a todo app.”
- Prioritization: “The login page can wait—let’s add animated emojis first.”


4. Playtime for Grown-Ups: How Useless Projects Hone Skills

Kids don’t climb monkey bars to “build upper body strength”—they do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger:
- That abandoned Twitter-for-Cats app taught you websockets.
- The “ultra-efficient” ORM? Now you understand query optimization.
- The calculator left at 75%? You’ve mastered state management… and humility.


5. Embrace the Graveyard of Unfinished Ideas

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Remember:
- Instagram started as a check-in app.
- Slack was a gaming company’s side hustle.
- Your Blockchain-Based Pet Rock project? It taught you smart contracts—and that not every idea needs to see daylight.


6. Keep Building (and Abandoning)

The next time guilt whispers, “You never finish anything,” laugh in its face. You’re not failing—you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day.

So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

Final Thought:
The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


Call to Action:
What’s your favorite abandoned project? Tag it #UselessButBrilliant and confess your coding sins. We’ve all got a LazyLlama in our closets.


I made some edits, asked it to remove the headings, and this is what I got:

Title: Why Building Useless Tools is the Secret Sauce of Developer Growth

Programmers don’t use calculators—they build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README. Every developer knows the rush: you’re struck by a “life-altering” idea (your 14th this week), and suddenly, you’re setting up repositories, debating frameworks, and buying AWS servers at 2 AM. The blind spot? You’re convinced this time, you’ll finish. Spoiler: You won’t. But that’s okay.

That initial blind spot isn’t a flaw—it’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned how to optimize database queries (to make LazyLlama slightly less slow) or why shadow DOM exists (after your custom <sparkle-button> tag broke 14 legacy sites).

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure—it’s a teacher. Every abandoned project is a masterclass in scope creep (“Wait, should the calculator also solve quantum physics?”), tooling (“I spent 3 days building a CI/CD pipeline… for a todo app”), and prioritization (“The login page can wait—let’s add animated emojis first”).

Kids don’t climb monkey bars to “build upper body strength”—they do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger. That abandoned Twitter-for-Cats app taught you websockets. The “ultra-efficient” ORM? Now you understand query optimization. The calculator left at 75%? You’ve mastered state management… and humility.

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Instagram started as a check-in app. Slack was a gaming company’s side hustle. Your Blockchain-Based Pet Rock project? It taught you smart contracts—and that not every idea needs to see daylight.

The next time guilt whispers, “You never finish anything,” laugh in its face. You’re not failing—you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day. So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


I've retitled it, removed the headings from the draft, edited it down, and published this version:

Article Published on March 6th, 2025:

Why Developers Build Unfinished Calculators

  • It's the only way to learn

Every developer knows the rush. You are driving and suddenly you’re struck by a “life-altering” idea (your 14th this week). At the next red light, you record an audio while driving, avoiding eye contact with what clearly looks like a cop’s car. At 2 AM, you wake abruptly remembering the recording. Now you’re setting up repositories, debating frameworks, and buying AWS servers in the middle of the night. The blind spot? You’re convinced this time, you’ll finish.

Spoiler: You won’t. But that’s okay.

Programmers don’t use calculators. They build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README.

That initial blind spot isn’t a flaw. It’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned how to optimize database queries (to make LazyLlama slightly less slow) or why shadow DOM exists (after your custom <sparkle-button> tag broke the browser's rendering engine).

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure. It’s a teacher. Every abandoned project is a masterclass in scope creep (“Wait, should the calculator also solve quantum physics?”), tooling (“I spent 3 days building a CI/CD pipeline… for a todo app”), and prioritization (“The login page can wait—let’s add animated emojis first”).

Kids don’t climb monkey bars to “build upper body strength.” They do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger. That abandoned Twitter-for-Cats app taught you websockets. The “ultra-efficient” ORM? Now you understand query optimization. The calculator left at 75%? You’ve mastered state management… and humility.

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Instagram started as a check-in app. Slack was a gaming company’s side hustle. Your Blockchain-Based Pet Rock project? It taught you smart contracts and that not every idea needs to see daylight.

The next time guilt whispers, “You never finish anything,” laugh in its face. “You’ve got the wrong fellow,” you answer. You’re not failing, you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day. So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


It sounds very much like any LLM, and I couldn't stand reading it. At the time, I was trying to save time under a heavy schedule of writing every other day for a whole year, and I ended up with this. To be fair, if you read it, it does capture the idea I was trying to share.

As far as being functional, it did exactly what it was supposed to do. But it wasn't my human experience with the subject. In my new edit, I've removed the things that don't sound like me, the phrasings that feel awkward to me. I'm happy with the result. It's not a banger, but it captures my sentiment on why developers build calculators.

Read edited article here (May 1st 2026)


Why are the Artemis II photos on Flickr?


If you followed along with the recent joyful celebrations of the Artemis cruise around the moon, and took a moment to dive into the photographic archives of the mission, you might have noticed that all of the original images were shared by NASA on the venerable photo sharing service Flickr. What you might not know is… why?

Here’s the TL;DR:

  • Flickr comes from (and helped start!) the Web 2.0 era, which was based on users having control over their data
  • Tools at that time began giving creators the power to decide what license they wanted to release their content under, including permissions about how it could be shared, used, or remixed
  • Because the people who made platforms back then were users and creators themselves, they thought about the long term and wanted to be able to preserve people’s work
  • After lots of corporate shuffling, Flickr ended up in the hands of a family-owned company, SmugMug, and they made the Flickr Foundation to preserve public photos for the next 100 years
  • NASA’s images should only be on a service where they can be stored in full resolution, for the long term, dedicated to the public domain — which the other social media apps of today can’t do

The Photographic Record

First, some background for folks who might not know what Flickr is, or who may have forgotten. Flickr is a social sharing site for photography which was founded in 2004, and these days people might say that it shares some of its cofounders with Slack, though back when Slack started, everybody said that the company was started by some of the founders of Flickr. That’s because Flickr was arguably the most influential site of the Web 2.0 era, helping define everything from the user interface design to the bright colors to the easy way that developers could access data from the platform. A lot of the things that we take for granted on the modern social internet, like a friendly “voice” used to communicate to users, were pioneered by Flickr, and then quickly came to be considered standard expectations for the apps and sites that followed. It’s hard to imagine that sites from Tumblr to Grindr would have omitted their final “e”s without Flickr’s precedent.

Flickr spun out of a Canadian gaming company called Ludicorp, founded by Stewart Butterfield (later CEO/co-founder of Slack) and Caterina Fake (later an investor and chair of Etsy). The photo-sharing service was extracted from the pieces of a somewhat unsuccessful attempt at multiplayer gaming called “Game Neverending”, but it retained the playfulness of that game even as it became a social app. Flickr also inherited the fine-grained privacy controls and thoughtful community features of earlier social platforms like LiveJournal — along with being actively, intentionally moderated by actual humans who worked diligently to prevent destructive behaviors on the platform. This meant that, more than 20 years ago, this early photo sharing community typically had better social norms than people see on today’s social media apps. (A little side note: Part of Flickr/Ludicorp’s initial funding came from public money. What a remarkable way to fund lasting innovation!)

With all of these groundbreaking features, Flickr didn’t just inspire lots of other entrepreneurs to create a new wave of Web 2.0 startups, it also attracted millions of users who, for the first time, began taking photos with the primary goal of sharing them online. Prior to this moment, the earliest phones with decent cameras were coming to market (it would be years until the iPhone came out), and other photo services of the time were still often oriented towards taking film to processing facilities, and then having the professionals at those facilities scan the resulting images and post them to a clunky online service where you could tediously click through them in a virtual album. Until Flickr, photo sharing online was essentially still analog, even if the experience was technically happening online.

In Focus

Flickr wasn't a social platform first — it was a photography platform first. That means it was designed to store high-resolution versions of every image, and didn't distort pictures with things like filters. Every image showed details like what kind of camera had taken the photo, and even what specific settings were used to take the shot. People started building communities around the then-new idea of using tags to help them find content by topics online — an idea that would directly influence the creation of hashtags on Twitter a few years later.

Another core idea of the time was a firm belief in open data: people should own and control their own work. Eventually, some experts (including a then-teenage Aaron Swartz, whom we'd later talk about in the early days of Markdown) created a set of standards called Creative Commons licenses, now maintained by an organization of the same name. Flickr made it easy for users to describe what permissions people had for reusing or remixing any photos they posted. (I was helping out with a blogging platform back then, and I think we were the first tool to support this stuff. It felt like a big deal at the time!)

People's Flickr images started popping up in corporate PowerPoint presentations or commercial advertising almost immediately. A little sidebar: the incredibly positive and generous intent of these open licenses has since been exploited by extractive Big AI companies, who ransacked all of the images on Flickr that had permissive licenses without any consent from, or compensation to, the creators. That might be legal by most readings of the licenses, but if you have hundreds of billions of dollars and don't think you should at least have a conversation with the photographers whose work you're using, you're probably an asshole.

Archival Prints

Our close-knit community of people building the new era of web apps was keenly aware that our users were creating culture. This realization brought a huge amount of responsibility — not just in enabling users to express themselves, but in thinking about the long term for people's ownership of their works. Public institutions had just begun to use these platforms, which meant that the content being shared wasn't just a nice picture to look at: it might be socially or even historically significant.

What happened in the years that followed was… a lot of corporate machinations. Flickr got bought by Yahoo. Flickr's founders left Yahoo. Yahoo got bought by Verizon. You can imagine how all of that went; the details aren't all that important, except to say that by the time Instagram launched, Flickr had begun to fade into obscurity. People were focused on mobile phones instead of the desktop, on sharing square images with filters instead of full-resolution photography, and on connecting socially instead of caring about photos as art or a cultural record. Nobody would post the canonical historical photo of an event with a Valencia filter on it. Most of Flickr's users moved on, rarely checking their old accounts — until a family-owned photo service named SmugMug bought the service from Yahoo. A human-scale operation with some actual heart and a love of photography was a much better home for the platform than some random division of Verizon.

Commons Sense

In 2022, the new team at SmugMug that owned Flickr decided to focus on Flickr’s larger place in culture. Many major institutions around the world had chosen to archive their public photos on Flickr because of its superior support for high-resolution imagery, its unique ability to declare explicit legal licenses (including public domain licenses), and its long-term reputation for reliably hosting content without any of the harms or abuses that typical social networks had inflicted on users. Museums around the world had entire catalogs on the platform, and governments routinely used it to document their public events. When I had a photo taken at an official White House event with President Obama, his team shared the final image afterward by sending me a Flickr link; when Zohran Mamdani met King Charles, the NYC Mayor’s Office shared those pictures on Flickr, too.

The Flickr team at SmugMug did something special with their stewardship of these public works, given their cultural significance to the world. They made the Flickr Commons, and brought in a team with expertise in digital archiving and community. This is a project of The Flickr Foundation, designed to preserve digital legacies, and begun in collaboration with no less than the U.S. Library of Congress (back before that was an institution under siege). They are developing a hundred-year plan for how to care for these works, which is virtually unheard of in the digital world. (You should absolutely donate to support the Flickr Foundation in its mission to preserve these vital public resources for many years into the future.)

It’s in this context that NASA has long been sharing its imagery on Flickr, for all of its missions — not just Artemis II. There’s even a special section for NASA on The Commons. And since everything is provided in incredibly high resolution and has every single detail about the photo and how it was taken, it’s possible to combine the information about the photo with other data and create amazing resources like this beautiful timeline of the entire mission. You can see Hank Green’s wonderful narration of his inspiration and creative journey behind the timeline right here:

Why Not With Us?

Anybody who’s read my site for a while knows that I’m a huge proponent of owning your own website, and having your own content live there. Shouldn’t NASA, of all institutions, have their photos live on their own nasa.gov website? Well, yes! But.

One complication is that many large institutions, especially ones that have developed complex processes for good reasons, like government agencies and big businesses, often have trouble maintaining public-facing web infrastructure over long timeframes. Running a website that millions of people can access requires constant updates and maintenance, guarding against a never-ending onslaught of security challenges (a task that’s rapidly getting more difficult!), and the internal knowledge on how a site was created in the first place often leaves when employees do.

In contrast, platforms that are run by technically fluent, well-intentioned and thoughtful technologists can be very effective in maintaining content over a timescale of decades. The SmugMug team has been very thoughtful in managing both their business and their technical infrastructure in order to sustain Flickr’s public archives for years to come. (Though, as mentioned, you should still donate to ensure they can keep doing so!)

What’s more painful are the more recent threats to public stewardship of this kind of content. The traditional authoritarian impulse to destroy or falsify the public record has not spared the digital realm under the current administration. Wide swaths of the government’s websites have been erased, taken offline, or had their content modified to either delete or adulterate the content. Leaders who regularly post AI slop on their social media accounts, and who have begun posting lies and distortions on major websites like the White House’s, will of course not hesitate to modify or remove photos from public archives as well. By having the public’s images preserved in an independent archive in standard formats, we increase the likelihood of future generations being able to access accurate copies of these historical records.

We’ll be glad to have archives like Flickr’s in the future, and people around the world will be glad for its place in archiving even the much more mundane aspects of culture.

Taking off

I was honored to get to reflect on my long history with Flickr, and with online community, in an interview with my old friend Jessamyn West, for the Flickr Foundation’s blog. In a conversation that unspooled over a few months, I think we covered so many of the themes that resonated in what I’ve mentioned here, and what struck me most was how much I wanted a new generation of people on the internet to have their own version of the communities and experiences that we got to have when sites like Flickr were first being made. People still cherish those values!

The beautiful thing about communities and platforms like Flickr is that they remind us that not everything on the internet has to be ephemeral, and not everything on the web has to be hyper-commercial. Sometimes a bunch of decent people can do a good thing for the right reasons, and the result of that work can persist for decades. Then, others who do some of the most ambitious and astounding things imaginable can build on that work to inspire us. And then, some more regular folks can build on top of that and help us waste a little bit of time just clicking around on something fun. That’s what the internet is supposed to be about!

This isn’t just about recounting old web lore — this is about explaining the internet we have right now. Hank’s timeline site is brand new, entertaining a whole new generation, and probably the majority of the audience who are looking at it weren’t even born when Flickr was first conceived. But the reason he can build that site is because of the values and the inventiveness of the team and community who created a platform like Flickr — and because those kinds of values are durable. They might not be as loud or flashy, but they are still everywhere, quietly enabling a lot of the things we enjoy most every day.

Public dollars helped make a fascinating community, then public dollars enabled a breathtaking journey into space, and then a public commons helped a creator make a novel way to explore that journey. Lots of people chose, over and over, to be generous with their genius. These are all gifts that a bunch of strangers gave each other, over hundreds of thousands of miles, and many years. Inspiration is all around us!

[Image: A Setting Earth]


The Ethiopian running secret


[Image: a group of runners on a road, with two people observing from the side, in a flat, open landscape]

One school of training is highly personalised, technical and data-driven. The other is the one that wins marathons

- by Michael Crawley & Geoff Burns

Read on Aeon


Ada Palmer & Bruce Schneier: AI Learns Language From...


Ada Palmer & Bruce Schneier: AI Learns Language From Skewed Sources. That Could Change How We Humans Speak – and Think. “Our sense of the world may become distorted in ways we have barely begun to comprehend.”


What to do when “Research Shows” shuts you down


Chalk & Talk is one of our favorite sources for education content, and so we’re thrilled to have this guest post from Anna Stokke, based on a presentation she gave at researchED Toronto in June 2025. You can listen to Anna talk about this article on her most recent Chalk & Talk episode, now available here!

The Schoolhouse is a series from Education Progress featuring articles for and from teachers, parents, education officials, and others working in the education system.


People often ask me how I became involved in math education, and why I so often call out poor practice and insist on evidence. As with many of us, it’s personal.

We sent our daughter to school expecting she’d be taught math. After all, that’s what schools do: they teach kids how to read and do math. But our daughter didn’t seem to be learning anything in her math class. By Grade 3, most days either consisted of a “problem of the day” that students didn’t have the skills to solve, or confusing lessons on convoluted methods for doing basic arithmetic.

It all came to a head when we were invited to a parent math information night. “What should math look and feel like?” the flyer asked. “How do we help children see that math is a subject where thinking, not just remembering, is the main event?”

Who could be against thinking? Certainly not me. But if I’d known then what I know now, I’d have recognized this as code for “no remembering at all.” My husband and I walked to the school that evening hopeful that those first few months of Grade 3 were an anomaly. Maybe soon they’d start teaching some math.

Instead, the parent math night deepened our concerns. We were told that the new math curriculum (or standards, as they’re called in the United States) discouraged standard algorithms — the traditional vertical algorithms for arithmetic — in favor of invented strategies or less efficient, overcomplicated procedures. We were assured this approach promotes “conceptual understanding” but, as mathematicians, my husband and I were skeptical. To reinforce the message, we were given a research paper that supposedly showed that standard algorithms are harmful, claims that often trace back to the widely criticized work of Constance Kamii (see critiques here and here).

Looking around the room, most parents seemed satisfied. Who wouldn’t trust the schools to teach our children well?

Some weeks later, the school brought in a well-known Canadian math consultant and author to give a presentation to parents. When asked directly, she gave parents the advice that it doesn’t really matter whether kids commit multiplication tables to memory, a claim that runs counter to strong evidence (see, for example, here and here). It became clear where those “problems of the day” were coming from.

As for the research paper we’d been given, it was a small case study involving children with learning difficulties. There was no control group and no statistical analysis. The researcher drew faulty conclusions that did not follow from the evidence. It didn’t support what the school was telling us at all.

This was my first encounter with education research, and I wasn’t impressed. How could flawed studies and non-existent evidence shape how children were being taught math? And why was no one asking questions?

Our daughter’s classroom wasn’t an outlier. The same patterns were playing out in classrooms across the country. That moment set me on a path I’m still on today: pushing for better standards in math education. Over time, I’ve learned to read between the lines, to ask pointed questions, to look closely at what’s presented as evidence, and to never take education claims at face value.

I’d like to share what I’ve learned, in the hope that it helps other parents and teachers.

The first thing to understand is that the phrase “research shows” is used loosely in education. It doesn’t carry the same weight as when a doctor or scientist uses the phrase. In education, it might refer to a blog post, an opinion dressed up as evidence, or a small, low-quality study. Even a published journal article in education should be scrutinized. A surprising amount of education research is of very low quality.

But the phrase is powerful and persuasive. It makes opinion sound like fact and lends authority to claims that haven’t been properly tested. Yet when a claim is repeated enough, it starts to feel like established truth.

I call this the wildfire effect.

The wildfire effect: How bad ideas spread

  1. A flawed study or opinion piece is cited by an influential educator.

  2. It’s repeated at education conferences, professional development sessions, and on social media.

  3. It appears in district documents, books, and other education papers.

  4. It then gets cited as well-established research.

  5. It becomes justification for education policy.

At no point in the process is the evidence seriously examined.

A good example is the claim that timed tests cause math anxiety. This claim is not supported by high-quality research, and it seems to trace back to an opinion piece by the influential math educator Jo Boaler. It has been repeated so many times that many educators believe it reflects accepted research. It doesn’t: recommendations from the Institute of Education Sciences (IES) list timed activities as a research-informed way to support students struggling with math.

What’s at stake? A lot.

When weak or non-existent evidence drives decisions, students don’t get effective instruction, struggling students fall further behind, teachers are misled, resources are wasted, and high-quality research gets drowned out.

For this reason, I believe more teachers and parents need to be proactive. A PhD, a position of influence, or a published book is not proof of accuracy. Anyone can write a book, and people with PhDs can be wrong. Evidence is what matters, not credentials.

Ask for evidence, evaluate it, and become informed. Here’s where to start.

Step 1. Ask for evidence

When you ask for evidence, you may encounter tactics designed to shut you down.

One common tactic is shifting the burden of proof: someone makes a radical claim, refuses to provide evidence, and throws the burden of proof onto you instead.

Here’s an example:

  • Claim: “Research shows standard algorithms are harmful.”

  • You: “Please provide evidence of your claim.”

  • Response: “You need to prove they’re not harmful.”

Who holds the burden of proof? As Carl Sagan said, “Extraordinary claims require extraordinary evidence.” The burden of proof lies with the person making the claim, especially when it challenges established practice. In this case, stating that standard algorithms are harmful is a radical claim that goes against conventional wisdom. It is therefore incumbent on the individual making that claim to provide evidence.

Another tactic to watch out for is the firehose effect: avoiding providing evidence by overwhelming the questioner with sources. I’ve experienced this firsthand, repeatedly. When I asked for evidence, I’d be told to read a book with hundreds of references. The reason is simple: it’s impractical to check the validity of hundreds of references.

The best example I know of someone pushing back against the wildfire effect is from Stanford math professor Brian Conrad. Alarmed by dubious claims in a 1000-page draft of the 2021 California Math Framework (CMF), he carefully examined every claim and reference. He found repeated citation misrepresentations, non-peer-reviewed articles, and sweeping generalizations, which he documented in a public critique of the CMF.

Most people won’t do what Conrad did, but there are some things you can do to dampen the firehose effect. First, be specific. Ask for two or three high-quality studies — not entire books — on the specific topic in question. You can also divide large reference lists among several people. My colleagues and I did this recently when an education professor claimed that requiring K-8 teachers to take math made them worse math teachers. When asked for evidence, she sent 22 articles. We split them up, read every one, and wrote a report on our findings. None of the provided articles supported her claim, and several contradicted it.

A third tactic you might encounter is credential deflection: instead of providing you with the requested evidence, someone questions your right to ask for it. I’ve experienced this directly. A contract instructor in a Faculty of Education once publicly wrote this about me: “Let me stress that her perspective as a mathematician is far different than that of a math educator. Many of the statements she makes are a reflection of her lack of knowledge regarding effective practice.” This is an ad hominem attack: criticizing the person instead of engaging with the argument. He was implicitly saying that only people trained in education are qualified to evaluate evidence within that field — as though there’s something special about education research that the rest of us can’t understand.

But this is preposterous. A mathematician is often in an even better position to identify weak methodology in education papers, such as missing control groups, flawed statistical analyses, or illogical conclusions. Honestly, critical thinking skills are often all that’s needed to assess the validity of many education papers. If someone attacks your credentials, direct them back to the critical question: please provide evidence for your claim.

The fourth tactic to guard against is gaslighting: when someone tells you that a poor practice you’ve witnessed is barely happening in schools. This tactic is used to shut down conversations before they can start. For instance, I’ve been told that inquiry-based instruction is rare and that most classrooms are dominated by direct instruction: the practice is hardly occurring, so why dwell on its effectiveness? A simple response is to provide evidence to the contrary, which means the best defence is having receipts. Professional development and school newsletters reflect how teachers are being encouraged to teach. What professional development is being offered? How often does it focus on explicit instruction, retrieval practice, acquiring fluency with basic math facts, or direct teaching of critical math skills versus Building Thinking Classrooms, growth mindset, or inquiry? These kinds of school and district resources offer a paper trail that can’t be easily gaslit.

Step 2. Evaluate the evidence

If you do receive research articles (which I’ve found is unlikely, particularly when the claim runs counter to common sense), you’ll need to assess them. First, here’s what doesn’t count as evidence:

  1. Opinion pieces

  2. Newspaper or magazine articles

  3. Articles that are not peer-reviewed

  4. Position statements: the NCTM (National Council of Teachers of Mathematics), for example, has published position statements that are not grounded in evidence.

Next, watch for red flags like the two below (five are discussed in detail here):

One common issue, often found in education articles, is the lack of meaningful and measurable criteria.

Math education is full of appealing but vague terms: critical thinking, conceptual understanding, number sense, curiosity, differentiation. These terms have no standard definitions, which makes them effectively impossible to measure. If you are told that a program promotes number sense or critical thinking, that’s a red flag.

Another thing to watch out for is when programs get labeled as research-based, but the underlying studies didn’t actually measure whether students learned. For example, the popular math program Building Thinking Classrooms is often described this way, but the study often cited measured engagement, not whether students learned math (see critiques here and here). Engagement isn’t learning. Students can be very engaged but learn very little.

If a math program claims to be evidence-based, it should be supported by high-quality research that measured whether students learned math.

Step 3. Become informed

Finally, the best defence against bad ideas in education is to equip yourself with knowledge about evidence-informed practices. High-quality sources include the Institute of Education Sciences practice guides, the National Mathematics Advisory Panel Final Report, the National Center on Intensive Intervention, and the Education Endowment Foundation. These sources synthesize rigorous research and focus on what improves student outcomes. The more you know about what the best research supports, the easier it is to spot bad ideas before they spread. And check out my podcast Chalk & Talk, where I speak with experts from around the world about evidence-based education.

Our daughters have mathematicians as parents, so they got the math instruction they needed. Most children don’t have that advantage. If we want better outcomes for children, we must stop accepting “research shows” at face value and start demanding evidence.

