Books and screens


A modern library with tall bookshelves and people reading or using devices in a cosy, well-lit atmosphere.

Your inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time

- by Carlo Iacono

Read on Aeon


AI vibe-generates the same ‘random’ passwords over and over


If you ask a chatbot for a random number from one to ten, it’ll usually pick seven: [arXiv, PDF]

GPT-4o-mini, Phi-4 and Gemini 2.0, in particular, seem much more restricted in this range, as they choose “7” in ~80% of total cases.

Seven has long been known to also be humans’ favourite number when they’re asked for something that sounds random. From 1976: [APA, 1976]

When asked to report the 1st digit that comes to mind, a predominant number (28.4%) of 558 persons on the Yale campus chose 7.

Computers are pretty good at random numbers. But chatbots don’t work in numbers — they work in word fragments. So if you ask a chatbot for a random number, it’ll pick words from its training.

Guess what happens when people ask the chatbot for a password? Irregular, a chatbot testing company, tested chatbots on passwords: [Irregular]

LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens — the opposite of securely and uniformly sampling random characters.

Despite this, LLM-generated passwords appear in the real world — used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.

When you ask the chatbot for a strong password, it doesn’t generate a password — it picks example patterns of random passwords from its training.

Irregular asked Claude for 50 strong passwords. They found standard patterns in the passwords — most start with “G7”. The characters “L”, “9”, “m”, “2”, “$” and “#” appeared in all the passwords.

And the bot kept repeating passwords. One password appeared 18 times in the 50 passwords!

ChatGPT and Gemini gave similar results. But the passwords sure looked random.

The other problem with predictable passwords is that they’re easy to crack. In cryptography jargon, they have low entropy: an attacker has far fewer possibilities to work through than the password’s length suggests.
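
To put rough numbers on that, here’s a back-of-the-envelope sketch; the “pattern pool” size below is an illustrative assumption, not a measured figure:

```python
import math

# A genuinely random 16-character password, each character drawn
# uniformly from the 94 printable ASCII symbols:
uniform_bits = 16 * math.log2(94)        # about 104.9 bits of entropy

# A "password" effectively chosen from a limited pool of memorised
# training patterns (say a few thousand of them):
pattern_pool = 5_000                      # illustrative assumption
pattern_bits = math.log2(pattern_pool)    # about 12.3 bits of entropy

print(f"uniform password:      {uniform_bits:.1f} bits")
print(f"pattern-pool password: {pattern_bits:.1f} bits")
# Every lost bit halves an attacker's work: roughly 12 bits means a few
# thousand guesses instead of around 2**105.
```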

The Register tried reproducing Irregular’s work, and they got results much like Irregular’s. Chatbots are just bad at this. [Register]

Why would you even ask a chatbot to generate a password for you? Because many chatbot users treat the chatbot as their first port of call for everything. It’s their universal answer machine!

You and I might know better. But so many people just don’t. They fell for the machine that was tuned really hard to make people fall for it. Even the vibe coders fall for the password one.

So what should you tell them to do to generate a strong password? If your web browser has a password generator, use that. Password manager apps like 1Password and LastPass have password generators too. They’ll be okay. But fundamentally, anything is better for the job than a chatbot.
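
If you do want to generate one programmatically, the right tool is a cryptographically secure random source, not a language model. A minimal sketch using Python’s standard-library secrets module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Uniformly sample characters from the OS's secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each character carries about 6.5 bits of entropy, so 20 characters
# gives roughly 130 bits: far beyond any realistic guessing attack.
print(generate_password())
```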


Unsung heroes: Flickr’s URL scheme


Half of my education in URLs as user interface came from Flickr in the late 2000s. Its URLs looked like this:

flickr.com/photos/mwichary/favorites
flickr.com/photos/mwichary/sets
flickr.com/photos/mwichary/sets/72177720330077904
flickr.com/photos/mwichary/54896695834
flickr.com/photos/mwichary/54896695834/in/set-72177720330077904

This was incredible and a breath of fresh air. No redundant www. in front or awkward .php at the end. No parameters with their unpleasant ?&= syntax. No % signs partying with hex codes. When you shared these URLs with others, you didn’t have to retouch or delete anything. When Chrome’s address bar started autocompleting them, you knew exactly where you were going.

This might seem silly. The user interface of URLs? Who types in or edits URLs by hand? But keyboards are still the most efficient entry device. If a place you’re going is where you’ve already been, typing a few letters might get you there much faster than waiting for pages to load, clicking, and so on. It might get you there even faster than sifting through bookmarks. Or, if where you’re going is up the hierarchy, a well-designed URL will allow you to drag to select and then backspace a few things off the end.

Flickr allowed you to do all that, and all without a touch of the Shift key, too.

For a URL to be easily editable, it has to be easily readable, too. Flickr’s were. The link names were so simple that seeing the menu…

…told you exactly what the URLs for each item were.

In the years since, the rich text dreams didn’t materialize. We’ve continued to see and use naked URLs everywhere. And this is where we get to one other benefit of Flickr URLs: they were short. They could be placed in an email or in Markdown. Scratch that, they could be placed in a sentence. And they would never get truncated today on Slack with that frustrating middle ellipsis (which occasionally leads to someone copying the shortened and now-malformed URL and sharing it further!).

It was a beautiful and predictable scheme. Once you knew how it worked, you could guess other URLs. If I were typing an email or authoring a blog post and I happened to have a link to your photo in Flickr, I could also easily include a link to your Flickr homepage just by editing the URL, without having to jump back to the browser to verify.
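
To make the predictability concrete, here is how the scheme reads when written out as route patterns. This is a minimal sketch in Flask, purely for illustration; Flickr’s real implementation and handler names were nothing like this, and the set-membership URL (the /in/set- form) is left out:

```python
from flask import Flask

app = Flask(__name__)

# The scheme from the examples above, as five readable patterns.
@app.route("/photos/<user>")                   # a person's photostream
@app.route("/photos/<user>/favorites")         # their favourites
@app.route("/photos/<user>/sets")              # their albums
@app.route("/photos/<user>/sets/<set_id>")     # one album
@app.route("/photos/<user>/<photo_id>")        # one photo
def photos(user, set_id=None, photo_id=None):
    # Stub handler; a real app would look up and render the page here.
    return f"user={user} set={set_id} photo={photo_id}"
```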

Flickr is still around and most of the URLs above will work. In 2026, I can think of a few improvements. I would get rid of /photos, since Flickr is already about photos. I would also try to add a human-readable slug at the end, because…
flickr.com/mwichary/sets/72177720330077904-alishan-forest-railway
…feels easier to recall than…
flickr.com/photos/mwichary/sets/72177720330077904

(Alternatively, I would consider getting rid of numerical ids altogether and relying on name alone. Internet Archive does it at e.g. archive.org/details/leroy-lettering-sets, but that has some serious limitations that are not hard to imagine.)

But this is the benefit of hindsight and the benefit of things I learned since. And I started learning and caring right here, with Flickr, in 2007. Back then, by default, URLs would look like this:

www.flickr.com/Photos.aspx?photo_id=54896695834&user_id=mwichary&type=gallery

Flickr’s didn’t, because someone gave a damn. The fact they did was inspiring; most of the URLs in things I created since owe something to that person. (Please let me know who that was, if you know! My grapevine says it’s Cal Henderson, but I would love a confirmation.)


Plums

My icebox plum trap easily captured William Carlos Williams. It took much less work than the infinite looping network of diverging paths I had to build in that yellow wood to ensnare Robert Frost.
2 public comments
tedder
1 day ago
I'm confused by this. What's the dilemma? Wanting to use the plum for dinner?
marcrichter
1 day ago
Me too, but there's always this: https://www.explainxkcd.com/wiki/index.php/3209:_Plums
ttencate
16 hours ago
Not always; it's throwing 500 errors right now. Apparently you and I are not the only ones who are confused 😁
dlowe
14 hours ago
There's a famous (infamous) poem about stealing plums from the icebox that has been remixed a thousand times in meme-culture.
fancycwabs
12 hours ago
He's tempted to go all William Carlos Williams on the plums, mostly for the opportunity to apologize later.

Highlights from Stanford's AI+Education Summit


I attended the AI+Education Summit at Stanford last week, the fourth year for the event and the first year for me. Organizer Isabelle Hau invited researchers, philanthropists, and a large contingent of teachers and students, all of them participating in panels throughout the day. That mix—heavier on practitioners than edtech professionals—gave me lots to think about on my drive home. Here are several of my takeaways.

¶ The party is sobering up. The triumphalism of 2023 is out. The edtech rapture is no longer just one more model release away. Instead, from the first slide of the Summit above, panelists frequently argued that any learning gains from AI will be contingent on local implementation and just as likely to result in learning losses, such as those in the second column of the slide.

¶ Stanford’s Guilherme Lichand presented one of those learning losses with his team’s paper, “GenAI Can Harm Learning Despite Guardrails: Evidence from Middle-School Creativity.” His study replicated previous findings that kids do better on certain tasks with AI assistance in the near-term—creative tasks, in his case—and worse later when the tool is taken away. “Already pretty bad news,” Lichand said. But when he gave the students a transfer task, the students who had AI and had it taken away saw negative transfer. “Four-fold,” said Lichand. What’s happening here? Lichand:

It’s not just persistence. It’s a little bit about how you don’t have as much fun doing it, but most importantly, you start thinking that AI is more creative than you. And the negative effects are concentrated on those kids who really think that AI became more creative than them.

A paper I’ll be interested in reading. This was using a custom AI model, as well, one with guardrails to prevent the LLM from solving the tasks for students, the same kind of “tutor modes” we’ve seen from Google, Anthropic, OpenAI, Khan Academy, etc.

¶ Teacher Michael Taubman had the line that brought down the house.

In the last year or so, it’s really started to feel like we have 45 minutes together and the together part is what’s really mattering now. We can have screens involved. We can use AI. We should sometimes. But that is a human space. The classroom is taking on an almost sacred dimension for me now. It’s people gathering together to be young and human together, and grow up together, and learn to argue in a very complicated country together, and I think that is increasingly a space that education should be exploring in addition to pedagogy and content.

¶ Venture capitalist Miriam Rivera urged us to consider the nexus of technology and eugenics that originated in the Silicon Valley:

I have a lot of optimism and a lot of fear of where AI can take us as a society. Silicon Valley has had a long history of really anti-social kinds of movements including in the earliest days of the semi-conductor, a real belief that there are just different classes of humans and some of them are better than others. I can see that happening with some of the technology champions in AI.

Rivera kept bringing it, asking the crowd to consider whether or not they understand the world they are trying to change:

But my sense is there is such a bifurcation in our country about how people know each other. I used to say that church was the most segregated hour in America. I just think that we’ve just gotten more hours segregated in America. And that people often are only interacting with people in their same class, race, level of education. Sometimes I’ve had a party one time, and I thought, my God, everybody here has a master’s degree at least. That’s just not the real world.

And I am fortunate in that because of my life history, that’s not the only world that I inhabit. But I think for many of us and our students here, that is the world that they primarily inhabit, and they have very little exposure to the real world and to the real needs of a lot of Americans, the majority of whom are in financial situations that don’t allow them to have a $400 emergency, like their car breaks down. That can really push them over the edge.

Related: Michael Taubman’s comments above!

¶ Former Stanford President John Hennessy closed the day with a debate between various education and technology luminaries. His opening question was a good one:

How many people remember the MOOC revolution that was going to completely change K-12 education? Why is this time really different? What fundamentally about the technology could be transformative?

This was an important question, especially given the fact that many of the same people at the same university on the same stage had championed the MOOC movement ten years ago. Answers from the panelists:

Stanford professor Susanna Loeb:

I think the ability to generate is one thing. We didn’t have that before.

Rebecca Winthrop, author of The Disengaged Teen:

Schools did not invite this technology into their classroom like MOOCs. It showed up.

Neerav Kingsland, Strategic Initiatives at Anthropic:

This might be the most powerful technology humanity has ever created and so we should at least have some assumption and curiosity that that would have a big impact on education—both the opportunities and risks.

Shantanu Sinha, Google for Education, former COO of Khan Academy:

I’d actually disagree with the premise of the question that education technology hasn’t had a transformative impact over the last 10 years.

Sinha related an anecdote about a girl from Afghanistan who was able to further her schooling thanks to the availability of MOOC-style videos, which is an inspiring story, of course, but quite a different definition of “transformation” than “there will be only 10 universities in the world” or “a free, world‑class education for anyone, anywhere” or Hennessy’s own prediction (unmentioned by anyone) that “there is a tsunami coming” for higher education.

After Sinha described the creation of LearnLM at Google, a version of their Gemini LLM that won’t give students the answer even if asked, Rebecca Winthrop said, “What kid is gonna pick the learn one and not the give-me-the-answer one?”

Susanna Loeb responded to all this chatbot chatter by saying:

I do think we have to overcome the idea that education is just like feeding information at the right level to students. Because that is just one important part of what we do, but not the main thing.

Later, Kingsland gave a charge to edtech professionals:

The technology is, I think, about there, but we don’t yet have the product right. And so what would be amazing, I think, and transformative from AI is, if in a couple of years we had an AI tutor that worked with most kids most of the time, most subjects, that we had it well-researched, and that it didn’t degrade on mental health or disempowerment or all these issues we’ve talked on.

Look—this is more or less how the same crowd talked about MOOCs ten years ago. Copy and paste. And AI tutors will fall short of the same bar for the same reason MOOCs did: it’s humans who help humans do hard things. Ever thus. And so many of these technologies—by accident or design—fit a bell jar around the student. They put the kid into an airtight container with the technology inside and every other human outside. That’s all you need to know about their odds of success.

It’ll be another set of panelists in another ten years scratching their heads over the failure of chatbot tutors to transform K-12 education, each panelist now promising the audience that AR / VR / wearables / neural implants / et cetera will be different this time. It simply will.



Beyond the Brainstorming Plateau


Introduction

Eighty-four percent of high school students now use generative AI for schoolwork.¹ Teachers can no longer tell whether a competent essay reflects genuine learning or a 30-second ChatGPT prompt. The assessment system they’re operating in, one that measured learning by evaluating outputs, was designed before this technology existed.

Teachers are already trying to redesign curriculum, assessment, and practice for this new reality. But they’re doing it alone, at 9pm, without tools that actually help.


I spent the last few months trying to understand why, and what I found wasn’t surprising, exactly, but it was clarifying: the AI products flooding the education market generate worksheets and lesson plans, but that’s not what teachers are struggling with. The hard work is figuring out how to teach when the old assignments can be gamed and the old assessments no longer prove understanding. Current tools don’t touch that problem.

Based on interviews with eight K-12 teachers and analysis of 350+ comments in online teacher communities, I found three patterns that explain why current AI tools fail teachers and what it would take to build ones they’d actually adopt. The answer isn’t more content generation. It’s tools that act as thinking partners where AI helps teachers reason through the redesign work this moment demands.

The Broken Game

For most of educational history, producing the right output served as reasonable evidence that a student understood the material. If a student wrote a coherent essay, they probably knew how to write. If they solved a problem correctly, they probably understood the underlying concepts. The output was a reliable proxy for understanding.

But now that ChatGPT can generate an entire essay in five seconds, that proxy no longer holds. A student who submits a coherent essay might have written it themselves, revised an AI draft, or copied one wholesale. The traditional system can’t tell the difference.

This breaks the game.

Students aren’t cheating in the way we traditionally understand it. They’re using available tools to win a game whose rules no longer make sense. And the asymmetry here matters since students have already changed their behavior. They have AI in their pockets and use it daily. Whether that constitutes “adapting” or merely “gaming the system in new ways” depends on the student. Some are using AI as a crutch that bypasses thinking. Others are genuinely learning to leverage it. Most are somewhere in between, figuring it out without much guidance.

Meanwhile, teachers are still operating the old system: grading essays that might be AI-generated, assigning problem sets that can be solved in seconds, trying to assess understanding through outputs that no longer prove anything. They’re refereeing a broken game without the tools to redesign it.

The response can’t be banning AI or punishing students for using it. Those are losing battles. The response is redesigning the game itself and rethinking what we assess, how we assess it, and how students practice.

Teachers already sense this. In my interviews, they described the same realization again and again that their old assignments don’t work anymore, their old assessments can be gamed, and they’re not sure what to do about it. They are not resistant to change but they are overwhelmed by the scope of what needs to change, and they’re doing it without support.

This matters beyond one industry. While K-12 EdTech is a $20 billion market, and every major AI lab is positioning for it, the real stakes are larger. When knowledge is instantly accessible and outputs are trivially producible, what does it even mean to learn? The companies building educational AI aren’t just selling software; the software is a means to an end. They’re shaping the answer to that question.

And right now, they’re getting it wrong.

What Teachers Actually Need

The conventional explanation for uneven AI adoption is that teachers resist change. But my research surfaced a different explanation. Teachers aren’t resistant to AI, they’re resistant to AI that doesn’t help them do the actual hard work.

So, what is the ‘hard work’, exactly?

Teacher work falls into two categories. The first is administrative: entering grades, formatting documents, drafting routine communications. The second is the core of the job: designing curriculum, assessing student understanding, providing feedback that changes how students think, diagnosing what a particular student needs next. This second category is how teachers actually teach.

Current AI tools focus almost entirely on the first category. They generate lesson plans, create worksheets, and draft parent emails. Teachers find this useful. It surfaces ideas quickly, especially when colleagues aren’t available to bounce ideas off of. But administrative efficiency isn’t what teachers are struggling with.

The hard problems are pedagogical. How do you redesign an essay assignment when the old prompt can be completed by AI in 30 seconds? How do you assess understanding when AI can produce correct-looking outputs? How do you structure practice so students develop genuine understanding rather than skip the struggle that builds it?

These questions require deep thinking. And most teachers have to work through them alone. They don’t have a curriculum expert available at 9pm when planning tomorrow’s lesson, or a colleague who knows the research, knows their students, and can reason through these tradeoffs with them in real time.

AI could be that thinking partner. That’s the opportunity current tools are missing.

Three Patterns from the Classroom

To understand why current tools miss the mark, I interviewed eight K-12 teachers across subjects, school types, and levels of AI adoption. I then analyzed 350+ comments in online teacher communities to test whether their experiences reflected broader patterns or idiosyncratic views.

What I found was remarkably consistent. Three patterns kept emerging, and each one reveals a gap between what teachers need and what current AI tools provide.

Pattern 1: The Brainstorming Plateau

Every teacher I interviewed used AI for brainstorming. It’s the universal entry point. “Give me ten practice problems.” “Suggest some activities for this unit.” “What are some real-world connections I could make?” Teachers find this useful because it surfaces ideas quickly when colleagues aren’t available.

But that’s where it stops.

One math teacher described the ceiling bluntly. “Lesson planning, I find to be not very useful. Does it save time or does it not save time? I think it does not save time, because if you are using big chunks, you spend a lot of time going back over them to make sure that they are good, that they don’t have holes.”

He wasn’t alone. This pattern appeared across all eight interviews and echoed throughout the Reddit threads I analyzed. Teachers described the same ceiling again and again. “The worksheets it made for me took longer to fix than what I could have just made from scratch.”

Notice what they’re highlighting: AI tools produce content that they then have to evaluate alone. They aren’t reasoning through curriculum decisions with AI; they’re cleaning up after it. The cognitive work, the hard part, remains entirely theirs, and now there’s extra cleanup work on top.

A genuine thinking partner would work differently. Not just “here’s a draft assignment” but “I notice your old essay prompt can be completed by AI in 30 seconds. What if we redesigned it so students have to draw on class discussions, articulate their own reasoning, reflect on what they actually understand versus what AI could generate for them? Here are a few options. Which direction fits what you’re trying to teach?”

That’s the kind of collaboration teachers want.

Pattern 2: The Formatting Tax

Even if AI could be that thinking partner, teachers would need to trust it first. And trust is exactly what current tools are squandering.

A recurring frustration across my interviews and Reddit threads is that AI generates content quickly, but the output rarely matches classroom needs. Teachers described spending significant time reformatting, fixing notation that didn’t render correctly, and restructuring content to fit their established norms. One teacher put it simply, “by the time you’ve messed around with prompts and edited the results into something usable, you could’ve just made it yourself.”

This might seem minor, a ‘fit-and-finish’ problem, but it highlights a broader trust problem.

Every time a teacher reformats AI output, they are reminded that this tool doesn’t know their classroom. It doesn’t understand their context, and it produces generic content, expecting them to do the translation.

And this points to something deeper. Teachers know that these tools weren’t designed for classrooms. They were built for enterprise use cases and retrofitted for education, optimizing for impressive demos to district leaders rather than daily classroom use. Teachers haven’t been treated as partners in shaping what AI-augmented teaching should look like.

A tool that understood context would work differently. Not “here’s a worksheet on fractions” but “here’s a worksheet formatted the way you like, with the name field in the top right corner, more white space for showing work, vocabulary adjusted for your English language learners. I noticed your last unit emphasized visual models, so I’ve included those. Want me to adjust the difficulty progression?”

That’s what earning trust looks like.

Pattern 3: Grading as Diagnostic Work

And even then, trust is only part of the story. Even well-designed tools will fail if they try to automate the wrong things.

When it comes to grading, teachers are willing to experiment with AI for low-stakes feedback (exit tickets, rough drafts, etc.). But when they do, the results disappoint. “The feedback was just not accurate,” one teacher told me. “It wasn’t giving a bad signal, but it wasn’t giving them the things to focus on either.”

For teachers, the problem is that the AI feedback is just too generic.

One English teacher shared why this matters, “I know who could do what. Some students, if they get to be proficient, that’s amazing. Others need to push further. That’s dependent on their starting point, not something I see as a negative.”

This is why teachers resist automating grading even when they’re exhausted by it. Grading isn’t just about assigning scores, it’s diagnostic work. When a teacher reads student work, they’re asking themselves: what does this student understand? What did I do or not do that contributed to this outcome? How does this represent growth or regression for this particular student? Grading is how teachers know what to do tomorrow for their students.

A tool that understood this would work differently. Not “here’s a grade and some feedback” but “I noticed three students made the same error on question 4. They’re confusing correlation with causation. Here’s a mini-lesson that addresses that misconception. Also, Marcus showed significant improvement on thesis statements compared to last month. And Priya’s response suggests she might be ready for more challenging material. What would you like to prioritize?”

That’s a thinking partner that helps teachers see patterns so they can make better decisions. The resistance to AI grading isn’t technophobia so much as an intuition that the diagnostic work is too important to outsource.

The Equity Dimension

Now, these patterns don’t affect all students equally… and this is the part that keeps me up at night.

Some students will figure out how to use AI productively. They have adults at home who can guide them, or the metacognitive skills to self-regulate. But many won’t. Research already shows that first-generation college students are less confident in appropriate AI use cases than their continuing-generation peers.² The gap is already forming.

This is an equity problem hiding in plain sight. When AI tools fail teachers, they fail all students, but they fail some students more than others. The students who most need guidance on how to use AI productively are least likely to get it outside of school.

Teachers are the only scalable intervention, but only if they can see what’s happening: which students are over-relying on AI as a crutch, which are underutilizing it, and which are developing genuine capability. Without tools that surface these patterns, the redesign helps the students who were already going to be fine.

What Would Actually Work

These three patterns point to the same gap: tools are currently designed to generate content, when what teachers need is help thinking through problems. Years of context-blind, teacher-replacing products have eroded trust among this critical population.

So what would it actually take to build tools teachers adopt? Three principles emerge from my research.

Design for thinking partnership, not content generation. The measure of success isn’t content volume. It’s whether teachers make better decisions. That means tools that engage in dialogue rather than just produce drafts. Tools that ask “what are you trying to assess?” before generating an assignment. Tools that surface relevant research when teachers are wrestling with hard questions. The goal is elevating teacher thinking, not replacing it.

Automate the tedious, amplify the essential. Teachers want some tasks done faster, such as entering grades, formatting documents, and drafting routine communications. Other tasks they want to do better: diagnosing understanding, designing curriculum, providing feedback that changes how students think. The first category is ripe for automation. The second requires amplification, where AI enhances teacher capability rather than substituting for it. “AI can surface patterns across student work” opens doors that “AI can grade your essays” closes.

Help teachers see who is struggling and why. Build tools that surface patterns on which students show signs of skipping the thinking, which show gaps that AI is papering over, which are developing genuine capability. This is the diagnostic information teachers need to differentiate instruction and ensure the students who need the most support actually get it.

The Broader Challenge

This is a preview of a challenge that will confront every domain where AI meets skilled human work.

The question of whether AI should replace human judgment or augment it isn’t abstract. It gets answered, concretely, in every product decision. Do you build a tool that grades essays, or one that helps teachers understand what students are struggling with? Do you build a tool that generates lesson plans, or one that helps teachers reason through pedagogical tradeoffs?

The first approach is easier to demo and easier to sell. The second is harder to build but more likely to actually work, and more likely to be adopted by the people who matter.

Education is a particularly revealing test case because the stakes are legible. When AI replaces teacher judgment in ways that don’t work, students suffer visibly. But the same dynamic plays out in medicine, law, management, and every domain where expertise involves judgment, not just information retrieval.

The companies that figure out how to build AI that genuinely augments expert judgment, rather than producing impressive demos that experts eventually abandon, will have learned something transferable. And very, very important.

Conclusion

Students are developing their relationship with AI right now, largely without deliberate guidance. They’re playing a broken game, and they know it. Whether they learn to use AI as a crutch that bypasses thinking or as a tool that augments it depends on whether teachers can redesign the game for this new reality.

That won’t happen with better worksheets. It requires tools designed with teachers, not for them. Tools that treat teachers as collaborators in figuring out what AI-augmented education should look like, rather than as end-users to be sold to.

What would that look like in practice? A tool that asks teachers what they’re struggling with before offering solutions. A tool that remembers their classroom context, their students, their constraints. A tool that surfaces research and options rather than just producing content. A tool that helps them see patterns in student work they might have missed. A tool that makes the hard work of teaching, the judgment, the diagnosis, the redesign, a little less lonely.

The teachers I interviewed are ready to do this work and they’re not waiting for permission.

They’re waiting for tools worthy of the challenge.


Research Methodology

This essay draws on semi-structured interviews with eight K-12 teachers across different subjects (math, English, science, history, journalism, career education), school types (public, charter, private), and levels of AI adoption. All quotes are anonymized to protect participant privacy.

To test generalizability, I analyzed 350+ comments in online teacher communities, particularly Reddit’s r/Teachers. I intentionally selected high-engagement threads representing both enthusiasm and skepticism about AI tools. A limitation worth noting is that high-engagement threads tend to surface the most articulate and polarized voices, which may not represent typical teacher experiences. Still, the consistency between interview data and online discourse, despite different geographic contexts and school types, suggests these patterns reflect meaningful dynamics in K-12 AI adoption.

¹ College Board Research, “U.S. High School Students’ Use of Generative Artificial Intelligence,” June 2025. The study found 84% of high school students use GenAI tools for schoolwork as of May 2025, with 69% specifically using ChatGPT.

² Inside Higher Ed Student Voice Survey, August 2025.

About the Author: Akanksha has over a decade of experience in education technology, including classroom teaching through Teach for America, school leadership roles at Charter Schools, curriculum design and operations leadership at Juni Learning, and AI strategy work at Multiverse, where she led efforts to scale learning delivery models from 1:30 to 1:300+ ratios while maintaining quality outcomes.

